The Recovery Act appropriated $4 billion for the Clean Water SRF program. This funding represents a significant increase compared with federal funds awarded as annual appropriations to the SRF program in recent years. From fiscal years 2000 through 2009, annual appropriations averaged about $1.1 billion for the Clean Water SRF program. Established in 1987, EPA’s Clean Water SRF program provides states and local communities with independent and permanent sources of subsidized financial assistance, such as low- or no-interest loans for projects that protect or improve water quality and that are needed to comply with federal water quality regulations. In addition to providing increased funds, the Recovery Act included some new requirements for the SRF programs. For example, states were required to have all Recovery Act funds awarded to projects under contract within 1 year of enactment—that is, by February 17, 2010—and EPA was directed to reallocate any funds not under contract by that date. In addition, under the Recovery Act, states were to give priority to projects that were ready to proceed to construction within 12 months of enactment. States were also required to use at least 20 percent of funds as a “green reserve” to provide assistance for green infrastructure projects, water or energy efficiency improvements, or other environmentally innovative activities. Further, states were required to use at least 50 percent of Recovery Act funds to provide assistance in the form of, for example, principal forgiveness or grants. These types of assistance are referred to as additional subsidization and are more generous than the low- or no-interest loans that the Clean Water SRF programs generally provide. The 14 states we reviewed for the Clean Water SRF program met all Recovery Act requirements specific to the Clean Water SRF. 
Specifically, the states we reviewed had all projects under contract by the 1-year deadline and also took steps to give priority to projects that were ready to proceed to construction within 12 months of enactment of the Recovery Act. Eighty-seven percent of Clean Water SRF projects were under construction within 12 months of enactment. In addition, the 14 Clean Water SRFs we reviewed exceeded the 20 percent green reserve requirement, using 29 percent of Recovery Act SRF funds in these states to provide assistance for projects that met EPA criteria for the green reserve. These states also met or exceeded the 50 percent additional subsidization requirement; overall, the 14 states distributed a total of 79 percent of Recovery Act Clean Water SRF funds as additional subsidization. SRF officials in most of the states we reviewed said that they faced challenges in meeting Recovery Act requirements, especially the 1-year contracting deadline. Under the base program, it can take several years from the time funds are awarded until a loan agreement is signed, according to EPA officials. Some SRF officials told us that the compressed time frame imposed by the Recovery Act posed challenges and that their workloads increased significantly as a result of the 1-year deadline. Among the factors affecting workload were the following:

Reviewing applications for Recovery Act funds was burdensome. Officials in some states said that the number of applications increased significantly, in some cases more than doubling compared with prior years, and that reviewing these applications was a challenge. For example, New Jersey received twice as many applications as in past years, according to SRF officials in that state.

Explaining new Recovery Act requirements was time-consuming. 
Because projects that receive any Recovery Act funds must comply with Buy American requirements and Davis-Bacon wage requirements, state SRF officials had to take additional steps to ensure that both applicants for Recovery Act funds and those awarded Recovery Act funds understood these requirements.

Applicants and subrecipients required additional support. Many states took steps to target Recovery Act funds to new recipients, including nontraditional recipients of Clean Water SRF funds, such as disadvantaged communities. According to SRF officials in some states, new applicants and subrecipients required additional support in complying with SRF program and Recovery Act requirements. In the states we reviewed, nearly half of Clean Water SRF subrecipients had not previously received assistance through that program.

Project costs were difficult to predict. Officials in some states told us that actual costs were lower than estimated for many projects awarded Recovery Act funds and, as a result, some states had to scramble to ensure that all Recovery Act funds were under contract by the 1-year deadline. For example, in January 2010, officials from Florida’s SRF programs told us that a few contracts for Recovery Act-funded projects in the state had come in below their original project cost estimates, and that this was likely to be the program staff’s largest concern as the deadline approached. However, lower estimates also allowed some states to undertake additional projects that they would otherwise have been unable to fund with the Recovery Act funding.

States used a variety of techniques to address these workload concerns and meet the 1-year contracting deadline, according to state SRF officials with whom we spoke. Some states hired additional staff to help administer the SRF programs, although SRF officials in other states told us that they were unable to do so because of resource constraints. 
For example, New Jersey hired contractors to help administer the state’s base Clean Water SRF funds, allowing experienced staff to focus on meeting Recovery Act requirements, according to SRF officials in that state. Moreover, some states hired contractors to provide assistance to both applicants and subrecipients. For example, California hired contractors—including the Rural Community Assistance Corporation—to help communities apply for Recovery Act funds. Furthermore, states took steps to ensure that they would have all Recovery Act funds under contract even if projects dropped out because of Recovery Act requirements or time frames. For example, most of the states we reviewed awarded a combination of Recovery Act and base funds to projects to allow for more flexibility in shifting Recovery Act funds among projects. States also used a variety of techniques to ensure that they would meet the green reserve requirement. For example, some of the states we reviewed conducted outreach to communities and nonprofit organizations to solicit applications for green projects. Moreover, to make green projects more attractive to communities, some states offered additional subsidization to all green projects or relied on a small number of high-cost green projects to meet the requirement. For example, Mississippi officials told us that the state funded three large energy efficiency projects that helped the state’s Clean Water SRF program meet the green reserve requirement. The 14 states we reviewed distributed nearly $2 billion in Recovery Act funds among 890 water projects through their Clean Water SRF program. These states took a variety of approaches to distributing funds. For example, four states distributed at least 95 percent of Recovery Act funds as additional subsidization, while three other states distributed only 50 percent as additional subsidization, the minimum required under the Recovery Act. 
Overall, these 14 states distributed approximately 79 percent of Clean Water SRF Recovery Act funds as additional subsidization, with most of the remaining funds provided as low- or no-interest loans that will recycle back into the programs as subrecipients repay their loans. As the funds are repaid, they can then be used to provide assistance to SRF recipients in the future. Furthermore, states varied in the number of projects they chose to fund. For example, Ohio distributed approximately $221 million among 274 Clean Water SRF projects, while Texas distributed more than $172 million among 21 projects. Some states funded more projects than originally anticipated because other projects were less costly than expected, according to officials. For example, Texas was able to provide funds for two additional clean water projects because costs—especially material costs—were lower than anticipated for other projects. States we reviewed used at least 40 percent of Recovery Act Clean Water SRF project funds ($787 million) to provide assistance for projects that serve disadvantaged communities. Most of the states we reviewed took steps to target some or all Recovery Act funds to these low-income communities, generally by considering a community’s median household income when selecting projects and determining which projects would receive additional subsidization in the form of principal forgiveness, negative interest loans, or grants. According to state officials from nine Clean Water SRF programs, 50 percent of all projects funded by those states’ SRF programs serve disadvantaged communities, and all of these disadvantaged communities were provided with additional subsidization. SRF officials in some states told us that Recovery Act funds—especially in the form of additional subsidization—have provided significant benefits to disadvantaged communities in their states. 
For example, according to officials from California’s Clean Water SRF program, that state used funds to provide assistance for 25 wastewater projects that serve disadvantaged communities, and approximately half of these projects would not have gone forward as quickly or at all without additional subsidization. Officials from the City of Fresno confirmed that one of these projects—which will replace septic systems with connections to the city’s sewer systems in two disadvantaged communities—would not have gone forward without additional subsidization. Local officials told us that this project will decrease the amount of nitrates in the region’s groundwater, which is the source of the city’s drinking water. The Clean Water SRF programs from the 14 states we reviewed used Recovery Act funds to provide assistance for 890 projects that will meet a variety of local needs. Figure 1 shows how the 14 states distributed Recovery Act funds across various clean water categories. In the states we reviewed, the Clean Water SRF programs used more than 70 percent of Recovery Act project funds to provide assistance for projects in the following categories:

Secondary treatment and advanced treatment. States we reviewed used nearly half of all Recovery Act project funds to support wastewater infrastructure intended to meet or exceed EPA’s secondary treatment standards for wastewater treatment facilities. Projects intended to achieve compliance with these standards are referred to as secondary treatment projects, while projects intended to exceed compliance with these standards are referred to as advanced treatment projects. For example, Massachusetts’ Clean Water SRF program awarded over $2 million in Recovery Act funds to provide upgrades intended to help the City of Leominster’s secondary wastewater treatment facility achieve compliance with EPA’s discharge limits for phosphorus.

Sanitary sewer overflow and combined sewer overflow. 
States we reviewed used about 25 percent of Recovery Act project funds to support efforts to prevent or mitigate discharges of untreated wastewater into nearby water bodies. Such sewer overflows, which can occur as a result of inclement weather, can pose significant public health and pollution problems, according to EPA. For example, Pennsylvania used 56 percent of project funds to address sewer overflows from municipal sanitary sewer systems and combined sewer systems. In another example, Iowa’s Clean Water SRF program used Recovery Act funds to help the City of Garwin implement sanitary sewer improvements. Officials from that city told us that during heavy rains, untreated water has bypassed the city’s pump station and backed up into basements of homes and businesses, and that the city expects all backups to be eliminated as a result of planned improvements.

In addition to funding conventional wastewater treatment projects, 9 of the 14 Clean Water SRF programs we reviewed used Recovery Act funds to provide assistance for projects intended to address nonpoint source pollution—projects intended to protect or improve water quality by, for example, controlling runoff from city streets and agricultural areas. The Clean Water SRF programs we reviewed used 8 percent of project funds to support these nonpoint source projects, but nonpoint source projects account for 20 percent (179 out of 890) of all projects. A large number of these projects—131 out of 179—were initiated by California or Ohio. For example, California used Recovery Act funds to provide assistance for the Tomales Bay Wetland Restoration and Monitoring Program, which restores wetlands that had been converted into a dairy farm. Figure 2 shows the number of projects that fall into various clean water categories. Of the 890 projects awarded Recovery Act funds by the Clean Water SRF programs in the states we reviewed, more than one-third (312) address the green reserve requirement. 
Of these green projects, 289 (93 percent) were awarded additional subsidization. Figure 3 shows the number of projects that fall into each of the four green reserve categories included in the Recovery Act. Many of these projects are intended to improve energy efficiency and are expected to yield long-term cost savings for some communities. For example, the Massachusetts Water Resources Authority is using Recovery Act funds provided through that state’s Clean Water SRF program to help construct a wind turbine at the DeLauri Pump Station, and the Authority estimates that the turbine will allow it to avoid more than $350,000 in electricity purchases each year. Furthermore, some projects provide green alternatives for infrastructure improvements. For example, New York’s Clean Water SRF program provided Recovery Act funds to help construct a park designed to naturally filter stormwater runoff and reduce the amount of stormwater that enters New York City’s sewers. More than half of the city’s sewers are combined sewers, and during heavy rains, sewage sometimes discharges into Paerdegat Basin, which feeds into Jamaica Bay. EPA has modified its existing oversight of state SRF programs by planning additional performance reviews beyond the annual reviews it is already conducting, but these reviews do not include an examination of state subrecipient monitoring procedures. Specifically, EPA is conducting midyear and end-of-year Recovery Act reviews in fiscal year 2010 to assess how each state is meeting Recovery Act requirements. As part of these reviews, EPA has modified its annual review checklist to incorporate elements that address the Recovery Act requirements. Further, EPA officials will review four project files in each state for compliance with Recovery Act requirements and four federal disbursements to the state to help ensure erroneous payments are not occurring. 
According to EPA officials, through these added reviews, EPA is providing additional scrutiny over how states are using the Recovery Act funds and meeting Recovery Act requirements as compared with base program funds. As of May 14, 2010, EPA had completed field work for its midyear Recovery Act reviews in 13 of the states we reviewed and had completed final reports for 3 of these states (Iowa, Ohio, and Pennsylvania). EPA planned to begin field work in the final state at the end of May 2010. Although the frequency of reviews has increased, these reviews do not examine state subrecipient monitoring procedures. In 2008, the EPA Office of Inspector General (OIG) examined state SRF programs’ compliance with subrecipient monitoring requirements of the Single Audit Act and found that states complied with the subrecipient monitoring requirements but that EPA’s annual review process did not address state subrecipient monitoring procedures. The OIG suggested that EPA include a review of how states monitor borrowers as part of its annual review procedures. EPA officials told us that they agreed with the idea to include a review of subrecipient monitoring procedures as part of the annual review but have not had time to implement this suggestion because EPA’s SRF program officials have focused most of their attention on the Recovery Act since the OIG published its report. EPA officials also told us that they believe the reviews of project files and federal disbursements could identify weaknesses in financial controls, such as weaknesses in subrecipient monitoring procedures. These reviews occur as part of the Recovery Act review and aim to assess a project’s compliance with Recovery Act requirements and help ensure that no erroneous payments are occurring. In terms of state oversight of subrecipients, EPA has not established new subrecipient monitoring requirements for Recovery Act-funded projects, according to EPA officials. 
Under the base Clean Water SRF program, EPA gives states a high degree of flexibility to operate their SRF programs based on each state’s unique needs and circumstances in accordance with federal and state laws and requirements. According to EPA officials, although EPA has established minimum requirements for subrecipient monitoring, such as requiring states to review reimbursement requests, states are allowed to determine their own subrecipient monitoring procedures, including the frequency of project site inspections. While EPA has not deviated from this approach with regard to monitoring Recovery Act-funded projects, it has provided states with voluntary tools and guidance to help with monitoring efforts. For example, EPA provided states with an optional inspection checklist to help states evaluate a subrecipient’s compliance with Recovery Act requirements, such as the Buy American and job reporting requirements. EPA has also provided training for states on the Recovery Act requirements. For example, as of May 14, 2010, EPA has made available 11 online training sessions (i.e., webcasts) for state officials in all states to help them understand the Recovery Act requirements. EPA has also provided four workshops with on-site training on its inspection checklist for state officials in California, Louisiana, New Mexico, and Puerto Rico. Although EPA has not required that states change their subrecipient oversight approach, many states have expanded their existing monitoring procedures in a variety of ways. 
However, the oversight procedures may not be sufficient given that (1) federal funds awarded to each state under the Recovery Act have increased as compared with average annually awarded amounts; (2) all Recovery Act projects had to be ready to proceed to construction more quickly than projects funded with base SRF funds; and (3) EPA and states had little previous experience with some of the Recovery Act’s new requirements, such as Buy American provisions, according to EPA officials. The following are ways in which oversight procedures may not be sufficient:

Review procedures for job data. According to OMB guidance on Recovery Act reporting, states should establish internal controls to ensure data quality, completeness, accuracy, and timely reporting of all amounts funded by the Recovery Act. We found that most states we reviewed had not developed review procedures to verify the accuracy of job figures reported by subrecipients using supporting documentation, such as certified payroll records. As a result, states may be unable to verify the accuracy of these figures. For example, Mississippi SRF officials told us that they do not have the resources to validate the job counts reported by comparing them against certified payroll records. In addition, during interviews with some subrecipients, we found inconsistencies among subrecipients on the types of hours that should be included and the extent to which they verified job data submitted to them by contractors. For example, in New Jersey one subrecipient told us it included hours worked by the project engineer in the job counts, while another subrecipient did not.

Review procedures for loan disbursements. According to EPA officials, the agency requires states to verify that all loan payments and construction reimbursements are for eligible program costs. 
In addition, according to EPA guidance, states often rely on technical staff who are directly involved in construction inspections to help verify disbursement requests because these staff have additional information, such as the status of construction, that helps them accurately approve these requests. However, we found that in two states we reviewed, technical or engineering staff did not review documentation supporting reimbursement requests from the subrecipient to ensure that they were for legitimate project costs. For example, officials in Pennsylvania told us that technical staff from the state’s Department of Environmental Protection—which provides technical assistance to SRF subrecipients—do not verify monthly payments to subrecipients that are made by the Pennsylvania Infrastructure Investment Authority, the state agency with funds management responsibility for the state’s SRF programs. Instead, Department of Environmental Protection staff approve project cost estimates prior to loan settlement, when they review bid proposals submitted by contractors, and Pennsylvania Infrastructure Investment Authority officials verify monthly payments against the approved cost estimates.

Inspection procedures. According to EPA officials, the agency requires that SRF programs have procedures to help ensure subrecipients are using Recovery Act SRF funding for eligible purposes. While EPA has not established required procedures for state project inspections, it has provided states its optional Recovery Act inspection checklist to help them evaluate a subrecipient’s compliance with Recovery Act requirements, such as the Buy American and job reporting requirements. Some states we reviewed have adopted EPA’s Recovery Act inspection checklist procedures and modified their procedures accordingly. For example, California and Arizona plan to implement all elements of EPA’s checklist for conducting inspections of Recovery Act projects, according to officials in these states. 
Other states have modified their existing inspection procedures to account for the new Recovery Act requirements. For example, officials from Georgia said they added visual examination of purchased materials and file review steps to their monthly inspections to verify that subrecipients are complying with the Buy American provision. In contrast, the Pennsylvania Department of Environmental Protection’s inspection procedures do not include a review of Recovery Act requirements. For example, we found that inspection reports for three Recovery Act projects we visited in Pennsylvania did not include inspection elements covering Davis-Bacon or Buy American provisions. Instead, the Pennsylvania Infrastructure Investment Authority requires subrecipients to self-certify their compliance with these Recovery Act requirements when requesting payment from the state’s funds disbursement system. Registered professional engineers who work for the subrecipients must sign off on these self-certifications, and subrecipients could face loss of funds if a certification is subsequently found to be false, according to the Executive Director of the Authority.

Frequency and timing of inspections. According to EPA officials, the agency does not have requirements on how often a state SRF program must complete project inspections, and the frequency and complexity of inspections vary by state for the base SRF program. Officials from several states told us they have increased the frequency of project site inspections. For example, Colorado SRF officials said the state is conducting quarterly site inspections of each of the state’s Recovery Act-funded SRF projects, whereas under the state’s base SRF programs, Colorado inspects project sites during construction only when the state has concerns. However, we found that two states either did not conduct site inspections of some projects that are complete or had not yet inspected projects that were near completion. 
For example, as of April 19, 2010, Ohio EPA had inspected about 41 percent of its Clean Water SRF projects, but our review of Ohio’s inspection records showed that at least 6 projects are complete and have not been inspected, and a number of others are nearing completion and have not been inspected.

Monitoring compliance with Recovery Act requirements. During interviews with SRF subrecipients, we found issues in several states that raise questions about subrecipients’ compliance with Recovery Act requirements. For example, we interviewed one subrecipient in Ohio whose documentation of Buy American compliance raised questions as to whether all of the manufactured goods used in its project were produced domestically. In particular, the specificity and detail of the documentation provided about one of the products used left questions as to whether it was produced at one of the manufacturer’s nondomestic locations. Further, another subrecipient in Ohio was almost 2 months late in conducting interviews of contractor employees to ensure payment of Davis-Bacon wages.

In summary, EPA and the states met the Recovery Act’s 1-year deadline for having all projects under contract, and almost all Clean Water SRF projects were under construction by that date as well. Furthermore, Recovery Act funds were distributed to many new recipients and supported projects that serve disadvantaged communities. In addition, Recovery Act Clean Water SRF program funds have supported a variety of projects that are expected to provide tangible benefits by improving local water quality. However, as demonstrated in the above examples, the oversight mechanisms used by EPA and the states may not be sufficient to ensure compliance with all Recovery Act requirements. The combination of a large increase in program funding, compressed time frames, and new Recovery Act requirements presents a significant challenge to EPA’s current oversight approach. 
As a result, we recommended that the EPA Administrator work with the states to implement specific oversight procedures to monitor and ensure subrecipients’ compliance with the provisions of the Recovery Act-funded Clean Water and Drinking Water SRF programs. EPA neither agreed nor disagreed with this recommendation. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information regarding this statement, please contact David C. Trimble at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Nancy Crothers, Elizabeth Erdmann, Brian M. Friedman, Gary C. Guggolz, Emily Hanawalt, Carol Kolarik, and Jonathan Kucskar. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) included $4 billion for the Environmental Protection Agency's (EPA) Clean Water State Revolving Fund (SRF). This testimony—based on GAO's report GAO-10-604, issued on May 26, 2010, in response to a mandate under the Recovery Act—addresses (1) state efforts to meet requirements associated with the Recovery Act and SRF program, (2) the uses of Recovery Act funds, and (3) EPA's and states' efforts to oversee the use of these funds. GAO's review of the Clean Water SRF program focused on 14 states and selected localities—known as subrecipients—in each of these states. These 14 states received approximately 50 percent of the total appropriated under the Recovery Act for the Clean Water SRF. GAO obtained data from EPA and the 14 states, including the amounts and types of financial assistance each SRF program provided, which subrecipients were first-time recipients of Clean Water SRF funding, and which projects serve disadvantaged communities. The 14 states we reviewed for the Clean Water SRF program had all projects under contract by the 1-year, February 17, 2010, deadline and also took steps to give priority to projects that were ready to proceed to construction by that same date. Eighty-seven percent of Clean Water SRF projects were under construction within 12 months of enactment of the Recovery Act. In addition, the 14 Clean Water SRFs exceeded the 20 percent green reserve requirement, using 29 percent of SRF funds to provide assistance for projects that met EPA criteria for being "green," such as water or energy efficiency projects; these states also met or exceeded the requirement to use at least 50 percent of Recovery Act funds to provide additional subsidization in the form of, for example, principal forgiveness or grants. 
SRF officials in most of the states we reviewed said that they faced challenges in meeting Recovery Act requirements, including the increased number of applications needing review and the number of new subrecipients requiring additional support in complying with the SRF program and Recovery Act requirements. States used a variety of techniques to address these concerns and meet the 1-year deadline, such as hiring additional staff to help administer the SRF program. The 14 states we reviewed distributed nearly $2 billion in Recovery Act funds among 890 water projects through their Clean Water SRF program. Overall, these 14 states distributed about 79 percent of their funds as additional subsidization, with most of the remaining funds provided as low- or zero-interest loans that will recycle back into the programs as subrecipients repay their loans. In addition, states we reviewed used at least 40 percent of Clean Water SRF Recovery Act project funds ($787 million) to provide assistance for projects that serve disadvantaged communities, and almost all of this funding was provided in the form of additional subsidization. Almost half of the Clean Water SRF subrecipients had never previously received assistance through that program. Of the 890 projects awarded Recovery Act Clean Water SRF program funds in these states, more than one-third are for green projects, and almost all of these (93 percent) were awarded additional subsidization. EPA has modified its existing oversight of state SRF programs by planning additional performance reviews beyond the annual reviews it already conducts, but these reviews do not include an examination of state subrecipient monitoring procedures. According to EPA officials, EPA has not established new subrecipient monitoring requirements for Recovery Act-funded projects and has given states a high degree of flexibility to operate their SRF programs based on each state's unique needs. 
Although many states have expanded their existing monitoring procedures, the oversight procedures in some states may not be sufficient given that (1) federal funds awarded to each state under the Recovery Act have increased as compared with average annual awards; (2) all Recovery Act projects had to be under contract within 1 year; and (3) EPA and states had little experience with some new Recovery Act requirements, such as the Buy American requirements. For example, some projects have been completed before any site inspection has occurred.
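As a rough illustration of the revolving mechanic described above (loaned funds return to the program as subrecipients repay principal, while additionally subsidized funds such as principal forgiveness and grants do not), the following sketch uses hypothetical, simplified figures; only the rounded totals come from this statement:

```python
# Simplified, hypothetical sketch of a revolving fund: the loaned share of
# Recovery Act funds is eventually repaid and becomes available for future
# SRF assistance; the additionally subsidized share is spent and does not return.

recovery_act_funds = 2_000_000_000        # ~$2 billion across the 14 states
subsidized_share = 0.79                   # ~79% distributed as additional subsidization

subsidized = subsidized_share * recovery_act_funds
loaned = recovery_act_funds - subsidized  # remainder as low- or no-interest loans

def future_capacity(loaned_amount, repaid_fraction):
    # Only repaid loan principal revolves back into the fund.
    return loaned_amount * repaid_fraction

# Once loans are fully repaid, the loaned share can fund new projects;
# the subsidized share cannot.
available_later = future_capacity(loaned, 1.0)
```

This ignores interest, fees, and repayment schedules; it is meant only to show why the 21 percent distributed as loans keeps providing assistance after the Recovery Act funds are spent.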
Credit unions are nonprofit financial cooperatives organized to provide their members with low-cost financial services. According to NCUA, as of 1996, federally insured credit union assets totaled $326 billion. About one in four Americans belongs to a credit union, and credit unions accounted for about 2 percent of the total financial services in the United States. NCUA supervises and insures more than 7,200 federally chartered credit unions and insures member deposits in an additional 4,200 state-chartered credit unions through the National Credit Union Share Insurance Fund. As part of its goal of maintaining the safety and soundness of credit unions, NCUA is responsible for ensuring credit unions are addressing the Year 2000 problem. The Year 2000 problem is rooted in the way dates are recorded and computed in automated information systems. For the past several decades, systems have typically used two digits to represent the year, such as “97” representing 1997, in order to conserve electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from 1900, and 2001 from 1901. As a result of this ambiguity, system or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results. According to NCUA, most credit unions rely on computers to process and update records and to perform a variety of other functions. As such, the Year 2000 problem poses a serious risk for the industry. For example, it could cause numerous errors when calculations requiring the use of dates are performed, such as calculating interest, calculating truth-in-lending or truth-in-savings disclosures, and determining amortization schedules. Moreover, automated teller machines may incorrectly treat all bank cards as expired. 
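The kind of date miscalculation described above can be illustrated with a minimal sketch (a hypothetical example, not drawn from any actual credit union system):

```python
# Hypothetical illustration of the two-digit year problem in a date calculation.
def elapsed_years_two_digit(open_yy, current_yy):
    # Legacy-style arithmetic on two-digit years ("97" for 1997, "00" for 2000).
    return current_yy - open_yy

def elapsed_years_four_digit(open_year, current_year):
    return current_year - open_year

legacy = elapsed_years_two_digit(97, 0)          # 0 - 97 = -97 "years"
correct = elapsed_years_four_digit(1997, 2000)   # 3 years

# A simple-interest calculation built on the legacy value goes negative:
principal, rate = 1000.0, 0.05
legacy_interest = principal * rate * legacy      # -4850.0 instead of...
correct_interest = principal * rate * correct    # ...the correct 150.0
```

The negative elapsed-time value is exactly the kind of incorrect result that would corrupt interest, disclosure, and amortization calculations.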
In addition, errors caused by Year 2000 miscalculations may expose institutions and data centers to financial liability and risk of damage to customer confidence. Other systems important to the day-to-day business of credit unions may be affected as well. For example, telephone systems could shut down, as could vaults, security and alarm systems, elevators, and fax machines. In addressing the Year 2000 problem, credit unions must also consider the computer systems that interface with, or connect to, their own systems. These systems may belong to payment system partners, such as wire transfer systems, automated clearing houses, check clearing providers, credit card merchant and issuing systems, automated teller machine networks, electronic data interchange systems, and electronic benefits transfer systems. Because these systems are also vulnerable to the Year 2000 problem, they can introduce or propagate errors into credit unions’ systems. Accordingly, credit unions must develop comprehensive solutions to this problem and prevent unintended consequences from affecting their systems and the systems of others. To address these Year 2000 challenges, GAO issued its Year 2000 Assessment Guide to help federal agencies plan, manage, and evaluate their efforts. The Office of Management and Budget (OMB), which is responsible for developing the Year 2000 strategy for federal agencies, also issued similar guidance. Both require a structured approach to planning and managing five delineated phases of an effective Year 2000 program: (1) raising awareness of the problem, (2) assessing the complexity and impact the problem can have on systems, (3) renovating, or correcting, systems, (4) validating, or testing, corrections, and (5) implementing corrected systems. 
GAO has also identified other dimensions to solving the Year 2000 problem, such as identifying interfaces with outside organizations and their systems and establishing agreements with these organizations specifying how data will be exchanged in the year 2000 and beyond. In addition, GAO and OMB have established a timeline for completing each of the five phases and believe agencies should have completed assessment phase activities last summer and should be well into renovation with the goal of completing this phase by mid to late 1998. Our work at other federal agencies indicates that because the cost of systems failures can be very high, contingency plans must be prepared so that core business functions will continue to be performed even if systems have not been made Year 2000 compliant. NCUA has developed a three-pronged approach for ensuring that credit unions are aggressively addressing the Year 2000 problem, which encompasses (1) incorporating the Year 2000 issue into its examination and supervision program, (2) disseminating information about the problem to credit unions, and (3) assessing Year 2000 compliance on the part of credit union data processing vendors. The first aspect of NCUA’s strategy, the examination and supervision program, involves assessing credit union Year 2000 efforts through regular annual examinations at the 7,200 federally chartered credit unions and 30 to 40 percent of the 4,200 federally insured, state chartered credit unions for which NCUA conducts an insurance review. These examinations seek to identify credit unions that are in danger of not renovating their systems on time and to reach “formal agreements” that specify corrective measures. In conducting these reviews, examiners are to follow NCUA guidelines, which provide step-by-step procedures for identifying problem areas. Once a formal agreement is reached, the examiner is expected to monitor the credit union’s implementation of the agreed-upon corrective measures. 
Also as part of its examination effort, NCUA has contracted with a consulting firm to train selected examiners in Year 2000 efforts. Through this training, NCUA expects to have one in-house Year 2000 specialist available as a resource for every eight examiners. In addition, NCUA’s board recently authorized the hiring of an electronic data processing (EDP) auditor to provide more in-depth technical assistance and education on Year 2000 problems. Another part of NCUA’s examination and supervision strategy includes working with state regulators to ensure that federally insured, state chartered credit unions are also Year 2000 compliant. Officials from NCUA and the National Association of State Credit Union Supervisors told us that all but two state regulators are following the Year 2000 examination strategy established by NCUA; the other two plan to perform additional steps beyond those in NCUA’s strategy. The second aspect of NCUA’s strategy—information dissemination—seeks to heighten credit union awareness of the Year 2000 problem. In August 1996 and June 1997 letters to federally insured credit unions, NCUA formally alerted credit unions to the potential dangers of the Year 2000 problem, identified the specific impacts the problem could have on the industry, provided detailed explanations of the problem, and identified steps needed to correct it. It also related its plans to include Year 2000 evaluations in regular examinations and provided credit unions with copies of its examination guidance. In addition, NCUA has appointed a Year 2000 executive responsible for achieving Year 2000 compliance industrywide and assigned Year 2000 compliance officers to its central office and six regional offices. These staff will serve as Year 2000 focal points to coordinate efforts across the agency. 
Finally, NCUA is working with credit union trade groups, such as the Credit Union National Association, in raising awareness of Year 2000 issues. The third component of NCUA’s program—vendor compliance—targets organizations that provide electronic data processing services to credit unions. According to NCUA, approximately 40 vendors provide data processing services to 76 percent of all federally insured credit unions, which account for 79 percent of federally insured credit union assets. Consequently, it is vital that these vendors correct their own systems and help ensure that information can be easily transferred after the Year 2000 deadline. NCUA has begun identifying and contacting major EDP vendors, and it plans to assess their efforts through questionnaires. Specifically, in May 1997 and again in August 1997, NCUA mailed a questionnaire to the 87 vendors, including the 40 vendors that support the bulk of credit unions, requesting information on Year 2000 readiness and, as of September 1997, had received 29 responses. While NCUA has initiated actions to build the Year 2000 issue into examinations and to raise awareness about the issue among credit unions and their vendors, our work to date has identified four issues that must be addressed to provide greater assurance that NCUA efforts will be successful. First and foremost of our concerns is that NCUA still does not have a complete picture of where credit unions and their vendors stand in resolving the Year 2000 problem, and current efforts to determine credit union compliance are behind the schedule established by OMB and GAO. 
To collect information from the credit unions on their Year 2000 status, NCUA examiners used a high-level questionnaire that inquired whether (1) credit union systems were capable and ready to handle Year 2000 processing, (2) plans were in place to resolve the problem, (3) enough funds were budgeted to correct systems, and (4) responsibility and reporting mechanisms were appropriately established to support the Year 2000 effort. NCUA issued a separate high-level questionnaire to credit union vendors. However, as of the time of our work, NCUA had not yet queried 20 percent of the credit unions and had only received 29 of the 87 vendor responses. In addition, of the credit union and vendor responses received, NCUA has not yet analyzed the information to determine which credit unions and vendors are at high risk of not correcting their systems on time. This problem is compounded by the fact that the NCUA questionnaires did not inquire about the status of efforts in completing each important phase of correction: (1) raising awareness of the problem, (2) assessing the complexity and impact the problem can have on systems, (3) renovating, or correcting, systems, (4) validating, or testing, corrections, and (5) implementing corrected systems. The questionnaires also did not address system interface issues. For example, they did not inquire about (1) identifying interfaces with outside organizations and their systems, such as payment, check clearing, credit card, and benefit transfer systems, and (2) establishing agreements with these organizations specifying how data will be exchanged in the year 2000 and beyond. As a result, even when NCUA assesses the results, it still will not have a complete understanding of how far along the industry is in addressing the problem. In addition, NCUA examinations are conducted only on an annual basis. This means that each credit union will be examined only two more times between the end of 1997 and the year 2000. 
Further, NCUA has not yet established a formal mechanism for credit unions to submit interim progress reports to provide an up-to-date picture of individual correction efforts between examinations. NCUA officials told us that examiners perform off-site supervision in between exams by tracking performance via credit union financial reports and by contacting credit union officials should a problem arise. However, this may not be enough given the seriousness of the problem and the fact that the Year 2000 deadline is just 2 years away. Further complicating NCUA’s situation is the fact that it is still involved in assessment phase activities. According to OMB and GAO guidance, these activities should have been completed back in the summer. As it stands, NCUA does not plan to complete them until the end of this calendar year. Accordingly, we believe NCUA should accelerate agency efforts to complete the assessment of the state of the industry by no later than November 15, 1997, rather than waiting until the end of the year. NCUA should also collect the necessary information to determine the exact phase of each credit union and vendor in addressing the Year 2000 problem. Because NCUA currently does not have a process in place for interim reporting of this information between examinations, NCUA should require credit unions to report the precise status (phase) of their efforts on at least a quarterly basis. One option would be to use the financial reports, commonly referred to as call reports, that credit unions provide to NCUA quarterly. As part of this report, NCUA should also require credit unions to report on the status of identifying their interfaces to determine whether this issue is being adequately addressed and, if not, require credit unions to implement such agreements as soon as possible. A second concern we have with NCUA’s efforts is that the agency does not yet have a formal contingency plan. 
Our Year 2000 Assessment Guide calls on agencies to initiate realistic contingency plans during the assessment phase for critical systems to ensure the continuity of their core business processes. Contingency planning is important because it identifies alternative activities, which may include manual and contract procedures, to be employed should systems fail to meet the Year 2000 deadline. NCUA guidance directs credit unions to conduct contingency planning, and NCUA officials told us that they have developed numerous contingency options and have discussed among the staff what steps to take should a credit union not be compliant by January 1, 2000. However, officials stated that the precise actions have not been documented in a formal plan. Not having this plan increases the risk of unnecessary problems in an already uncertain situation. Consequently, we recommend that NCUA formally document its contingency plans. A third concern that we have is that credit union auditors may not be addressing the Year 2000 problem as part of their work. NCUA requires each credit union to perform supervisory committee audits. These audits are to determine whether management practices and procedures are sufficient to safeguard members’ assets and whether effective internal controls are in place to guard against error, carelessness, and fraud. They are conducted by the credit union’s supervisory committee staff or by an outside accountant. However, NCUA officials noted that such reviews typically focus on general controls (e.g., ensuring accurate data is entered into the system, securing data from unauthorized use) and would not specifically include controls to prevent malfunctions due to the Year 2000 problem. 
Audits are an integral management control, and expanding their scope to include important and high-risk Year 2000 issues is critical because it would provide credit union management with greater assurance and understanding about where their institution stands in addressing the problem. Accordingly, we are recommending to NCUA that it require credit unions to implement the necessary management controls to ensure that these financial institutions have adequately mitigated the risks associated with the Year 2000 problem. Specifically, NCUA should require credit union auditors to include Year 2000 issues within the scope of their management and internal control work and report serious problems and corrective actions to NCUA immediately. To aid credit union auditors in this effort, NCUA should provide the auditors with the procedures developed by NCUA for its examiners to use in assessing Year 2000 compliance and any other guidance that would be instructive. We also believe NCUA should require credit unions to establish processes whereby credit union management would be responsible for certifying Year 2000 readiness by a deadline well before the millennium. Such a certification process should include credit union compliance testing by an independent third party and should allow sufficient time for NCUA to review the results. Our fourth concern is that NCUA does not have enough staff qualified to conduct examination work in complex technical areas. At present, NCUA is in the process of hiring one EDP auditor to help examine thousands of credit unions. Recognizing this weakness, NCUA is considering hiring up to three EDP auditors. However, these personnel additions may still not suffice given the tremendous workload and the short time frame for getting it done. 
To mitigate this concern, we recommend that before the end of the year, NCUA determine the level of technical capability needed to allow for thorough review of credit unions’ Year 2000 efforts and hire or contract for this capability.
Pursuant to a congressional request, GAO reviewed the National Credit Union Administration's (NCUA) progress in making sure that the automated information systems belonging to the thousands of credit unions it oversees have adequately mitigated the risks associated with the year 2000 date change. GAO noted that: (1) NCUA has taken steps to address the Year 2000 problem; (2) these involve incorporating the Year 2000 issue into its examination and supervision program, disseminating information about the problem, and assessing Year 2000 compliance on the part of data processing vendors; (3) concerns exist that must be resolved if the NCUA is to achieve greater certainty that credit unions will meet their Year 2000 deadline; (4) NCUA still does not have a complete picture of where credit unions and their vendors stand in resolving the Year 2000 problem, and current efforts to determine credit union compliance are behind the schedule established by GAO and the Office of Management and Budget (OMB); (5) while NCUA sent questionnaires to credit unions and data processing vendors about the problem, it has not yet queried 20 percent of credit unions and has only received 29 of 87 vendor responses; (6) of the credit union and vendor responses received, NCUA has not yet analyzed this information to identify high-risk credit unions and vendors; (7) further, the surveys did not specifically ask about the status of corrective efforts and whether interface issues were appropriately being addressed; (8) NCUA has directed credit unions to conduct contingency planning and its staff have discussed what steps they should take should a credit union not be compliant by January 1, 2000; (9) however, the agency still lacks a formal contingency plan; (10) NCUA must take prompt action to ensure that these discussions are formally documented so that it will be well-positioned to handle unforeseen problems; (11) as potentially damaging as the Year 2000 problem is, NCUA has not yet ensured 
that the issue is addressed by credit union auditors; (12) doing so would provide credit union management with a greater assurance and understanding about where their institution stands in addressing the problem; (13) NCUA does not have enough staff qualified to conduct examination work in complex system areas; (14) at present, NCUA is in the process of hiring an electronic data processing (EDP) auditor and is requesting authority to hire 2 more; and (15) these personnel additions may not suffice given the tremendous workload and short time frame for getting it done.
Since the early 2000s, states have been building longitudinal data systems to better address data collection and reporting requirements in federal laws—such as the No Child Left Behind Act of 2001 and the America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act (America COMPETES Act)—and to inform stakeholders about student achievement and school performance. Federal, state, and private entities have provided funding for these systems. For example, in addition to the SLDS and WDQI programs, other recent federal grant programs including Race to the Top and the Race to the Top-Early Learning Challenge may support states’ efforts. The purpose of the SLDS grant program—administered by Education’s Institute for Education Sciences, National Center for Education Statistics—is generally to enable state educational agencies to design, develop, implement, and expand statewide longitudinal data systems to manage, analyze, disaggregate, and use individual student data. From fiscal years 2006 to 2013, Education awarded approximately $613 million in SLDS grants (see table 1). For each grant competition, Education establishes the award period and range of grant amounts to be awarded; SLDS grant periods have ranged from 3 to 5 years, with a maximum award amount of $20 million per grantee. See appendix III for a list of states that received SLDS grants and the amount of their awards. Though the SLDS grant requirements have varied over time, states generally could use SLDS funds to build K-12 longitudinal data systems or to expand these systems to include data from other sectors, such as early education, postsecondary education, or workforce (see table 2). 
The long-term goal of the program is for states to create comprehensive “P20-W”—early learning through workforce—longitudinal data systems that, among other things, will allow states, districts, schools, educators, and other stakeholders to make informed decisions and conduct research to improve student academic achievement and close achievement gaps. Under the WDQI grant program—administered by DOL’s Employment and Training Administration—states are expected to fully develop their workforce longitudinal data systems and then be able to match these data with available education data to analyze education and workforce outcomes. DOL has chosen to award WDQI grants to states that have received an SLDS grant or have a longitudinal data system in place. Among other requirements, all grantees are required to develop or improve workforce longitudinal data systems and enable workforce data to be matched with education data to ultimately follow individuals through school and into the workforce. DOL has provided approximately $36 million in WDQI grants to 33 states since fiscal year 2010 (see table 3). The award period for each grant is 3 years. See appendix III for a list of states that received WDQI grants and the amount of their awards. After analyzing data from DQC’s 2013 survey, we determined that over half of grantees have the ability to match data—reliably connect the same record in two or more databases—for some individuals from early education into the workforce. As shown in figure 1, individuals can take different paths to move from early education into the workforce: (1) via K-12 or (2) via K-12 and postsecondary. Regardless, as the match rate—that is, the percent of unique student records reliably connected between databases—increases, the number of grantees able to match data between sectors decreases. 
For example, 31 of 48 grantees have the ability to track individuals between all sectors from early education to workforce to at least some degree, but only 6 grantees could do so at the highest match rate. Our analysis of the DQC survey data also shows that more grantees match data among the education sectors than between the education and workforce sectors, though—as was the case with matching data from early education to workforce—the number of grantees that match data decreases as the match rate increases (see table 4). For example, 43 grantees reported matching data between the K-12 and early education sectors, and 31 grantees reported matching data between the K-12 and workforce sectors at least to some degree; however, the number of grantees that reported matching data between these same sectors drops to 37 and 9, respectively, at a match rate of 95 percent or more. Not all grantees are matching data between all sectors, which may partially be the result of receiving grants with different grant requirements. For example, all 20 grantees that received a fiscal year 2009 SLDS ARRA grant were required to have longitudinal data systems that include individual student-level data from preschool through postsecondary education and into the workforce (see table 2). However, fiscal year 2012 grantees could choose from among three different grant priorities, so some grantees may be focused on building a K-12 longitudinal data system while others may be using their grant funds to link existing K-12 data to other sectors. In addition, grantees may have been in different stages of developing their longitudinal data systems prior to receiving a grant, which may help explain why some grantees are able to match data between more sectors than others. 
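The match-rate concept discussed above—the percent of unique records in one database that can be reliably connected to records in another—can be illustrated with a minimal sketch (the identifiers below are hypothetical, not drawn from any state system):

```python
# Hypothetical sketch: match rate between two sector databases, keyed by a
# unique statewide student identifier.
k12_ids = {"S001", "S002", "S003", "S004", "S005"}   # records in the K-12 database
workforce_ids = {"S002", "S003", "S005", "S009"}     # records in the workforce database

matched = k12_ids & workforce_ids                    # identifiers present in both
match_rate = len(matched) / len(k12_ids) * 100       # percent of K-12 records matched
# Here 3 of 5 K-12 records match, a 60 percent match rate.
```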
Programs Included in the Data Quality Campaign Survey, 2013
Early education: early intervention, Head Start/Early Head Start, special education, state prekindergarten, subsidized child care
K-12: elementary and secondary education
Postsecondary institutions: less than 2-year public, less than 2-year private not-for-profit, less than 2-year private for-profit, 2-year public, 2-year private not-for-profit, 2-year private for-profit, 4-year and above public, 4-year and above private not-for-profit, 4-year and above private for-profit
Workforce: unemployment insurance wage records, unemployment benefits claim data, Workforce Investment Act of 1998 (WIA) adult or dislocated worker program, WIA youth program, adult basic and secondary education, Wagner-Peyser Act employment services, Temporary Assistance for Needy Families (TANF)
Of those grantees that match data, we found that few generally do so for all of the possible programs between particular sectors (see sidebar), based on our analysis of DQC survey data (see table 5). For example, only 6 of 31 grantees reported that they were able to match data on all seven programs between the K-12 and workforce sectors, which include unemployment insurance wage records, unemployment benefit claims data, Workforce Investment Act of 1998 (WIA) adult or dislocated worker program, WIA youth program, adult basic and secondary education, Wagner-Peyser Act employment services, and Temporary Assistance for Needy Families (TANF). We also analyzed DQC’s data to determine which programs are most commonly matched by grantees between particular sectors (see fig. 2). See appendix IV for a list of the specific programs matched by each grantee. Most grantees that match data also share data between sectors; that is, they exchange at least one type of data (e.g., demographic, enrollment, program participation) between two databases in at least one direction, based on our analysis of DQC data. 
However, in general, few grantees share all possible types of data (see sidebar). For example, only 3 of 36 states that match data between the postsecondary and workforce sectors reported sharing all 10 types of data asked about by DQC, which include information on postsecondary degree completion, earnings and wages, and industry of employment, among others (see table 6). Officials in all five grantee states we spoke with said matching K-12 education and workforce data is challenging without using a Social Security number (SSN) that uniquely identifies an individual and, as a result, some states may have greater difficulty tracking particular groups of students over time. SLDS officials in three states—Ohio, Pennsylvania, and Virginia—said collecting a SSN in K-12 education data is prohibited either by state law or agency policy; in the other two states—South Dakota and Washington—officials said collecting a SSN is optional and whether to do so is determined at the district level. While establishing a unique statewide student identifier is a technical requirement of the SLDS grant program, states can choose the format of the identifier used. Education suggested, in a November 2010 SLDS Technical Brief, that states use a unique identifier distinct from a student’s SSN for privacy reasons; however, Education also stated that states should maintain a student’s SSN as a data element in order to link data between systems. According to a 2010 report from the Social Security Administration’s Office of the Inspector General, 28 states collect a SSN in K-12 education data. Unlike the SLDS program, in its evaluation criteria for WDQI grants, DOL specifies that states use SSNs as a personal identifier, as they are already in use throughout the workforce system. 
To match education and workforce data absent a SSN, state officials said they are developing algorithms to match individual records using other identifiers, which could include an individual’s first name, last name, and date of birth. However, a person’s last name can change, which Pennsylvania SLDS officials said can make it difficult to reliably track individuals over time. Further, Ohio WDQI officials explained that the absence of a SSN makes it particularly difficult to track students who drop out of high school or to track high school graduates who do not move on to the workforce. Similarly, Ohio SLDS officials said tracking students who do not go on to postsecondary education is a challenge because there is no readily available identifier to determine any workforce participation by those individuals. In four of five grantee states we spoke with, officials also cited data governance as a challenge. Data governance is the exercise of decision-making and authority for data-related matters using agreed-upon rules that describe who can take what actions with what information and when, under what circumstances, and using what methods. SLDS grantees are generally required to develop a governance structure involving both state and local stakeholders that includes a common understanding of data ownership, data management, and data confidentiality and access. All WDQI grantees are expected to establish partnerships with relevant workforce agencies and with state education agencies for the purposes of data sharing. Pennsylvania and Ohio officials said it has not been easy to get the various workforce agencies that maintain data on individual workforce programs to share their data, as the agencies often operate independently from one another. As a result, Pennsylvania officials said agencies are territorial about their data, making it difficult to build consensus around developing a longitudinal data system. 
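The kind of matching state officials described—joining records on identifiers other than a SSN—can be sketched as a deterministic composite key built from normalized name and date of birth (purely illustrative; actual state algorithms are more elaborate, and the field names and records below are hypothetical):

```python
# Hypothetical sketch of deterministic record linkage without a SSN.
def link_key(record):
    # Normalize name fields so capitalization and stray spaces do not block a match.
    return (record["first"].strip().lower(),
            record["last"].strip().lower(),
            record["dob"])

education = [
    {"first": "Ana", "last": "Ruiz", "dob": "1990-04-02"},
    {"first": "Ben", "last": "Cole", "dob": "1991-07-19"},
]
workforce = [
    {"first": "ana", "last": "Ruiz", "dob": "1990-04-02", "wage": 41000},
    # A changed last name defeats this key, as Pennsylvania officials noted:
    {"first": "Ben", "last": "Cole-Smith", "dob": "1991-07-19", "wage": 38000},
]

wage_by_key = {link_key(r): r["wage"] for r in workforce}
matches = {r["first"]: wage_by_key.get(link_key(r)) for r in education}
# Ana links despite case differences; Ben does not because his last name changed.
```

The silent failure on a changed last name is exactly the reliability problem the officials described with non-SSN identifiers.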
In Ohio, officials said that each agency has to be approached separately to obtain commitment to share data in a longitudinal system. Similarly, officials in Virginia said collecting data on early education programs has been a challenge as the data are scattered across different agencies. An official from the Early Childhood Data Collaborative explained that it can be easier to facilitate data matching between early education programs under the purview of one agency, such as state prekindergarten and special education, which, like K-12 data, are generally overseen by state educational agencies. Based on our interviews with grantee states, state officials we spoke with said they are in different stages of developing a data governance structure. For example, Pennsylvania WDQI officials said they have not yet established a formal data governance structure. In contrast, Virginia officials have established a data governance structure; officials said they spent 18 months working through the different priorities, cultures, and agendas of the various agencies providing data to the longitudinal data system. State officials in all five grantee states we spoke with also said they have had to manage public concerns about the purpose of data collection or about data privacy. For example, in Ohio, SLDS officials told us there is a lack of understanding about the value of building a longitudinal data system; officials have had to counter misperceptions about what data are being collected in the state’s longitudinal data system, what the data will be used for, and why data need to be connected between the education and workforce sectors. South Dakota officials said they have had to respond to concerns from parents and other education stakeholders about the privacy of longitudinal data. Grantees have tried to provide information to the public about the purposes of the data system and steps taken to safeguard information. 
Forty-six grantees reported using outreach tools to communicate the availability of the data to non-educator stakeholders, according to our analysis of the DQC survey data. These grantees reported using traditional outreach measures, which could include public service announcements, press conferences and news releases, and posting information about the data on the state education agency’s website. For example, four of five grantee states we interviewed have web pages dedicated to their longitudinal data systems. These web pages can include overviews of the systems, answers to frequently asked questions, trainings on how to use or access the data, and examples of research studies that use the data. Further, 44 grantees reported on the DQC survey that they take advantage of in-person opportunities, which could include meetings, conferences, and presentations. Lastly, 35 grantees reported using electronic or social media to promote the data, which could include Facebook, Twitter, blogs, and webinars. When discussing the challenge of managing public concerns about data collection or privacy, officials in three of the five grantee states we spoke with specifically said they have provided information about how they protect individual data. Pennsylvania SLDS officials said they took considerable time to convey to parents and taxpayers the steps they are taking to ensure data privacy. Similarly, Virginia officials from both grant programs said explaining all of the precautions the state is taking with respect to data privacy seems to help in reducing concerns. Ohio officials said the state’s Department of Education has convened a new workgroup to see if there are better ways to address misperceptions about data collection and use. Finally, state officials cited the importance of federal funding to their efforts to build their longitudinal data systems and expressed concerns about sustaining their systems after their grants end. 
Officials we interviewed in all five grantee states said they would not be as far along in developing their longitudinal data systems without the federal funding provided through the SLDS and WDQI programs. For example, officials in Washington said they used their initial SLDS and WDQI grants to focus on building their K-12 data system and workforce systems, respectively. They said the second SLDS grant they received was instrumental in building a P-20W system to connect data between all sectors. Ohio officials said the SLDS grants have provided critical funding for, among other things, further development of the longitudinal data system, technological updates, and access to technical assistance. However, officials in all five grantee states also expressed concerns about sustaining the systems moving forward. For example, officials in Virginia said they have created a legislative committee to focus on sustainability efforts and will need to request additional funding to sustain the system. Officials in Pennsylvania said they are trying to leverage the existing technical infrastructure and use other available resources, but it is difficult to find funding for their workforce data efforts. According to our analysis of the DQC survey data and our interviews with selected states, SLDS and WDQI grantees use longitudinal data to examine education outcomes and to inform policy decisions. All 48 grantees responded that their state educational agency uses the data to analyze aggregate education outcomes (see fig. 3). For example, the three most common types of analyses are related to high school feedback, cohort graduation or completion, and growth (i.e., changes in the achievement of the same students over time). These aggregate data are used to analyze a particular cohort of students and develop information on students’ outcomes over time. They also help guide school-, district-, and state-level improvement efforts. 
For example, officials from three of the five grantee states we interviewed told us they have used the data to assess kindergarten readiness for children who attended state early education programs. Also, 27 grantees responded to the DQC survey that they use the data to analyze college and career readiness. More specifically, to better understand the courses and achievement levels that high school graduates need to be successful in college, Virginia followed students who graduated from high school from 2006 to 2008 and analyzed enrollment and academic achievement patterns for different groups of students. According to agency officials in Virginia, this analysis resulted in changes to the course requirements for graduation. In addition to examining education outcomes, states also use longitudinal data to assess how cohorts of students fare once they are in the workforce. Washington’s Education Research and Data Center, a state center dedicated to analyzing education and workforce issues across the P-20W spectrum, has published several studies examining workforce outcomes for high school and college graduates. For example, one study compared earnings for workers with bachelor’s degrees from Washington state colleges and universities with earnings of workers with only public high school diplomas. In addition to analyzing aggregate student outcomes, grantees also indicated that they analyze individual-level student outcomes. Our analysis of DQC survey data shows that 45 of 48 grantees examine outcomes for individual students (see fig. 4). Student-level data provide teachers and parents with information they can use to improve student achievement. For example, 32 grantees reported that the data are used in diagnostic analysis, which helps teachers identify individual students’ strengths and academic needs. 
Also, 29 grantees responded to the DQC survey that they produce early warning reports, which identify students who are most likely to be at risk of academic failure or dropping out of school. For example, Virginia’s early warning report shows demographic and enrollment information about an individual student; flags for warning indicators such as attendance, GPA, and suspensions; and a record of interventions the school has taken to help the student (see fig. 5). Further, officials in three of the grantee states we interviewed told us that educators have access to student-level analyses. In Pennsylvania, teachers can use an educator dashboard, which includes longitudinal data, to determine the educational needs of their students and adjust their teaching plans. Forty-one of 48 grantees reported to the DQC that they use longitudinal data to inform policy and continuous improvement efforts. Specifically, grantees reported that they use the data to inform school turnaround efforts (34 grantees), evaluate intervention strategies or programs (14 grantees), or identify and reward schools that demonstrate high growth (27 grantees), among other things. Officials in three of five grantee states we spoke with provided more specific examples of how they use or plan to use longitudinal data to inform their efforts. Ohio officials told us they used longitudinal data to study students in remediation to help develop a remediation policy. They also said they have been working on a workforce success measures dashboard to compare outcomes across state programs. For example, the dashboard will allow policy makers to assess how successful the state’s adult basic education program is compared to the state’s vocational education program. Pennsylvania officials told us they will develop a similar dashboard. 
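Early warning indicators like those in Virginia's report can be sketched as simple threshold rules. The thresholds and field names below are hypothetical assumptions for illustration, not Virginia's actual criteria:

```python
# Hypothetical sketch of early warning flags. Thresholds and field names are
# illustrative assumptions, not Virginia's actual criteria.

def warning_flags(student):
    """Return the indicators on which a student is flagged as potentially at risk."""
    flags = []
    if student["attendance_rate"] < 0.90:  # missing more than 10% of school days
        flags.append("attendance")
    if student["gpa"] < 2.0:
        flags.append("gpa")
    if student["suspensions"] >= 1:
        flags.append("suspensions")
    return flags

student = {"attendance_rate": 0.85, "gpa": 2.4, "suspensions": 2}
print(warning_flags(student))  # ['attendance', 'suspensions']
```

A report like the one described above would pair these flags with demographic and enrollment information and a record of interventions, so that educators see both the risk indicators and what the school has already tried.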
Washington state officials told us that longitudinal data helped address a concern in the state legislature about whether math and science teachers were leaving to work in the private sector. Researchers identified common teacher and school district characteristics associated with teachers who left for employment in other fields and found that math and science teachers did not leave the field at a higher rate than other teachers. Officials told us that this analysis prompted the state legislature to focus its attention on improving the recruitment of math and science teachers rather than improving retention. While many grantees reported on the DQC survey that they use longitudinal data to analyze outcomes for students and workers and to make policy decisions, officials from all five grantee states we interviewed told us that these analyses are limited because they are still developing their longitudinal data systems. In addition, only three of these states—Ohio, Virginia, and Washington—are conducting education-to-workforce analyses. Officials in Pennsylvania and South Dakota said they plan to do this type of analysis, but only after they finish putting all the education and workforce data into their systems and matching these data. Data from the 2013 DQC survey show that 39 SLDS or WDQI grantees have developed research agendas articulating and prioritizing research or policy questions that can be answered with longitudinal data. These research agendas were developed in partnership with higher education institutions, independent researchers, or others. Of the five grantee states we interviewed, only Virginia and Ohio have fully developed their research agendas. Pennsylvania, South Dakota, and Washington officials told us they are in the process of doing so. State officials shared two approaches for creating these agendas. Under the first approach, stakeholders from various state agencies comprise a committee that identifies research questions. 
Virginia took this approach and drafted a list of “burning questions” to answer using longitudinal data. Officials in Virginia explained that they purposefully kept the agenda broad so that the questions will remain relevant over the long term. Washington’s Education Research and Data Center has similarly developed a list of critical questions it would like to answer using longitudinal data. Under the second approach, state agencies use information requests and stakeholder feedback on sample reports to shape the research agenda. For example, officials from the South Dakota Department of Education told us they have solicited feedback after training districts on the data and reviewed requests from the governor’s office and state legislators. They also told us that they are following the number of hits for individual reports on the state’s Department of Education’s electronic portal. Forty-three of 48 grantees reported that they have a process by which researchers who are not employees of the state can propose their own studies for approval, according to the 2013 DQC survey data. Four of the grantee states we interviewed have established a formal request process for researchers who would like to access longitudinal data and the fifth state is reviewing its protocols and expects to develop a formal application process. Officials in two grantee states told us that the request process is intended to streamline access to the data and make it easier for researchers to seek approval for data requests. In addition, officials in Ohio told us that when researchers apply for access to Ohio’s data, they must include information in their application about how the study will meet the state’s research priorities. Since fiscal year 2006, the federal government has made a significant investment—over $640 million in SLDS and WDQI grant funds—to help states build P-20W longitudinal data systems that track individuals from early education and into the workforce. 
The different grant requirements for linking data between sectors may have contributed to states being in different stages of developing their longitudinal data systems. That is, some grantees are just building their K-12 longitudinal data systems while others are matching data between education and workforce sectors. It remains to be seen whether all grantees will ultimately achieve the long-term goal of developing complete P-20W longitudinal data systems or how long that will take, particularly in light of unresolved concerns about limitations to matching data using a Social Security number and sustainability. Further, even among those grantees that can match data between sectors, most can only do so for a limited number of programs or data types. As grantees continue to refine their systems, maximizing the potential of these systems will rest, in part, with the ability to more fully match information on specific programs and characteristics of individuals that could help in further analyzing education and workforce outcomes. We provided a draft of this report to Education and DOL for their review. Each provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees and the Secretaries of Education and Labor. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
The objectives of this report were to examine: (1) the extent to which Statewide Longitudinal Data Systems (SLDS) and Workforce Data Quality Initiative (WDQI) grantees match individual student and worker records and share data between the education and workforce sectors; and (2) how grantees are using longitudinal data to help improve education and workforce outcomes. To answer our objectives, we analyzed state-level data from a 2013 survey conducted by the Data Quality Campaign (DQC), a nonprofit organization that works with state officials and others to support the effective use of data to improve student achievement. DQC’s survey focused on 10 “State Actions” the DQC has developed to ensure effective data use (see table 7). DQC has conducted this annual survey since 2005. The survey data include self-reported information on how data are matched and shared between the early education, K-12, postsecondary education, and workforce sectors, as well as information on specific programs within these sectors, how states analyze and use the data, and who has access to the data. To conduct the survey, DQC used an online tool to collect information and invited the governor’s office in all 50 states and the District of Columbia to participate. According to DQC, the governor’s office is in the best position to bring stakeholders together to respond to the survey. As part of their survey response, states were asked to provide documents or website links as evidence of having specific policies or reports. After survey responses were received, DQC worked with each state to ensure the information reported was as accurate as possible. We analyzed data from eight survey questions (see table 8 in appendix II) to determine the extent to which SLDS and WDQI grantees match individual records and share data among the education sectors and between the education and workforce sectors. 
For the purposes of our report, a grantee is one of the 48 states that received an SLDS grant, a WDQI grant, or both and responded to the 2013 DQC survey. We considered the District of Columbia to be a state. We excluded Alabama, New Mexico, and California from our review because neither Alabama nor New Mexico received an SLDS or a WDQI grant and because California chose not to participate in DQC’s 2013 survey. We excluded the U.S. Virgin Islands and Puerto Rico because, while these territories received SLDS grants, DQC did not include them in its survey. We analyzed data on SLDS and WDQI grantee states because the SLDS and WDQI grant programs provide federal funds for developing longitudinal data systems and are complementary. We considered a grantee as matching data between sectors if a grantee matched data from at least one program between sectors (for a list of programs included in the DQC survey, see questions 1, 4, 7, and 10 in table 8 in appendix II). We considered a grantee as sharing data if a grantee matched data according to our definition and also reported exchanging at least one data element between sectors, in either direction (for a list of data elements, see questions 2, 5, 8, and 11 in table 8 in appendix II). We also analyzed data from another 12 survey questions to identify how grantees are using longitudinal data to help improve education and workforce outcomes (see table 9 in appendix II). We conducted a data reliability assessment by reviewing the survey instrument and related documentation, interviewing officials responsible for administering the survey, and testing the data for obvious inaccuracies. We determined that these data are sufficiently reliable for the purposes of this report. In addition to our analysis of DQC survey data we conducted interviews with a nongeneralizable sample of five grantees as well as relevant federal agencies and nonprofit organizations. 
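The matching and sharing definitions above reduce to two simple tests. The survey-response structure in this sketch is a hypothetical illustration; the logic follows the definitions stated in the methodology:

```python
# Sketch of this report's classification rules for a grantee's survey response.
# The response structure is a hypothetical assumption; the logic follows the
# definitions above: "matching" = at least one program matched between sectors;
# "sharing" = matching plus at least one data element exchanged, either direction.

def is_matching(response):
    return len(response["programs_matched"]) >= 1

def is_sharing(response):
    return is_matching(response) and len(response["elements_exchanged"]) >= 1

grantee = {
    "programs_matched": ["unemployment insurance wage records"],
    "elements_exchanged": [],  # matched data but no elements exchanged yet
}
print(is_matching(grantee), is_sharing(grantee))  # True False
```

As the example shows, a grantee can count as matching without counting as sharing, which is why the two measures are reported separately.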
During our interviews with the five grantee states—Ohio, Pennsylvania, South Dakota, Virginia, and Washington—we asked grantees to identify challenges they faced in building and implementing longitudinal data systems and discussed how grantees have used longitudinal data to inform decision-making in education and workforce programs. We selected these grantees based on factors including the differing levels of progress they have made in establishing data linkages and the federal funding they have received from the SLDS and WDQI programs. Within each state, we spoke with relevant K-12, workforce, postsecondary education, and early education officials. We also interviewed officials at Education, DOL, and the Department of Health and Human Services to obtain information about their roles in helping states build longitudinal data systems. In addition, we spoke with officials from nonprofit organizations to obtain their views on states’ implementation of longitudinal data systems. These stakeholder organizations included the Early Childhood Data Collaborative, the State Higher Education Executive Officers Association, and the Workforce Data Quality Campaign. Finally, we reviewed relevant federal laws, regulations, requests for applications, and solicitations for grant applications to understand the requirements of these grants. As explained in appendix I, we analyzed data from DQC’s 2013 survey to answer our research objectives. Table 8 and table 9 show the specific questions we analyzed from DQC’s survey instrument. For some questions, DQC allowed states to select “other” as a response; we excluded these “other” responses from our analysis. In addition to the contact named above, Janet Mascia, Assistant Director, Jennifer Gregory, and Nisha R. Hazra made key contributions to this report. Also contributing to this report were Deborah Bland, David Chrisinger, Alex Galuten, Amanda Miller, Jeffrey G. Miller, Mimi Nguyen, Yunsian Tai, and Walter Vance. 
This glossary is provided for reader convenience. It is not intended as a definitive, comprehensive glossary of related terms.
Aggregate data: Group statistics (numbers, percentages, averages, etc.) based on individual student data.
College and career readiness reports: Reports designed to identify students who are on track for readiness or success in college or careers.
Data governance: The exercise of decision-making and authority for data-related matters using agreed-upon rules that describe who can take what actions with what information and when, under what circumstances, and using what methods.
Diagnostic reports: Information on individuals designed to identify each student’s strengths and academic needs.
Early education programs: Programs that serve children prior to kindergarten. Programs include: early intervention, Head Start/Early Head Start, state prekindergarten, special education, and subsidized child care.
Early warning report: A report designed to identify students who are most likely to be at risk of academic failure or dropping out of school.
Feedback report: Information on outcomes for students after they graduate from a school or district.
Growth report: A report that shows changes in the achievement of the same students over time.
K-12: Elementary and secondary education.
Postsecondary education: Institutions of higher education. Types of institutions include: less than 2-year public, 2-year public, 4-year and above public, less than 2-year private not-for-profit, 2-year private not-for-profit, 4-year and above private not-for-profit, less than 2-year private for-profit, 2-year private for-profit, and 4-year and above private for-profit.
P-20W report: A report that shows how students’ success later in the education/workforce pipeline is related to the status of the same students earlier in the pipeline.
Matching: Reliably connecting the same individual record in two or more databases.
Match rate: The percent of unique individual records reliably connected across databases.
Sharing: Exchanging data between two databases, in either direction. Data elements that could be shared between early education and K-12 include: demographic, family characteristics, program participation, and child-level development data; between K-12 and postsecondary: demographic, college readiness assessment scores, college placement assessment scores, high school transcript data, postsecondary enrollment, postsecondary remediation status, postsecondary progress, postsecondary credits earned, postsecondary enrollment intensity, and postsecondary outcomes; between K-12 and workforce: demographic, enrollment, transcript data, earnings and wages, employment status, occupation, and industry of employment; between postsecondary and workforce: demographic, enrollment, transcript data, financial aid, postsecondary degree completion, earnings and wages, employment status, occupation, and industry of employment.
Workforce programs: Programs that serve individuals in the workforce. Programs include: adult basic and secondary education, TANF, unemployment benefits claims data, unemployment insurance wage records, Wagner-Peyser Act employment, WIA adult or dislocated workers program, and WIA youth program.
From fiscal years 2006 through 2013, the Departments of Education and Labor provided over $640 million in grants to states through the SLDS and WDQI grant programs. These grants support states' efforts to create longitudinal data systems that follow individuals through their education and into the workforce. Analyzing data in these systems may help states improve outcomes for students and workers. GAO was asked to review the status of grantees' longitudinal data systems. This report examines (1) the extent to which SLDS and WDQI grantees match individual student and worker records and share data between the education and workforce sectors and (2) how grantees are using longitudinal data to help improve education and workforce outcomes. To answer these questions, GAO analyzed data from a 2013 survey conducted by the DQC. This survey collected information from states on data linkages among education and workforce programs and on how states use longitudinal data. In addition, GAO interviewed a nongeneralizable sample of five grantees, which were selected based on the progress they have made in matching data and on the funding they have received from the SLDS and WDQI programs. GAO also reviewed relevant federal laws and regulations. GAO is not making recommendations in this report. GAO received technical comments on a draft of this report from the Department of Education and the Department of Labor, and incorporated them as appropriate. Over half of 48 grantee states that received a Statewide Longitudinal Data Systems (SLDS) or Workforce Data Quality Initiative (WDQI) grant have the ability to match data on individuals from early education into the workforce, based on GAO's analysis of 2013 Data Quality Campaign (DQC) survey data. The DQC is a nonprofit organization that supports the effective use of data to improve student achievement. 
In its survey, DQC collected self-reported information from states on their ability to match, or connect the same individual record, between (1) the K-12 sector and the early education, postsecondary, and workforce sectors and (2) the postsecondary and workforce sectors. However, as the match rate—that is, the percent of unique individual records reliably connected between databases—increases, the number of grantees able to match data decreases. GAO found that more grantees reported being able to match data among the education sectors than between the education and workforce sectors. Further, most grantees reported that they are not able to match data comprehensively. For example, only 6 of 31 grantees reported that they match K-12 data to all seven possible workforce programs covered by the DQC survey, which include adult basic and secondary education as well as unemployment insurance wage records. State officials cited several challenges to matching data, including state restrictions on the use of a Social Security number. Specifically, officials in three of five grantee states GAO spoke with said state law or agency policy prohibits collecting a Social Security number in K-12 data, which can make it more difficult to directly match individuals' K-12 and workforce records. According to GAO analysis of the DQC survey data, grantees use some longitudinal data to inform policy decisions and to shape research agendas. All 48 grantees reported analyzing aggregate-level data to help guide school-, district-, and state-level improvement efforts. For example, 27 grantees said they analyze data on college and career readiness to help schools determine whether students are on track for success in college or in the workforce. Grantees also reported using longitudinal data to analyze outcomes for individual students. 
For example, 29 grantees reported that they produce early warning reports that identify students who are most likely to be at risk of academic failure or dropping out of school. Data from the DQC survey also show that 39 grantees reported developing a research agenda in conjunction with their longitudinal data systems.
Several organizations are integrally involved in carrying out the Navy’s financial management and reporting, including: (1) the Office of the Navy’s Assistant Secretary for Financial Management and Comptroller, which has overall financial responsibility, (2) DFAS, which reports to the Department of Defense (DOD) Comptroller and provides accounting and disbursing services, and (3) Navy components, which initiate and authorize financial transactions. To help strengthen financial management, the Chief Financial Officers (CFO) Act of 1990 (Public Law 101-576) required that DOD prepare financial statements for its trust funds, revolving funds, and commercial activities, including those of the Navy. In response to experiences gained under the CFO Act, the Congress concluded that agencywide financial statements contribute to cost-effective improvements in government operations. Accordingly, when the Congress passed the Government Management Reform Act of 1994 (Public Law 103-356), it expanded the CFO Act’s requirement for audited financial statements by requiring that all 24 CFO Act agencies, including DOD, annually prepare and have audited agencywide financial statements, beginning with those for fiscal year 1996. The Government Management Reform Act authorizes the Director of the Office of Management and Budget to identify component organizations of the 24 CFO Act agencies that will also be required to prepare financial statements for their operations and have them audited. Consistent with the act’s legislative history, the Office of Management and Budget has indicated that it will identify the military services as DOD components required to prepare financial statements and have them audited. Therefore, fiscal year 1996 is the first year for which the Navy will be required to prepare servicewide financial statements for its general funds. 
At September 30, 1994, the Navy’s reported real property account balance was overstated by at least $24.6 billion because DFAS personnel had erroneously double counted $23.9 billion of structures and facilities and $700 million of land. The DFAS, Cleveland Center, personnel compiling these data did not realize that the Center had received some of the same land and building accounting information from two separate sources and had incorrectly included the information from both of them in the consolidated financial reports. To help mitigate situations such as this, in September 1995, the DFAS Director called for the DFAS center directors to take specific steps to increase emphasis on basic internal controls. In November 1995, the DOD Comptroller clarified that DFAS and the Navy are both required to perform quality control reviews of the financial reports and statements. We believe that full and effective implementation of these directives could help to prevent future occurrences of double counting, such as the one noted during our review. For example, if the Navy and DFAS had reviewed reported financial information in that case, they would have found that real property was overstated. The Navy Comptroller Manual, which governs accounting and financial policy for the Navy’s plant property, classifies and lists Navy activities as involving either general fund operations or DBOF operations. The Navy and DFAS, Cleveland Center, did not have effective processes in place to ensure that all financial information on plant property from only general fund activities was included in the Navy’s consolidated financial reports on general fund operations or that plant property from DBOF operations was excluded. To compile consolidated financial reports on the Navy’s general fund operations, a basic control would be to ensure that the reported figures include financial information received from all of the Navy activities identified in the manual as involving general fund operations. 
However, neither the Navy nor DFAS, Cleveland Center, used the listing as a control to help ensure the accuracy and completeness of the Navy’s fiscal year 1994 consolidated financial reports on general fund operations. Although the Navy Comptroller Manual needs updating, as discussed later, it was the best available information at the time of our review and listed 1,226 general fund activities at September 30, 1994. Our comparison of the list and the information used to compile the Navy’s fiscal year 1994 consolidated financial reports on general fund operations showed that the reports (1) included $34.9 billion for plant property at 936 activities that the manual listed as general fund activities but (2) did not include an indeterminable amount of plant property for the other 290 activities listed in the manual. Also, the financial reports improperly included $1.9 billion in plant property that belonged to 21 Navy activities engaged in DBOF operations. We identified these activities through discussions with Navy and DBOF officials. The activities had mistakenly reported to DFAS that their plant property related to general fund operations, and neither the Navy nor DFAS, Cleveland Center, detected the error. Navy activities engaged in general fund operations report their plant property account balances to either the Defense Accounting Office (DAO)-Norfolk or DAO-San Diego (DFAS now refers to the DAOs as operating locations). These DAOs compile the activity-level data and submit it to DFAS, Cleveland Center, which prepares both financial reports on the Navy’s general fund operations and Navy DBOF financial statements. The DAOs did not compare the listings of reporting activities with those listed in the Navy Comptroller Manual when accumulating the data. Nor did DFAS, Cleveland Center, consult the listings when consolidating the Navy’s fiscal year 1994 financial reports on its general fund operations. 
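The reconciliation control described above amounts to a few set comparisons between the activities that reported plant property and the Comptroller Manual's listings. The activity identifiers in this sketch are hypothetical; the logic mirrors the checks the report says were not performed:

```python
# Hypothetical sketch of the reconciliation control described above: comparing
# activities that reported plant property against the Navy Comptroller Manual's
# listings. Activity identifiers are illustrative, not actual Navy codes.

def reconcile(reported_activities, general_fund_listing, dbof_listing):
    """Flag completeness and accuracy problems in a consolidated report."""
    reported = set(reported_activities)
    general_fund = set(general_fund_listing)
    dbof = set(dbof_listing)
    return {
        "missing": general_fund - reported,          # listed activities that did not report
        "dbof_included": reported & dbof,            # DBOF activities mistakenly included
        "unlisted": reported - general_fund - dbof,  # reporters absent from both listings
    }

result = reconcile(
    reported_activities=["GF-001", "GF-002", "DBOF-9"],
    general_fund_listing=["GF-001", "GF-002", "GF-003"],
    dbof_listing=["DBOF-9"],
)
print(sorted(result["missing"]), sorted(result["dbof_included"]))  # ['GF-003'] ['DBOF-9']
```

Applied to the fiscal year 1994 reports, checks like these would have surfaced both the 290 listed general fund activities with no reported plant property and the 21 DBOF activities whose $1.9 billion was improperly included.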
Officials from both the Navy Comptroller’s office and DFAS, Cleveland Center, told us that they had not used the listing when the fiscal year 1994 financial reports on the Navy’s general fund operations were prepared because the listing was inaccurate and outdated. Our work verified that the listing was inaccurate and outdated. We found that the reported plant property account balance included $607 million related to 47 general fund activities that were not listed in the manual. Also, the reports included $739 million related to 57 activities that the manual indicated were no longer operating. Updating the manual is the joint responsibility of the Comptroller of the Navy; DFAS, Cleveland Center; and the Naval Industrial Resources Support Activity, which maintains and reports information on government furnished property. According to the Navy and DFAS, because of downsizing and consolidating of activities, updating the manual section on plant property reporting responsibilities was about a year behind schedule. In March 1996, we recommended that the Navy and DFAS require financial information to be reviewed thoroughly to determine its reasonableness, accuracy, and completeness. When implementing this recommendation, an updated Navy Comptroller Manual listing of general fund activities could be used to review the Navy’s financial reports for accuracy and completeness. In concurring with the recommendation to thoroughly review this financial information, the DOD Deputy Chief Financial Officer said that the DOD Comptroller’s November 1995 clarification of the finance and accounting roles and responsibilities of DOD components and DFAS requires a review of reported financial information. Thus, both the Navy and DFAS are now required to verify the accuracy and completeness of financial reports. 
Also, the September 1995 DFAS Director’s guidance calls for ensuring that component reports of property, equipment, and inventory are promptly submitted and certified as to accuracy. The Navy’s plant property account used to control in-transit property and incomplete capital improvements (plant property work-in-progress) had a highly questionable $291 million balance. We found that (1) some Navy and DFAS activities were not properly recording plant property work-in-progress transactions and (2) many Navy activities had difficulty resolving millions of dollars of in-transit property recorded in their plant property work-in-progress accounts. Consequently, these accounts were not useful in providing accurate information to ensure the prompt receipt of in-transit property or to monitor the completion of capital improvements, as intended. The plant property work-in-progress account is designed to temporarily account for both nonmilitary equipment a Navy activity has paid for but not yet received and incomplete capital improvements to existing Navy-owned buildings. The Navy Comptroller Manual specifies that all plant property assets are to be recorded first in a work-in-progress account, with the balance then transferred to a plant property on-hand account within 2 months of in-transit property being received or 6 months of capital improvements being completed. First, we found the following instances where the Navy and DFAS were not properly recording plant property work-in-progress transactions in accordance with the Navy Comptroller Manual’s requirements. The Naval Sea Systems Command and the Naval Air Systems Command miscoded disbursement transactions for nonmilitary equipment purchases by 75 Navy activities. As a result, the disbursements for these assets were recorded as neither plant property work-in-progress nor nonmilitary equipment but erroneously as expenditures for consumable items. 
The plant property accounting staff at the Naval Submarine Base in Bangor, Washington, stated that they were unaware of the requirement to record incomplete capital improvements to existing buildings in the plant property work-in-progress account and thus did not do so. As a result, for example, $290,000 relating to 22 garages being added to on-base housing had not been recorded in the base’s plant property work-in-progress account. DAO-San Diego’s computer system was not programmed to record construction on existing buildings in a Navy activity’s plant property work-in-progress account. Thus, its work-in-progress account balance did not accumulate the correct data for these assets. When situations such as these occur, the Navy’s financial reports are misstated. Further, the failure to properly use plant property work-in-progress accounts essentially circumvents an internal control feature designed to help ensure that nonmilitary equipment in-transit is received and to help monitor completion of capital improvement projects. Second, our analysis of the $291 million plant property work-in-progress reported on the Navy’s fiscal year 1994 consolidated financial reports on general fund operations showed that about 73 percent, or $211.2 million, was related to five Navy activities. In at least the following two cases, the September 30, 1994, reported plant property work-in-progress account balances were questionable. The Naval Intelligence Command reported over $84 million in plant property work-in-progress, which is (1) an increase of more than 2,000 percent from the prior year and (2) inconsistent with the $370,000 account balance it reported for nonmilitary equipment and the $0 balance reported for other real property. 
The Naval Criminal Investigative Service reported over $30 million in plant property work-in-progress, which is (1) an increase of more than 165 percent over the year before and (2) inconsistent with the Service’s other reported plant property—about $400,000 in nonmilitary equipment. We discussed with officials of these activities the questionable nature of the amounts recorded for these accounts, which could have been identified by comparing year-to-year balances. They confirmed that these account balances were incorrect and said that the activities were attempting to resolve them. Further, our visits at other Navy activities identified additional instances where plant property work-in-progress accounts had grown substantially and resolving the large outstanding balances was a problem. Examples include the following: At the Fleet Combat Training Center-Atlantic, Virginia Beach, Virginia, the plant property work-in-progress account balance had been reported at about $29 million for 2 consecutive fiscal years ending with September 30, 1993, and had increased during the following 6 months to over $62 million. A concerted effort by the Center’s civil engineering staff reduced this amount, but at September 30, 1994, over $34 million remained in the account. At the Tactical Training Group-Atlantic, Virginia Beach, Virginia, the plant property official said that resolving plant property work-in-progress was a problem. For instance, a persistent effort by the Group from November 1991 to September 1993 was necessary to fully resolve $3.5 million in transactions recorded in its plant property work-in-progress account as relating to land and buildings. The group owns no land or buildings and less than $200,000 in nonmilitary equipment. Plant property officials at other Navy activities—including those at the Naval Base in Norfolk, Virginia; the Naval Air Station in Millington, Tennessee; and the U.S. 
Naval Academy in Annapolis, Maryland—pointed to several factors contributing to problems such as these and making their resolution difficult. They told us, for example, that DAOs assign plant property work-in-progress to Navy activities when payments are made for such items. Quarterly plant property reports to Navy activities from the DAOs show amounts for all types of plant property, including work-in-progress. To identify items to be transferred to a plant property on-hand account, the activities are to match these reports with property received and construction completed. However, the detailed supporting records needed for this comparison, such as the disbursing vouchers the DAOs prepare, are often not available at the activity level. Also, they told us that large plant property work-in-progress account balances can result from data coding errors made by DAO disbursing personnel, causing in-transit property and incomplete construction to be recorded in the wrong activity’s property records. These officials and DFAS accounting personnel said that errors can go undetected, and thus not be resolved, for years because, for instance, (1) they require a significant amount of time to identify and correct and are often given a low priority and (2) property accounting clerks lack training on resolving outstanding transactions. The Navy and DFAS maintain separate logistical, custodial, and accounting records for real property, which comprises more than a reported $17 billion in land, structures, and facilities. We found that information is entered separately into each of these three independently maintained sets of records. They are often not reconciled on a timely basis or, in some instances, never reconciled, resulting in undetected and uncorrected errors and unreliable financial information. The Naval Facilities Engineering Command (NAVFAC) maintains logistical records of real property located at all Navy activities. 
Because the commanding officer of each Navy activity is accountable for real property under his or her custody, each activity maintains real property custodial records. DFAS, through the DAOs, maintains the Navy’s official real property accounting records. The Navy Comptroller Manual requires Navy activities to quarterly compare their real property custodial records with (1) official Navy accounting records and (2) NAVFAC logistical records. Any errors identified through these reconciliations are to be investigated and corrected. The Navy’s consolidated financial reports on general fund operations at September 30, 1994, included $17.2 billion as the account balance for real property. This information was prepared using the Navy’s official accounting records, which included the real property for 371 Navy activities. However, as of the same date, NAVFAC’s logistical records included information on 406 general fund activities reporting $17.7 billion of real property. To determine the reasons for this difference, we reviewed the real property records at 10 activities that, for fiscal year 1994, had a total difference of $203 million between DFAS records and NAVFAC records. The following illustrates the types of errors identified at these activities. After the Boston Naval Shipyard was closed in the 1970s, NAVFAC removed the balance of the shipyard’s real property accounts. However, DAO-Norfolk officials said they had not been notified of the shipyard’s closing; thus, they had not removed the shipyard’s $52 million in real property from DAO records. According to NAVFAC records, the Naval Training Center in Bainbridge, Maryland, had $37 million in land and buildings on-hand but under sales contract. However, Navy officials told us that this real property was excluded from the Navy’s fiscal year 1994 financial reports because, before the sales contract was executed, DAO-Norfolk erroneously removed the activity from the list of reporting activities. 
Conversely, NAVFAC’s records included $18.9 million for Bainbridge Training Center buildings that had been demolished. DAO and NAVFAC records were corrected when we advised officials of these errors. At DAO-Great Lakes, where the Navy’s real property accounting records differed from NAVFAC logistic records by $124 million at September 30, 1994, plant property accounting staff did not demonstrate a basic understanding of Navy and DFAS plant property accounting and reconciliation procedures. In one case, for example, the DFAS staff said that a Navy activity did not tell them a difference existed. In another instance, we were told that a DFAS supervisor could not find property records to support an activity’s reported plant property. Rather than contact the activity, the staff stopped reporting the property. Problems such as these are long-standing. In 1989, we recommended that the Navy’s financial records and NAVFAC’s central inventory of real property be reconciled to identify errors and help ensure accuracy. The Naval Audit Service has consistently reported similar problems in its audits of Navy DBOF financial statements under the CFO Act. For example, these audits found that the failure to reconcile Navy DBOF records and NAVFAC records resulted in a $134 million understatement of real property in Navy DBOF fiscal year 1992 financial statements. Differences were found between these records in fiscal years 1991 and 1994 as well. Most recently, in March 1996, we recommended that the Navy and DFAS place a high priority on implementing basic required financial controls, including reconciliations of accounts and records. The DOD Deputy Chief Financial Officer agreed with our recommendation and said that the DOD Comptroller’s November 1995 guidance specifies the roles and responsibilities of DFAS and its customers with respect to reconciliations and resolution of discrepancies. 
Additionally, the September 1995 DFAS Director’s guidance addresses DFAS’s responsibility for performing reconciliations of account balances. The Navy’s fiscal year 1994 accounting and reporting for plant property were highly unreliable. Accurately reporting the Navy’s plant property account balance is especially important to help ensure the reliability of the consolidated financial statements DOD is statutorily required to prepare, beginning with those for fiscal year 1996. The recommendations we made in March 1996 were directed at avoiding the mistakes made in preparing the Navy’s fiscal year 1994 consolidated financial reports and overarch many of the basic control weaknesses discussed in this report. These weaknesses underscore the need for the Navy and DFAS to fully and effectively implement the improvements that we recommended and that are required by the DOD Comptroller’s and the DFAS Director’s recent guidance. Additional specific actions are also necessary to improve plant property accounting and reporting. 
We recommend that the Navy Assistant Secretary for Financial Management and Comptroller and the DFAS Director require that:

- by September 30, 1996, the Navy Comptroller Manual provision that lists the Navy’s activities engaged in general fund operations and DBOF operations be updated and accurately maintained;
- the Navy and DFAS, Cleveland Center, use this listing as one analytical procedure to help ensure that the plant property account balances reported in the Navy’s financial reports are complete and include information from only general fund activities;
- Navy activities and DFAS routinely monitor plant property work-in-progress accounts and promptly review and resolve large balances;
- Navy activities promptly request, and DFAS expeditiously provide, information to assist in transferring plant property work-in-progress items to on-hand accounts and in correcting errors; and
- Navy activities and DFAS personnel be trained to identify and resolve work-in-progress and other plant property problems.

In written comments on a draft of this report, DOD generally concurred with our findings and recommendations. DOD said that groups have been established to identify and resolve issues involving the consistency of report information and establish and monitor a plan of action and milestones for improving property reporting and accounting. Also, DOD said that DFAS, Cleveland, has begun a training program for the plant property staff at various DAOs. DOD concurred with each of our recommendations and cited several planned corrective measures. 
For example, DOD said that it will:

- make improvements to accurately maintain and periodically update information on all Navy activities that own plant property;
- develop a checklist to identify Navy and Marine Corps activities engaged in general fund operations, which will be used to help ensure that Navy reports provided to DFAS, Cleveland, are complete and include the appropriate general fund reporting activities;
- reiterate to all DFAS and Navy activities the policy on clearing work-in-progress accounts and ensure that work-in-progress information is promptly reconciled and recorded in DFAS financial records; and
- train plant property personnel, an effort that has already begun at several DFAS locations.

DOD concurred with two of our four findings. DOD partially concurred with the other two findings because it said that references were unclear for two figures cited in our draft report: (1) the 1,226 general fund activities shown in the Navy Comptroller Manual at the time of our review and (2) the $291 million plant property work-in-progress account balance. We provided a DFAS, Cleveland, representative with specific references in the Navy Comptroller Manual and the Navy’s consolidated financial statements for fiscal year 1994 that we used as sources for these data. Also regarding our findings, DOD said that DFAS is emphasizing the need for internal and quality controls, such as identifying Navy and Marine Corps activities engaged in general fund operations. DOD also said that it is the goal of DFAS, the Navy, and the Marine Corps to develop and implement automated and integrated system interfaces for tracking work-in-progress accounts. Further, DOD said that the Navy recognizes that it should have removed property it no longer maintained from Navy records but had failed to do so. DOD said that most of its planned corrective actions will be accomplished within the next year and that many are planned to be completed by September 30, 1996. 
We believe that DOD’s planned actions will fulfill the intent of our recommendations. Adhering to the projected completion schedule will help to improve the accuracy and completeness of the Navy’s financial statements for general fund operations for fiscal year 1996 and subsequent fiscal years. The full text of DOD’s comments is provided in appendix II. Our work was done as part of a broad-based review of various aspects of the Navy’s financial management operations between August 1993 and February 1996 and was conducted in accordance with generally accepted government auditing standards. Our scope and methodology are discussed in appendix I and the locations where we conducted audit work are listed in appendix III. We are sending copies of this report to the Chairmen and the Ranking Minority Members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight, as well as its Subcommittee on Government Management, Information, and Technology. We are also sending copies to the Secretary of Defense, the Secretary of the Treasury, and the Director of the Office of Management and Budget. We will make copies available to others upon request. If you or your staffs have any questions, please contact me at (202) 512-9095. Major contributors to this report are listed in appendix IV. To gain an understanding of the systems and procedures used to account for and report on plant property, we reviewed applicable Navy Comptroller guidance, DOD and DFAS regulations, and instructions promulgated by Navy commands and activities. Also, we interviewed cognizant Navy, DFAS, and Treasury officials and discussed plant property management and reporting with cognizant Navy shore activity officials. 
To evaluate the DFAS, Cleveland Center’s, process for compiling the Navy’s plant property account balance, we obtained and analyzed the detailed schedules for the fiscal years 1993 and 1994 Navy plant property account balance reported by DFAS, Cleveland Center, and its DAOs. Specifically, we compared the number of Navy activities reporting general fund plant property to those listed in the Navy Comptroller Manual, volume 2, chapter 5; compared the account balance of each reporting activity for the 2 fiscal years to identify trends or fluctuations; and traced the reported account balance to the supporting documentation from the DAOs. We visited NAVFAC, Alexandria, Virginia, its Facilities Support Office in Port Hueneme, California, and its Southwest Engineering Field Division, San Diego, California, to examine how NAVFAC’s central real property database (the Navy Facility Assets Data Base) works and interfaces with Navy activities and DAOs for reporting on land, facilities, and structures. We also visited the Naval Industrial Resources Support Activity in Philadelphia, Pennsylvania, to determine what property it reported to DFAS, Cleveland Center, for inclusion in the Navy’s financial reports. To analyze the amounts reported by Navy for plant property work-in-progress, we obtained the plant property amounts reported for each activity by class—land, buildings, nonmilitary equipment, and work-in-progress. We contacted seven of the activities whose plant property work-in-progress amount appeared to be incorrect when compared with its other reported plant property amounts. At the activities we visited (see appendix III), we examined property accounting procedures and compliance with Navy Comptroller requirements, such as accounting for work-in-progress, reconciliations, and physical inventories. 
To compare and analyze the account balances and reporting activities among different sources of data that should agree, we obtained the consolidated financial report on general fund operations on real property as reported to DFAS, Cleveland Center, and compared it to NAVFAC’s real property logistics records. For September 30, 1993 and 1994, we compared the detail of the reported account balances of land and facilities provided by DFAS, Cleveland Center, with those in NAVFAC’s records to determine if they agreed. We did not verify the accuracy of the information in NAVFAC’s database because, at the time of our work, the Naval Audit Service was reviewing the reasonableness of the database for estimating costs and savings resulting from base closure and realignment recommendations. In a February 1995 report, The Navy’s Implementation of The 1995 Base Closure and Realignment Process, the Service said that the NAVFAC database was a reasonably accurate source of information for that purpose. We requested comments on a draft of this report from the Secretary of Defense or his designee. The DOD Deputy Chief Financial Officer provided us with written comments, which are discussed in the “Agency Comments and Our Evaluation” section and reprinted in appendix II. The following is GAO’s comment on the Department of Defense letter dated June 14, 1996. 1. A representative of DFAS, Cleveland, contacted us regarding this figure and, on May 16, 1996, we provided additional information as to its source. DFAS, Cleveland, did not indicate that further clarification was necessary. Major contributors to this report: Pat L. Seaton, Catherine W. Arnold, Julianne Hartman Cutts, Karlin I. Richardson, and Patricia J. Rennie.
GAO reviewed the Navy's fiscal year (FY) 1994 consolidated financial reports, focusing on the areas contributing to the inaccurate financial reporting of the Navy's plant property account balance. GAO found that: (1) substantial weaknesses in the Navy's financial reporting systems caused the Navy to submit inaccurate FY 1994 financial reports; (2) the Defense Finance and Accounting Service (DFAS) erroneously counted $23.9 billion of structures and facilities and $700 million of land twice because it received the information from two separate sources and incorrectly included the information from both sources in the consolidated reports; (3) the Navy failed to ensure that all plant property from general fund activities was included in or that plant property from Defense Business Operations Fund (DBOF) activities was excluded from the reports because the list of general fund activities was outdated; (4) DFAS did not compare the activities included in the reports with the list of general fund activities when it consolidated the Navy's 1994 financial reports; (5) the Navy's reporting of the $291 million plant property work-in-progress balance was highly questionable because not all transactions were properly recorded, and Navy activities found it difficult to resolve in-transit property transactions; (6) the Navy did not reconcile all of its logistics, custodial, and accounting records on a timely basis; and (7) the Navy and DFAS have taken actions to improve their internal controls, verify the accuracy and completeness of financial information, and reconcile plant property accounts.
Workers with disabilities frequently face special challenges and disincentives when entering or maintaining a place in the workforce. To help those with disabilities overcome these challenges, the federal government has designed a wide variety of programs and incentives. Most of these federal efforts, as described in appendix II, are targeted to persons with disabilities and can include job placement and training programs from state-administered vocational rehabilitation agencies and other service providers as well as extended medical and benefit coverage for Social Security disability beneficiaries to encourage their return to work. Recognizing that businesses may also face some challenges when hiring, retaining, or accommodating individuals with disabilities, the Congress designed some programs and incentives for businesses. These include the three federal tax incentives reviewed in this report as well as several other federal efforts, such as Office of Disability Employment Policy’s (ODEP) Business Leadership Network to link the employers who have jobs to the local agencies who have workers with disabilities to fill these jobs (see table 1). The oldest of the three tax incentives, the barrier removal deduction, was enacted in 1976 to encourage the more rapid modification of business facilities and vehicles to overcome widespread barriers that hampered the involvement of people with disabilities and the elderly in economic, social, and cultural activities. Administered by IRS, it allows taxpayers to claim expenses for the removal of eligible barriers as a current deduction rather than as a capital expenditure that is gradually deducted over the useful life of the asset. Internal Revenue Code and corresponding regulations delineate the specific types of architectural modifications that are eligible, such as providing an accessible parking space or bathroom. 
In 1990, legislation reduced the maximum amount of the barrier removal deduction from $35,000 to $15,000 and created the disabled access credit. The disabled access credit may be taken for expenditures made by eligible small businesses to comply with the requirements of the Americans With Disabilities Act of 1990. The credit defines small businesses as having no more than (1) $1 million in gross receipts or (2) 30 full-time employees. The credit is equal to 50 percent of eligible expenditures made during the year, not including the first $250 and excluding costs over $10,250, resulting in a maximum yearly credit of $5,000. Along with their responsibility to enforce the ADA, the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) provide information and promote the use of the disabled access credit and other related tax incentives. In addition to these incentives for accommodation, the work opportunity credit provides businesses of any size with a hiring incentive for employing economically disadvantaged individuals, including those with disabilities. Established with the enactment of the Small Business Job Protection Act of 1996 (P.L. 104-188), the Work Opportunity Tax Credit Program provides employers with an incentive to provide jobs and training to economically disadvantaged individuals, many of whom are underskilled and undereducated. Of the nine eligibility categories of disadvantaged workers, two categories specifically include workers with disabilities: vocational rehabilitation referrals and Supplemental Security Income recipients. 
The method for determining the amount of work opportunity credit to be claimed has two tiers: (1) for newly hired eligible employees working at least 400 hours, the credit is 40 percent of the first $6,000 in wages paid during the first year of employment, for a maximum amount of $2,400 for each employee and (2) for eligible workers with 120 to 399 hours on the job, a lesser credit rate of 25 percent is allowed. No credit is available for eligible workers who do not remain employed for at least 120 hours. Federal and state agencies share responsibility for administering the work opportunity credit. The IRS is responsible for the tax provisions of the credit. The Department of Labor (DOL), through the Employment and Training Administration (ETA), is responsible for overseeing the administration and promotion of the program. DOL awards grants to states to determine and certify workers’ eligibility and to promote the program. As part of the certification process, for each new person hired, employers must submit two forms to the state employment agency within 21 days of the hiring. For a fee, consultant businesses can assist the hiring business with the program’s administrative requirements. Employers must also determine the appropriate amount of credit to claim and maintain sufficient documentation to support their claim. In 1999, a small proportion of corporate taxpayers or individual taxpayers with a business affiliation reported the work opportunity credit and the disabled access credit on their tax returns. Whereas taxpayers in the retail and service industries accounted for most of the dollar amount of work opportunity credits, those providing health care and other social assistance services accounted for most of the dollar amount of the disabled access credits. Although we can provide information on the credits’ use and characteristics of users, we cannot determine the amount of credits used to hire, retain, and accommodate workers with disabilities. 
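The two credit computations described above reduce to short formulas. The sketch below uses the statutory dollar amounts cited in this report; the function names are illustrative, not drawn from IRS materials:

```python
# Sketches of the two credit computations described in this report.

def disabled_access_credit(expenditures):
    """50 percent of eligible expenditures, excluding the first $250
    and any costs over $10,250 (maximum yearly credit: $5,000)."""
    eligible = min(expenditures, 10_250) - 250
    return 0.5 * max(eligible, 0)

def work_opportunity_credit(first_year_wages, hours):
    """Two-tier credit: 40 percent of the first $6,000 in first-year wages
    at 400 or more hours, 25 percent at 120-399 hours, none below 120."""
    if hours >= 400:
        rate = 0.40
    elif hours >= 120:
        rate = 0.25
    else:
        return 0.0
    return rate * min(first_year_wages, 6_000)

print(disabled_access_credit(10_250))       # 5000.0 (the statutory maximum)
print(work_opportunity_credit(8_000, 450))  # 2400.0 (the per-employee maximum)
```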
This information is not available from tax data because tax returns provide only the total amount of credits reported, and employers can also claim the work opportunity credit for employing other types of workers and claim the disabled access credit for expenditures made to accommodate customers with disabilities. Moreover, information is not readily available regarding the usage of the barrier removal deduction for providing transportation or architectural accommodations because IRS’s databases commingle this deduction with other deductions. In 1999, a small proportion of taxpayers reported the work opportunity credit on their tax returns. In that year, about 1 out of 790 corporations and 1 out of 3,450 individuals with a business affiliation reported this credit. Corporations, excluding those that pass their credits through to individual shareholders, accounted for an estimated 87 percent ($222 million of $254 million) of the total work opportunity credits reported for 1999. These corporations also had an estimated average credit of about $106,000, an amount more than 25 times greater than the estimated average credit for individual taxpayers. Table 2 shows the estimated amount of work opportunity credits reported for 1999. Corporate credits reported were concentrated in a few industries. Corporations in retail trade, hotel and food services, and nonfinancial services accounted for an estimated $170 million, or about three-quarters of corporate work opportunity credits in that year. In interviews, those knowledgeable about this credit, including federal and state government officials, told us that retail and service businesses participate in this program because they have high employee turnover and need a large number of the low-skilled workers that this program targets. Table 3 provides an industry distribution of the estimated amount of work opportunity credits reported by corporations for 1999. 
Furthermore, large corporations, those with $1 billion or more in total receipts, accounted for most of the work opportunity credits. These large corporations accounted for an estimated $177 million, or about 80 percent of corporate credits for 1999. Those knowledgeable about this credit, including federal and state government officials, told us in interviews that larger businesses are more likely to know about and use this credit because their large hiring needs make it financially beneficial to learn about and develop procedures to use the credit. Data support this view, as the estimated average credit for corporations with $1 billion or more in total receipts was about $540,000. Those interviewed also noted that larger corporations are more likely to have the human resources needed to manage the administrative requirements of this program or can, for a fee, use consultants to meet these requirements. Table 4 shows the estimated distribution, by total receipts, of work opportunity credits reported by corporations for 1999. Although we can provide estimates on the amount reported for the work opportunity credit, we cannot accurately determine the amount of credits associated with hiring and employing workers with disabilities. This amount cannot be precisely determined because tax returns include only the total amount of the credit reported for all disadvantaged workers eligible for the credit. In 1999, a small proportion of taxpayers reported the disabled access credit on their tax returns, and the dollar amount of credits reported was concentrated in health care and other social assistance services. In that year, about 1 out of 686 corporations and 1 out of 1,570 individuals with a business affiliation reported this credit. Most of the disabled access credits were reported by individual taxpayers with a business affiliation ($51 million of the total $59 million reported).
Furthermore, providers of health care and other social assistance services accounted for an estimated $31 million, or approximately half of all the disabled access credits reported for 1999. However, it is not possible to determine whether these credits were for accommodations to benefit their employees or clients because credits can be reported for either purpose, and tax returns include only the total amount reported. It is also not possible to determine the total number of taxpayers whose businesses met the credit’s small business eligibility requirements. Table 5 shows the estimated amount of disabled access credits reported for 1999. Little information is available regarding the effectiveness of the incentives in encouraging employers to hire, retain, or accommodate workers with disabilities. Of the three incentives, only the work opportunity credit has been the subject of specific study. The two studies we identified showed that some employers participating in the program modified their recruitment, hiring, and training practices to increase their hiring and retention of disadvantaged workers. However, one of these studies, as well as some studies of a similar hiring credit that preceded the work opportunity credit, indicates that such credits can reward employers for hiring disadvantaged workers they would have hired anyway. We were unable to identify any studies that directly examined the effectiveness of the disabled access credit and barrier removal deduction. However, discussions with those knowledgeable about these incentives, including government officials, academic experts, and business representatives, and some general studies of employers’ perspectives on various disability employment issues provided some additional information about the awareness, usage, or effectiveness of the incentives. For example, they indicated that businesses were frequently unaware of the incentives.
While the studies, surveys, and opinions provide some information about the incentives’ effectiveness, limitations in the research methods used and a lack of required data for further assessment preclude a conclusive determination of how effective the three tax incentives are in increasing the employment of workers with disabilities. One of the WOTC studies, conducted by GAO, surveyed 225 employers in California and Texas that participated in the WOTC program in 1999 and in 1997 or 1998. It found that most of the employers participating in the WOTC program reported changing their recruitment, hiring, or training practices to secure the credit and to better prepare the credit-eligible new hires. The most frequently reported changes to recruitment were listing job openings with a public agency or a partnership (48.8 percent), asking other organizations to refer job applicants (42.6 percent), and partnering with agencies to identify applicants (33.8 percent) or to screen them (29.1 percent). These changes may have helped employers to increase their pool of WOTC-eligible applicants and may thereby have increased their chances of hiring these workers. About one-half of these employers also reported training practices that may have increased the retention of WOTC-eligible hires, such as providing mentors or work readiness training and lengthening training times. On the other hand, the report found that 57 percent of employers surveyed said that the possibility that an applicant might make the company eligible for the tax credit would not affect the applicant’s chance of being hired. The other study, commissioned by DOL, involved in-depth interviews with a judgmental selection of 16 businesses that used the WOTC and the Welfare-to-Work Tax Credit.
Most, but not all, of these employers indicated that these tax credits played little or no role in their recruitment policies or that the individuals hired from either credit’s target groups would have been hired in the absence of the tax credits. Even in those cases where a tax credit played a role in the hiring decision, employers indicated that it was one among several factors considered, such as the applicant’s experience and skills. Interviews with those knowledgeable about the work opportunity credit provided some additional information about the effectiveness of this credit. Some businesses and business groups we interviewed indicated that the credit may motivate certain employers, such as large businesses hiring many low-skilled workers, as well as some smaller businesses, to hire disadvantaged workers because it can lower their labor costs. However, some of the other businesses we interviewed told us that the work opportunity credit had marginal, if any, impact on their hiring, because they based their hiring decisions on other factors, such as the skills and abilities of job applicants, or because they viewed workers with disabilities as valuable employees and wanted to have a workforce that reflected their customer base. Furthermore, government officials and academic experts told us that the usage of this hiring credit is limited by a lack of knowledge of the credit in the business community, its low dollar value per worker hired, and its administrative requirements. They also noted that because eligibility is limited to persons with disabilities receiving publicly funded vocational rehabilitation or SSI benefits, a number of other people with disabilities cannot participate. For example, individuals receiving Social Security Disability Insurance or privately funded vocational rehabilitation are not eligible to participate in the program.
Studies of a similar tax incentive to encourage employers to hire disadvantaged individuals also provide information about the potential effectiveness of WOTC. Studies of the Targeted Jobs Tax Credit, the precursor to WOTC, showed that it increased hiring and earnings of the eligible workers; however, it also provided credits to employers for hiring workers who would have been hired in the absence of these incentives. These studies indicate that from 50 to 92 percent of the credits claimed were for workers employers would have hired anyway. Studies of the Targeted Jobs Tax Credit also found that employers rarely took the actions needed to claim the credit when hiring individuals from eligible target groups, but that proactive government outreach, such as referral of a disadvantaged client to a business, could significantly increase employer participation in the credit program. Although WOTC is similar to its precursor, several administrative changes were made to the newer credit in an attempt to make it less susceptible to providing credits to employers for workers they would have hired anyway; however, the specific effect of these changes is not known. In addition, we found two national surveys examining various disability employment issues that provide some information about employers’ awareness and perceptions of the effectiveness of tax incentives in general. One of the national surveys assessed employers’ experiences with workers with disabilities and found that only 15 percent of the 255 supervisors of workers with disabilities were aware of employer tax incentives. The other national survey assessed employment policies and found that private human resource managers viewed employer tax incentives as the least effective means for reducing barriers to employment for people with disabilities.
By order of importance, the more than 800 private human resource managers surveyed viewed visible top-management commitment, staff training, mentoring, on-site consultation and technical assistance, and short-term outside assistance as more important than tax incentives in reducing employment barriers for workers with disabilities. In contrast to the work opportunity credit, we were unable to identify any studies that directly examined the effectiveness of the disabled access credit and barrier removal deduction. However, some of those we interviewed provided additional information on the perceived effectiveness and use of the disabled access credit and barrier removal deduction. Many of the business representatives and others we spoke with were either unaware of these incentives or did not have an opinion about their effectiveness. Of those with an opinion, more viewed the barrier removal deduction as having a positive effect on the employment of workers with disabilities than viewed the disabled access credit that way. While both incentives can help offset the cost of accommodating workers with disabilities, they believed that the barrier removal deduction was more widely used because larger businesses, which are more likely to be aware of and willing to use tax incentives, are eligible for this incentive. However, they also pointed out that the use of the deduction was limited because it allows only specific types of architectural and transportation modifications. Implemented more than 20 years ago, the deduction cannot be applied to the cost of addressing communication and electronic barriers in today’s modern workplace. Finally, in addition to the business size restriction, they mentioned that unfamiliarity with the disabled access credit, or a lack of clarity about which expenditures qualify, could limit its usage.
While the studies, surveys, and opinions from those knowledgeable about the tax incentives provide some insight about their effectiveness, limitations in the studies’ research methods do not allow for directly measuring the effectiveness of the incentives. For example, the WOTC studies are limited in that they did not measure (1) the extent to which employers would have made these hires in the absence of the incentive; (2) the effect of the incentive on the retention and salaries of WOTC hires compared to similar employees who were not certified for the program; or (3) the effect of the incentive on SSI recipients and vocational rehabilitation referrals, who are represented in two eligibility categories for the work opportunity credit. Existing data limitations preclude a conclusive determination of how effective the three tax incentives are in increasing the employment of workers with disabilities. The tax credits and the deduction create incentives to increase the employment of workers with disabilities by reducing the costs of employing these workers. To determine the incentives’ effect on the employment of these workers, information is needed on the extent to which the incentives reduce employers’ costs (by decreasing their tax liability) and the extent to which these reduced costs result in the employment of more workers with disabilities. However, the national databases lack the data needed to make this determination. As previously discussed, IRS databases do not provide information on the barrier removal deduction. And, while these databases provide information to estimate the usage of the disabled access credit and the work opportunity credit, they do not provide information on the amount of credits specifically associated with workers with disabilities. 
In addition, although DOL has a national database for the work opportunity tax credit program, this database does not contain the information needed to accurately determine the amount of credits associated with workers with disabilities. Furthermore, the economic literature does not provide a consensus on the extent to which employers would alter their employment of workers with disabilities in response to reductions in costs. Without this information, a conclusive determination of the three incentives’ effectiveness cannot be made. In addition, surveying employers to determine the extent to which tax incentives caused them to hire or accommodate employees with disabilities can produce widely varying results depending on the research methods used and the quality of the data obtained. Studies that specifically ask employers whether a tax incentive caused them to hire or accommodate an eligible individual can understate the effect of the incentive, because employers may respond negatively if they do not want to appear to discriminate in their employment practices or because eligibility for the incentive would not be the only or even major factor that employers consider when making such decisions. On the other hand, asking a more general question, such as whether the incentives had some influence on their employment practices, lacks precision and may lead to overestimating the effect of the incentives. Business representatives and experts on disability issues and tax incentives suggested options for increasing the usage and effect of existing employer tax incentives. Many of those we interviewed suggested increasing and improving government outreach and education efforts, including improvements to government coordination and clarification of tax incentive requirements.
To further increase the use and effect of the incentives, they also suggested increasing the dollar value of the incentives and expanding the types of workers, businesses, and accommodations that qualify a business to receive the credits or deduction. Although changing the existing tax incentives presents the potential for increased usage and a reduction in tax revenues, such changes give no assurance of a substantial improvement in the employment of workers with disabilities. Interviews with business representatives and experts in disability issues indicate that two primary obstacles to increasing the use of the tax incentives are a lack of familiarity with the incentives and perceptions regarding the amount of effort required to qualify for them. A number of those we interviewed suggested that better coordination of government efforts, clarification of tax incentive provisions, and increased outreach and education could help to improve this situation. The reason most frequently cited by business, academic, and disability representatives for infrequent use of the incentives was that businesses were not aware of them. Among the three tax incentives we examined, most businesses and other organizations contacted were familiar with the work opportunity credit; however, our contacts, especially business representatives, were far less familiar with the disabled access credit and the barrier removal deduction. Several of those interviewed indicated that smaller businesses were less likely than larger businesses to have staff who were familiar with the credits. Furthermore, while larger businesses may have tax staff who are familiar with the incentives, this knowledge is not always shared with the hiring and other human resource managers.
Without a general awareness of these tax credits and deduction, employers cannot factor them into the hiring, accommodation, or retention decisions, which may be influenced by concerns about the potential costs of employing individuals with disabilities, such as the possible costs for accommodation or increased workers’ compensation and medical insurance. Another obstacle to the use of the incentives, according to many of those we interviewed, was the perception that qualifying for the incentives would require burdensome paperwork and other efforts. To claim an incentive, businesses must gain knowledge of the eligibility requirements, record the amount claimed on the appropriate tax form, and maintain documentation to support their claim. The process may be particularly burdensome for the work opportunity credit. To claim the work opportunity credit, a business must also complete and provide two forms within 21 days to the state employment agency, which certifies the eligibility of a new hire for this program. According to some familiar with this credit, these extra requirements can create a burdensome paperwork process, especially for smaller businesses that may lack sufficient resources to meet these requirements. Even those businesses that have sufficient resources may not believe that the credit is worth the time and effort needed to qualify for it, according to several business representatives. For a fee, some businesses use consultants to help reduce this burden. Furthermore, the IRS has a demonstration project to enable businesses to electronically file the certification forms and, as of April 2002, authorizes state employment agencies to accept electronic submission of one of the certification forms. Also, proposed legislation, recently passed by the House, is intended to simplify the eligibility requirements for this credit. 
Given the general lack of familiarity with the disabled access credit and the barrier removal deduction, views about the burdens created by these incentives may be partially based on misperceptions among businesses and others we interviewed. Unlike the work opportunity credit, these incentives do not require any additional paperwork beyond claiming the credit or deduction on IRS tax forms. Accordingly, one vocational rehabilitation official told us that businesses’ perceptions about the burden of these incentives were a “myth” and not based on their actual experiences. However, to some extent, the burden may be related to determining eligibility for the incentives, especially the disabled access credit. Academic experts told us that a lack of clarity as to the types of businesses and expenditures that are eligible for the disabled access credit makes it more difficult for businesses to use the credit. To increase familiarity and reduce possible misperceptions concerning the incentives, representatives from businesses, academia, government agencies, and disability organizations told us that there is a need for better coordination in promoting the appropriate use of the incentives and the advantages of hiring workers with disabilities. Most of those interviewed believed that the federal government’s efforts to inform and educate taxpayers about these incentives should increase. A variety of suggestions were offered on how the government should proceed with these outreach efforts and which agency should lead them, given the multiplicity of agencies with responsibility for encouraging the employment of individuals with disabilities. Some business, academic, and disability representatives we interviewed believed that the Department of Labor, specifically the Office of Disability Employment Policy, should have lead responsibility for promoting these three incentives.
According to one businessperson, ODEP should take the lead because promoting the incentives is about promoting business and the hiring of competent workers. Some of those we interviewed also viewed the participation of all federal, state, and local agencies associated with the employment of people with disabilities in outreach efforts as essential. Some representatives also emphasized that federal agencies should partner with the private sector in promoting the use of these incentives. Federal outreach efforts were viewed as more likely to be effective if they used business organizations as well as disability advocacy organizations, local agencies, and nonprofits to promote these incentives. According to a representative of thousands of small businesses, increased publicity through disability advocacy groups and the tax preparer industry would make small businesses more aware of the available incentives. Outreach efforts by federal government agencies have been limited, but they appear to be increasing. For example, IRS, DOL, DOJ, and EEOC use their Web sites and toll-free numbers to give individuals access to information on the incentives and have recently begun more active outreach. In addition, DOJ officials told us that they had been coordinating their outreach efforts with other agencies. In coordination with the Small Business Administration, DOJ developed an ADA guide for small businesses that addresses the tax incentives. DOJ officials also told us that, for each year since 1994, they had included a flier or an article with information on ADA requirements and available tax incentives along with routine SSA and IRS mailings to businesses and/or their accountants. SSA also has undertaken several efforts to provide information about tax incentives to employers and individuals with disabilities. Information about the incentives is available on its Web site and through printed materials widely distributed to employers and disability beneficiaries.
As part of SSA’s Ticket to Work Program, the private employment service providers and public vocational rehabilitation agencies offer employers information about their eligibility for tax incentives and assistance in qualifying for these credits, according to SSA. IRS has also recently made efforts to reach out to taxpayers by including an article on the disabled access credit in the IRS Reporter—an IRS publication for taxpayers and tax preparers. Furthermore, as part of the President’s New Freedom Initiative to ensure enforcement of the ADA, DOJ is mailing to selected small businesses a packet of information on tax incentives to encourage the accommodation of customers and employees with disabilities. This outreach effort to the business community was undertaken in response to a general belief that many small businesses were not aware of the tax incentives available to them, particularly the disabled access credit. Other efforts under the President’s initiative include a series of workshops initiated by the EEOC to provide information to small businesses about the benefits of hiring people with disabilities, including information about the tax incentives. The EEOC is partnering with DOJ to conduct some of the workshops. In addition, EEOC recently released a guide for businesses that includes information about the tax incentives, entitled The Americans with Disabilities Act: A Primer for Small Businesses. Improved coordination and outreach were also suggested to help resolve a reported concern about the appropriate use of the disabled access credit. According to some academic experts, unclear guidance, including a lack of IRS implementing regulations for the disabled access credit, can inhibit its use. These experts explained that some companies may not use the incentives, in part because they are wary of being audited by IRS and later being found to have used the credit incorrectly.
According to a representative of a large tax preparer group, the disabled access credit’s provisions are unclear and complicated. For example, IRS guidelines do not clearly state whether a business that is not required by title I of the ADA to accommodate an employee can use the credit for these expenditures. Many of the organizations that we contacted told us that increasing the maximum dollar amount allowed to be claimed for the incentives might increase usage by attracting the attention of businesses and changing perceptions that the administrative cost of using the incentives will outweigh their benefits. Some academic and business representatives believed that the incentives would need to increase—with some suggesting increases of 25 to 200 percent—to capture the attention of businesses or reduce their concerns about the cost of accommodating workers with disabilities. Although the cost of accommodating a worker with a disability is often less than $500, these costs can sometimes exceed the amount allowed under the tax incentives. For example, some government, disability, and academic representatives told us that the cost of some accommodations, such as information technology to accommodate a person who is visually impaired, can sometimes far exceed the maximum $5,000 per year for each eligible business allowed under the disabled access credit. In addition, companies that employ a large number of disabled workers may also incur substantial accommodation costs. For example, an official of one of the large companies we interviewed reported spending more than $1 million on accommodations in the last year, although the official believed that the talent the company received more than compensated for these costs. Most of the organizations interviewed favored an expansion of the eligibility requirements of the tax incentives as a means to increase their usage.
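For reference, the $5,000 annual maximum discussed above follows from the credit’s statutory formula: under section 44 of the Internal Revenue Code, the credit equals 50 percent of eligible access expenditures that exceed $250 but do not exceed $10,250. A minimal sketch of that calculation (the function name and sample amounts are illustrative, not drawn from the report):

```python
def disabled_access_credit(eligible_expenditures):
    """Disabled access credit for one tax year.

    Per IRC section 44: 50 percent of eligible access expenditures
    exceeding $250, with expenditures capped at $10,250, which yields
    the $5,000 annual maximum per eligible business.
    """
    creditable = max(0, min(eligible_expenditures, 10_250) - 250)
    return 0.5 * creditable

print(disabled_access_credit(1_000))   # 375.0
print(disabled_access_credit(20_000))  # 5000.0 (the cap)
```

As the interviewees note, an accommodation costing well above $10,250, such as specialized information technology, leaves the excess uncreditable in that year.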
According to interviewees, use of the incentives is limited by the following restrictions: the type of workers eligible for the work opportunity credit, the size of businesses eligible for the disabled access credit, and the type of accommodations eligible for the barrier removal deduction. Most interviewees favored expanding coverage of the work opportunity credit to include a broader spectrum of workers with disabilities, as eligibility requirements currently limit eligibility for workers with disabilities to certain vocational rehabilitation referrals or Supplemental Security Income recipients. Many suggested including Social Security Disability Insurance recipients as an additional category of eligible workers for this program, even though some of these individuals may not be economically disadvantaged—generally a criterion for inclusion in this program. Inclusion of this group would complement SSA’s Ticket to Work program to encourage individuals with disabilities who are receiving disability benefits to return to work. Pending legislation, passed by the House, includes a provision to expand eligibility to those Social Security Disability Insurance recipients who are working with employment networks and have individualized work plans under the Ticket to Work program. Many business representatives would also like to see the disabled access credit expanded to make more businesses eligible for the credit. The tax code limits the usage of this credit to businesses that are making accommodations in compliance with the ADA and have either (1) 30 or fewer employees or (2) $1.0 million or less in gross receipts. Many believed that the employee restriction should be relaxed to include businesses with more than 30 employees. In addition, academic experts pointed out that, by tying the use of the credit to compliance with the ADA, many of the smallest firms, that is, those with fewer than 15 employees, may not be able to use this credit when accommodating an employee.
While the ADA generally requires small businesses to remove architectural barriers, it does not require businesses with fewer than 15 employees to make such modifications for their employees. According to representatives of a business organization representing many small companies, ensuring that the incentives are available to small businesses to accommodate employees is particularly important because these businesses account for most of the growth in jobs. According to the Small Business Administration, small firms constituted about three-quarters of the employment growth in the 1990s. The vast majority of business, academic, government, and disability representatives interviewed told us that the barrier removal deduction should be expanded to include accommodations to address electronic and communications barriers in the workplace. Although new technologies can open up opportunities for people with disabilities to participate more actively in the workforce, some new technologies can also act as barriers for those with sensory and other types of impairments and can prevent them from fully participating in the modern workplace. For example, an individual with a visual impairment may not be able to use a computer without a screen reader or other special software to interpret images on the monitor. Many of those we interviewed believed that various changes could increase the usage of the incentives to improve the employment of workers with disabilities; however, tax revenue reductions are a likely result of such changes. Tax revenues would be expected to decrease if the dollar value of the incentives was increased and/or coverage was expanded to include more people with disabilities, businesses, or types of accommodation.
Potential reductions in tax revenues could be offset to some extent by an increase in taxable income and reduced government benefits for workers with disabilities if changing the incentives were to improve the employment of workers with disabilities. However, because of the lack of data on the effectiveness of the incentives, potential tax revenue losses would have to be absorbed without knowing the effect of changes to the incentives on the employment of people with disabilities. Increasing the dollar amount allowed for these incentives may also increase the potential for misuse and thereby reduce tax revenues. There are already indications that at least one of the incentives, the disabled access credit, has been targeted for fraudulent activity. In April 2002, the Treasury Inspector General for Tax Administration testified that, in tax year 1999, thousands of taxpayers may have inappropriately claimed the disabled access credit, including taxpayers who did not indicate any interest in or ownership of a business on their tax return—a key requirement for receiving the credit. Increasing the value of this and other tax incentives may make them even more attractive to those who may misuse them. Another consideration in increasing the maximum dollar amount for the incentives is that this change would allow those who are already claiming an incentive to claim an additional amount without increasing the employment or accommodation of workers with disabilities. For example, businesses that already claim the work opportunity credit could, if the credit were increased, simply claim more for each eligible worker without making any changes in the overall number of workers they hired or the level of accommodation provided.
In addition, because the disabled access credit is tied to compliance with the ADA, increasing the maximum dollar amount for the incentive may not increase the level of accommodation provided, because employers are already required by law to provide reasonable accommodations. Finally, increasing outreach, eligibility, or the maximum dollar amount allowed to be claimed for the incentives may increase their usage; however, it is not known whether the costs of such changes would be offset by improvements in the employment and accommodation of workers with disabilities. We provided a draft of this report to the Department of Education, the Department of Justice, the Department of Labor, the Internal Revenue Service within the Department of the Treasury, the Equal Employment Opportunity Commission, and the Social Security Administration. They generally concurred with our findings. The comments from most of the agencies were limited to technical comments, which were incorporated, as appropriate, into the report. In addition to technical comments, SSA provided us with several general comments. In response to one of these comments, we included additional information about workers’ eligibility for the work opportunity credit. SSA also commented that disability groups believe that the current structure of WOTC may be causing a revolving-door effect in which employers hire individuals for low-paying, unskilled work and retain them only as long as the employers receive the tax credit. However, in our discussions with a wide range of disability groups, none indicated that the program created a revolving door for WOTC-eligible hires. Moreover, a recent GAO review of the credit found that employers did not appear to be dismissing employees to increase their tax credit.
In addition, SSA’s general comments indicated that more attention should be directed at measuring employers’ awareness and understanding of the three tax incentives, the results of which could, among other things, improve outreach and education. Although further study may provide some additional information on changes to outreach that could increase the incentives’ usage, existing data limitations would still preclude determining the effectiveness of these changes on the employment of people with disabilities. The full texts of SSA’s and IRS’s comments are included as appendices III and IV. We are sending copies of this report to the Department of Education, the Department of Justice, the Department of Labor, the Internal Revenue Service within the Department of the Treasury, the Equal Employment Opportunity Commission, the Social Security Administration, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please call me or Carol Dawn Petersen, Assistant Director, at (202) 512-7215. Staff acknowledgments are listed in appendix V. To obtain information on the usage of the two tax credits, we analyzed tax data from the Internal Revenue Service’s (IRS) Statistics of Income (SOI) programs for 1999, the most recent year that data were available. Statistics compiled for the SOI programs are generally based on stratified probability samples of income tax returns or other forms filed with the IRS. The two SOI programs used were the 1999 Corporation Income Tax Returns Program and the 1999 Individual Income Tax Return Program. The Corporation program includes information on active, for-profit corporations, including information on S corporations. 
S corporations report items of income, deduction, loss, and credit on their corporate tax returns, but pass through such items to individual shareholders. Throughout the report, we provided information on the number and characteristics of corporations reporting the credits. However, we excluded the amount of credits associated with S corporations because these credits can be passed through to individual shareholders and reported on individual tax returns. For individual tax returns, we differentiated between individuals with and without a business affiliation, as the credits are for businesses that hire disadvantaged employees or accommodate employees or customers with disabilities. Individual taxpayers with a business affiliation are those whose individual tax returns show they had a sole proprietorship, partnership, farm, or interest in an S corporation, rental property, estate, or trust. Because estimates from the SOI programs are based on a sample of taxpayer data, they are subject to sampling errors. These sampling errors measure the extent to which the point estimates may vary from the actual values in the population of taxpayers. Each of our estimates is surrounded by a 95-percent confidence interval, which indicates that we can be 95 percent confident that the interval surrounding the estimate includes the actual population value. In some cases, the small number of taxpayers reporting the tax credits in the SOI sample resulted in large confidence intervals. To assess existing information on the tax incentives’ effectiveness, as well as to identify any changes that may increase businesses’ awareness and future usage of the incentives, we performed extensive literature, legislative history, and Internet searches and reviewed available studies. We also interviewed various groups interested in these issues using interview guides, with a standard set of questions for each group interviewed. 
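The interval construction described above can be written out explicitly. The formula below is the standard large-sample normal approximation for a 95-percent confidence interval; the 1.96 multiplier and the illustrative figures are our own sketch, not values taken from the report:

```latex
% 95-percent confidence interval around a point estimate \hat{x}
% with estimated standard error se(\hat{x}):
\hat{x} \;\pm\; 1.96 \cdot se(\hat{x})
% Illustration with hypothetical figures: an estimate of 10{,}000
% taxpayers with se(\hat{x}) = 2{,}500 gives
% 10{,}000 \pm 1.96 \cdot 2{,}500 = (5{,}100,\; 14{,}900).
```

This also makes concrete why a small number of sampled taxpayers produces wide intervals: fewer sampled returns inflate the estimated standard error, which widens the interval proportionally.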
We conducted interviews with federal agency officials in the Departments of Education, Labor, Justice, and the Treasury and in the Social Security Administration and the Equal Employment Opportunity Commission and with state agency officials from New York and California. Additional interviews were conducted with selected businesses, business groups, tax preparer groups, disability organizations, and academic experts who were knowledgeable about these incentives and disability issues in general. Among those we interviewed were (1) individuals from a variety of businesses, such as large businesses in the retail and computer industries and small to medium-sized businesses in the consulting and engineering service industries; (2) business groups, including the U.S. Federation of Small Businesses, the Washington Business Group on Health, and the U.S. Chamber of Commerce; (3) disability organizations, including the American Association of People with Disabilities, the American Foundation for the Blind, the Paralyzed Veterans of America, the World Institute on Disability, and the Consortium of Citizens with Disabilities; and (4) academic experts at the Law, Health Policy, and Disability Center at the University of Iowa, the Rural Institute on Disabilities at the University of Montana, the Rehabilitation Research and Training Center at Virginia Commonwealth University, and the Department of Policy Analysis and Management at Cornell University. The federal government provides many programs and incentives exclusively to persons with disabilities to enable them to enter or remain in the workforce. Persons with disabilities can take advantage of more than 100 federal programs. 
Many of these programs, such as those providing accessible housing, transportation, and independent living services, can help those with disabilities to become or remain employed. However, only a relatively small proportion of these federal programs are specifically focused on providing employment services exclusively to persons with disabilities. The Department of Education, the Department of Labor, the Department of Health and Human Services, and the Social Security Administration (SSA) administer most of the employment programs exclusively targeted to persons with disabilities, with services delivered by numerous public and private agencies at the state and local level. The Department of Education has a long-standing involvement in, and numerous programs for, the rehabilitation and training of persons with disabilities. Its Vocational Rehabilitation Program is the largest federal effort for improving the employment of people with disabilities. Recently, the Department of Labor undertook two initiatives to improve the employment of persons with disabilities: (1) a series of projects under the Office of Disability Employment Policy, some of which are targeted to employers, as previously described, and (2) Work Incentives Grants to give persons with disabilities better access to the one-stop centers where many of the federally funded employment and training programs are to be provided, as required by the Workforce Investment Act passed in 1998. Other recent legislation, the Ticket to Work and Work Incentives Improvement Act of 1999 (TWWIIA), created four new federal programs for persons with disabilities, as well as incentives to encourage persons with disabilities to work. Two of these programs, under the Department of Health and Human Services, are designed to provide services needed by workers with disabilities to become employed and to help those with severe impairments to maintain their employment. 
Two others, under SSA, are intended to build the infrastructure for the new ticket program to expand the availability of employment services for disability beneficiaries. This legislation also provides states with options for expanding medical coverage to working individuals with disabilities and adds to the work incentives available to persons who are receiving Supplemental Security Income (SSI) and Social Security Disability Insurance (DI), such as extending healthcare coverage an additional 4-1/2 years to DI recipients who have returned to work. In addition to these incentives, the government also provides a tax incentive to individuals who incur work-related accommodation expenses. The federal employment programs and incentives exclusively available to persons with disabilities are summarized in table 6. In addition to those named above, the following individuals made significant contributions to this report: Jeffrey Arkin, Julie DeVault, Patrick DiBattista, Patricia Elston, Corinna Nicolaou, and Robert Tomco, Education, Workforce, and Income Security Issues; Wendy Ahmed, Luanne Moy, Ed Nannenhorn, James Ungvarsky, and Anne Stevens, Applied Research and Methods; Shirley Jones and Behn Miller, General Counsel; and Thomas Bloom and Samuel Scrutchins, Tax Administration and Justice Issues.
More than 17 million working-age individuals have a self-reported disability that limits work. Their unemployment rate is also twice as high as that of individuals without a work disability, according to recent Census data. In the Ticket to Work and Work Incentives Improvement Act of 1999, the Congress mandated that GAO study and report on existing tax incentives to encourage businesses to employ and accommodate workers with disabilities. This report provides information on (1) the current usage of the tax incentives, (2) the incentives' ability to encourage the hiring and retention of workers with disabilities, and (3) options to enhance awareness and usage of the incentives. A very small proportion of corporate and individual taxpayers with a business affiliation use the two tax credits that are available to encourage the hiring, retention, and accommodation of workers with disabilities, according to IRS data. Taxpayers in the retail and service industries accounted for the largest share of the work opportunity credits reported in 1999, while providers of health care and social assistance services accounted for the largest share of the disabled access credits. Information on the effectiveness of the incentives is limited and inconclusive. Only the work opportunity credit has been studied, and these studies, along with those of a prior hiring credit, showed that some employers revised their recruitment, hiring, and training practices to increase the number of disadvantaged workers hired and retained, but that credits have also been claimed by employers for workers they would have hired anyway. However, these studies have not focused on workers with disabilities, and data limitations preclude conclusively determining their effectiveness for these workers. 
To increase the awareness and usage of the tax incentives, business representatives and experts on disability issues and tax incentives suggested (1) improving government outreach and education efforts; (2) increasing the maximum dollar amount of the incentives; and (3) expanding the types of workers, businesses, and accommodations that are eligible for the incentives. While these options may increase incentive usage, it is uncertain whether the potential loss in tax revenues would be offset by improvements in the employment of workers with disabilities. Commenting agencies generally concurred with GAO's findings.
In 2004, President George W. Bush announced his Vision for Space Exploration that included direction for NASA to pursue commercial opportunities for providing transportation and other services to support the space station after 2010. When the project was established in 2005, the approach that NASA laid out was a marked change in philosophy for how the agency planned to service the space station—by encouraging innovation in the private sector with the eventual goal of buying services at a reasonable price. As a result, the agency chose to utilize its other transaction authority under the National Aeronautics and Space Act of 1958, as opposed to a more traditional Federal Acquisition Regulation (FAR) based contract. Generally speaking, other transaction authority enhances the government’s ability to acquire cutting-edge science and technology, in part through attracting companies that typically have not pursued government contracts because of the cost and impact of complying with government procurement requirements. These types of agreements are not considered federal government contracts, and are therefore generally not subject to those federal laws and regulations that apply to federal government contracts. NASA established the Commercial Crew and Cargo program office at Johnson Space Center in 2005 and budgeted $500 million for fiscal years 2006 through 2010 for the development and demonstration of cargo transport capabilities. COTS partners, Orbital Sciences Corporation (Orbital) and Space Exploration Technologies Corporation (SpaceX), have also made significant investments in developing these capabilities. The COTS project was originally intended to be executed in two sequential phases: (1) private industry development of cargo transport capabilities in coordination with NASA and (2) procurement of commercial resupply services to the space station once cargo transport capabilities had been successfully demonstrated. 
In August 2006, NASA competitively awarded a $278 million Space Act agreement to SpaceX to develop and demonstrate end-to-end transportation systems, including the development of the Falcon 9 launch vehicle and Dragon spacecraft, ground operations, and berthing with the space station. In February 2008, NASA awarded a $170 million Space Act agreement to Orbital to develop two COTS cargo capabilities, unpressurized and pressurized cargo delivery and disposal, to culminate in one demonstration flight of its Taurus II launch vehicle and Cygnus spacecraft. Before either partner had successfully demonstrated its COTS cargo transport capabilities, the International Space Station program office awarded two CRS contracts in December 2008 to Orbital and SpaceX under a separate competitive procurement from COTS. These FAR-based contracts were for the delivery of at least 40 metric tons (approximately 88,000 pounds) to the space station between 2010 and 2015. Orbital was awarded 8 cargo resupply missions for approximately $1.9 billion and SpaceX was awarded 12 cargo resupply missions for approximately $1.6 billion. In June 2009, we found that while SpaceX and Orbital had made progress against development milestones, the companies were working under aggressive schedules and had experienced schedule slips that delayed upcoming demonstration launch dates by several months. In addition, we reported that the vehicles being developed through the COTS project were essential to NASA’s ability to fully utilize the space station after its assembly was completed and the space shuttle was retired. Finally, we found that NASA’s management of the COTS project generally adhered to critical project management tools and activities. Since our 2009 report, the two COTS project partners, Orbital and SpaceX, have made progress in the development of their respective vehicles. 
SpaceX successfully flew its first COTS demonstration mission in December 2010 and Orbital is planning to fly its COTS demonstration mission in December 2011. Both providers, however, are behind schedule: SpaceX's first COTS demonstration mission slipped 18 months, and Orbital's first mission has slipped from its initially planned March 2011 date to December 2011. Such delays are not atypical of development efforts, especially efforts that are operating under such aggressive schedules. Nonetheless, the criticality of these vehicles to the space station's operations, as well as to NASA's ability to affordably execute its science missions, has heightened the importance of their timely and successful completion and lessened the level of risk that NASA is willing to accept in this regard. As a result, the project recently requested and received an additional $300 million to augment the partner development efforts with what NASA describes as risk reduction milestones. SpaceX has successfully completed 18 of 22 milestones to date, but has experienced lengthy delays in completing key milestones since we last reported on the company's progress in June 2009. SpaceX's agreement with NASA established 22 development milestones that SpaceX must complete in order to successfully demonstrate COTS cargo capabilities. SpaceX's first demonstration mission readiness review was completed 15 months behind schedule, and its successful first demonstration mission was flown in December 2010, 18 months late. The company's second and third demonstration missions have been delayed by almost 2 years, to November 2011 and January 2012, respectively. Several factors contributed to the delay in SpaceX's first demonstration mission readiness review and demonstration mission. 
These factors include, among others, delays associated with (1) launching the maiden Falcon 9 (non-COTS mission), such as Falcon 9 software and database development; (2) suppliers; (3) design instability and production; (4) Dragon spacecraft testing and software development; and (5) obtaining flight safety system approval. For example, SpaceX encountered welding issues during production of the Dragon propellant tanks and also had to redesign the Dragon's battery. In preparing for its second COTS demonstration flight, SpaceX has experienced additional design, development, and production delays. For example, several propulsion-related components needed to be redesigned, the Dragon spacecraft's navigation sensor experienced development testing delays, and delays were experienced with launch vehicle tank production. In addition, SpaceX's decision to incorporate design changes to meet future CRS mission requirements has delayed the company's second demonstration mission. Integration challenges on the maiden Falcon 9 launch and the first COTS demonstration mission have also kept SpaceX engineers from moving on to the second COTS demonstration mission. SpaceX officials cited the completion of Dragon development efforts, NASA's safety verification process associated with berthing with the space station, and transitioning into efficient production of the Falcon 9 and Dragon to support space station resupply missions as key drivers of technical and schedule risk going forward. For completing 18 of the 22 milestones, SpaceX has received $258 million in milestone payments thus far, with $20 million yet to be paid. Appendix I describes SpaceX's progress meeting the COTS development milestones in its agreement with NASA. Orbital has successfully completed 15 of 19 COTS milestones to date—8 more than when we initially reported on the program in June 2009. 
Programmatic changes and developmental difficulties, however, have led to multiple delays of several months’ duration and further delays are projected for completing the remaining milestones. For example, according to Orbital officials, the demonstration mission of Orbital’s Taurus II launch vehicle and Cygnus spacecraft has been delayed primarily due to an increase in design effort to develop a pressurized cargo carrier in place of the original Cygnus unpressurized cargo design. After NASA awarded Orbital a CRS contract for eight pressurized cargo missions, NASA and Orbital amended their COTS demonstration agreement to replace the unpressurized cargo demonstration mission with a pressurized cargo demonstration. This delayed existing milestones, and the schedule was revised to shift the COTS demonstration mission from December 2010 to March 2011. Since that time, the schedule for some of Orbital’s milestones has been revised again and the demonstration mission is now planned for December 2011. COTS program and Orbital officials also noted technical challenges as reasons for milestone delays. For example, Orbital officials said there are several critical Taurus II engine and stage one system tests that need to be completed by the end of the summer, but that the risk inherent in these tests is mitigated through an incremental approach to testing. Specifically, single engine testing has been successfully completed, and testing will be extended this summer to the full stage one (i.e., two-engine) testing. COTS program and Orbital officials also noted delays in Cygnus avionics manufacturing, primarily driven by design modifications aimed at increasing the safety and robustness of the system. According to these officials, integration and assembly of the first Cygnus spacecraft has begun and is now in the initial electrical testing phase. 
Additionally, the completion of the company's launch facilities at the Mid-Atlantic Regional Space Port in Wallops Island, Virginia, remains the key component of program risk. NASA COTS program and Orbital officials cite completion of the Wallops Island launch facilities as the critical factor for meeting the COTS demonstration mission schedule. Orbital officials said additional resources have been allocated to development of the launch complex to mitigate further slips, and an around-the-clock schedule will be initiated later this summer to expedite the completion of verification testing of the liquid fueling facility, which is the primary risk factor in completing the launch facility. For completing 15 of the 19 milestones, Orbital has received $157.5 million, with $12.5 million remaining to be paid. Appendix I depicts Orbital's progress in meeting the COTS development milestones in its agreement with NASA. In addition to the prior milestones negotiated under the COTS project, NASA has amended its agreements with SpaceX and Orbital to include a number of additional milestones aimed at reducing remaining developmental and schedule risks. COTS officials told us that some milestones reflect basic risk reduction measures, such as thermal vacuum testing, that NASA would normally require on launch vehicle or spacecraft development. A series of amendments was negotiated from December 2010 to May 2011 after Congress authorized $300 million for commercial cargo efforts in fiscal year 2011. These amendments add milestones to (1) augment ground and flight testing, (2) accelerate development of enhanced cargo capabilities, or (3) further develop the ground infrastructure needed for commercial cargo capabilities. These milestones were added incrementally because NASA was operating under continuing resolutions through the first half of fiscal year 2011. In May 2009, the President established a Review of U.S. 
Human Space Flight Plans Committee, composed of space industry experts, former astronauts, government officials, and academics. In its report, the committee stated that it was concerned that the space station, and particularly its utilization, may be at risk after Shuttle retirement because NASA would be reliant on a combination of new international vehicles and as-yet-unproven U.S. commercial vehicles for cargo transport. The committee concluded that it might be prudent to strengthen the incentives to the commercial providers to meet the schedule milestones. NASA officials stated that if funding were available, negotiating additional risk reduction milestones would improve the chance of mission success, referring specifically to the companies' COTS demonstration missions. Of the $300 million, $236 million, divided equally between SpaceX and Orbital, will be paid upon completion of the additional milestones. Additionally, NASA officials stated that the International Space Station program office will pay SpaceX and Orbital $10 million each to fund early cargo delivery to the space station on the companies' final COTS demonstration missions. The COTS program manager stated that SpaceX and Orbital recognize their responsibility under the COTS agreements for any cost overruns associated with their development efforts, and that the companies did not come to NASA with a request for additional funding. SpaceX has completed 4 of its new milestones on time but has experienced minor delays in completing 3 others. SpaceX's agreement with NASA was amended three times between December 2010 and May 2011 to add 18 development milestones that SpaceX must complete in order to successfully demonstrate COTS cargo capabilities. 
Some of the new milestones, for example, are designed to increase NASA's confidence that SpaceX's Dragon spacecraft will successfully fly approach trajectories to the space station, while others are intended to improve engine acceptance rates and vehicle production time frames. Milestones completed thus far include a test of the spacecraft's navigation sensor and thermal vacuum tests. For completing 7 of the 18 milestones, SpaceX has received $40 million in milestone payments thus far, with $78 million yet to be paid. Orbital has completed 4 of its 10 new milestones on schedule, and 1 of the new milestones was delayed by about 1 month. At NASA's request, Orbital agreed to add an initial flight test of the Taurus II launch vehicle to reduce overall cargo service risk. The test flight not only separates the risks of the first flight of Taurus II from the risks of the first flight of the Cygnus spacecraft, but also provides the opportunity to measure the Taurus II flight environments using an instrumented Cygnus mass simulator. The Taurus II test flight is scheduled for October 2011. Overall technical risks associated with Cygnus development are expected to be reduced through additional software and avionics tests. Milestones completed thus far include early mission analyses and reviews, as well as delivery of mission hardware. For completing the first 5 new milestones, Orbital has received $69 million, with $49 million remaining to be paid. Appendix I describes SpaceX's and Orbital's progress meeting the new COTS development milestones in their agreements with NASA. Based on the current launch dates for SpaceX's and Orbital's upcoming COTS demonstration missions, it is likely that neither commercial partner will launch its initial CRS mission on time, but NASA has taken steps to mitigate the short-term impact to the space station. 
The launch window for SpaceX's first CRS flight is from April to June 2011 and from October to December 2011 for its second CRS flight. These launch windows are scheduled to occur either before or during SpaceX's upcoming COTS demonstration flights and thus will need to be rescheduled. In the case of Orbital, NASA officials told us that the launch window for its first CRS flight is from January to March 2012, but will likely slip from those dates given the Taurus II test flight added to its milestone schedule. NASA officials added that once SpaceX and Orbital have completed their COTS demonstration flights, NASA will have to renegotiate the number of flights needed from each partner and re-baseline the launch windows for future CRS missions. International Space Station program officials told us they have taken steps to mitigate the short-term impact of CRS flight delays through prepositioning of cargo on the last space shuttle flights, including cargo that is being launched on the planned contingency space shuttle flight in early July 2011. Officials added that these flights and the planned European Space Agency's Automated Transfer Vehicle and Japan's H-II Transfer Vehicle flights in 2012 will carry enough cargo to sustain the six-person space station crew through 2012 and to meet science-related cargo needs through most of 2012. Despite these steps, NASA officials said they would still need one flight each from SpaceX's and Orbital's vehicles in order to meet science-related cargo needs in 2012. Beyond 2012, NASA is highly dependent on SpaceX's and Orbital's vehicles in order to fully utilize the space station. For example, we reported in April 2011 that 29 percent of the flights planned to support space station operations through 2020 were dependent on those vehicles. 
In addition, NASA officials confirmed that the agency has no plans to purchase additional cargo flights on Russian Progress vehicles beyond 2011, and the European Space Agency and the Japan Aerospace Exploration Agency have no current plans to manufacture additional vehicles beyond their existing commitments or to accelerate production of planned vehicles. We reported previously that if the COTS vehicles are delayed, NASA officials said they would pursue a course of "graceful degradation" of the space station until conditions improve. In such conditions, the space station would only conduct minimal science experiments. NASA's intended use of the COTS Space Act agreements was to stimulate the space industry rather than to acquire goods and services for its direct use. Traditional FAR contracts are to be used when NASA is procuring something for the government's direct benefit. NASA policy provides that funded Space Act agreements can be used only if no other instrument, such as a traditional FAR contract, can be used. Therefore, Space Act agreements and FAR-based contracts are to be used for different purposes. In considering the use of funded Space Act agreements for COTS, NASA identified several advantages. For example: the government can share costs with the agreement partner while fixing the government's investment; payment to the partner is made only after successful completion of performance-based milestones; the government can terminate the agreement if the partner is not reasonably meeting milestones; and limited government requirements allow partners to optimize their systems to meet commercial business needs. These types of agreements can also have disadvantages, however. For example, Space Act agreements may have more limited options for oversight as compared to other science mission and human spaceflight development efforts that are accomplished under more traditional FAR contracts. NASA identified other disadvantages of using a Space Act agreement. 
For example: the government has limited ability to influence agreement partners in their approach, and it lacks additional management tools (beyond performance payments at milestones) to incentivize partners to meet technical and schedule performance goals. Given the intended goals of the project and the availability of alternative vehicles to deliver goods to the space station when the COTS agreements were signed, NASA was willing to accept the risks associated with the disadvantages of using a Space Act agreement. As the project has progressed, however, and these alternatives are no longer viable or available, NASA has become less willing to accept the risks involved. As a result, the agency took steps aimed at risk mitigation, primarily through additional funding. I would like to point out that neither Space Act agreements nor more traditional FAR contracts guarantee positive outcomes. Further, many of the advantages and disadvantages identified by NASA for using a Space Act agreement can also be present when using FAR-based contracts, depending on how the instrument is managed or written. For example, both a FAR contract and a Space Act agreement can provide for cost sharing, and the government also has the ability to terminate a FAR contract or a Space Act agreement if it is dissatisfied with performance. Ineffective management of the instrument can be an important contributor to poor outcomes. For example, although a Space Act agreement may lack management tools to incentivize partners, we have reported in the past that award fees, which are intended to incentivize performance on FAR-based contracts, are not always applied in an effective manner or even tied to outcomes. Additionally, the oversight that NASA conducts under a FAR-based contract has not always been used effectively to ensure that projects meet cost and schedule baselines. 
Even with the advantages and disadvantages that can be present in various instruments, given a critical need, the government bears the risk of having to make additional investments to get what it wants, when it wants it. The additional investment required, however, can be lessened by ensuring that accurate knowledge about requirements, cost, schedule, and risks is achieved early on. We have reported for years that disciplined processes are key to ensuring that what is being proposed can actually be accomplished within the constraints that bind the project, whether those constraints are cost, schedule, technical, or otherwise. We have made recommendations to NASA, and the agency is taking steps to address them, to help ensure that these fundamentals are present in its major development efforts and to increase the likelihood of success. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this statement include Shelby S. Oakley, Assistant Director; Jeff Hartnett; Andrew Redd; Megan Porter; Laura Greifner; and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the National Aeronautics and Space Administration (NASA) created the strategy for the Commercial Orbital Transportation Services (COTS) project in 2005, the space landscape has changed significantly: the Space Shuttle program is retiring, and the Ares I will not be available, increasing the importance of the timely development of COTS vehicles. The lack of alternatives for supplying the International Space Station and launching science missions has contributed to an increased need for the COTS vehicles. The two COTS project partners, Orbital and SpaceX, have made progress in the development of their respective vehicles; however, both providers are behind schedule. As a result, the project recently received an additional $300 million to augment development efforts with risk reduction milestones. This testimony focuses on (1) COTS development activities, including the recent funding increase; (2) the extent to which any COTS demonstration delays have affected commercial resupply services (CRS) missions and NASA's plans for meeting the space station's cargo resupply needs; and (3) lessons learned from NASA's acquisition approach for COTS. To prepare this statement, GAO used its prior relevant work and conducted additional audit work, such as analyzing each partner's agreement with NASA and interviewing NASA officials. New data in this statement were discussed with agency and company officials, who provided technical comments that we incorporated as appropriate. SpaceX and Orbital continue to make progress completing milestones under their COTS agreements with NASA, but both partners are working under aggressive schedules and have experienced delays in completing demonstration missions. 
SpaceX successfully flew its first demonstration mission in December 2010, but the mission was 18 months late, and the company's second and third demonstration missions have been delayed by almost 2 years due to design, development, and production challenges with the Dragon spacecraft and Falcon 9 launch vehicle. Orbital faced technical challenges developing the Taurus II launch vehicle and the Cygnus spacecraft and constructing launch facilities, leading to multiple delays in completing program milestones, including its demonstration mission. NASA has amended its agreements with the partners to include a number of new milestones, such as additional ground and flight tests, to reduce remaining developmental and schedule risks; most of the new milestones completed thus far were finished on time, but many milestones remain. Based on the current launch dates for SpaceX's and Orbital's upcoming COTS demonstration missions, it is likely that neither will launch its initial CRS mission on time, but NASA has taken steps to mitigate the short-term impact on the space station. The launch windows for SpaceX's first and second CRS flights are scheduled to occur either before or during its upcoming COTS demonstration flights and will need to be rescheduled. Orbital's first CRS flight will also likely shift due to a Taurus II test flight. NASA officials said that the agency will have to renegotiate the number of flights needed from each partner and re-baseline the launch windows for future CRS missions once COTS demonstration flights are completed. NASA has taken steps to mitigate the short-term impact of CRS delays through prepositioning of cargo, some of which will be delivered on the last space shuttle flight. Despite these efforts, NASA officials said they would still need one flight in 2012 from each of SpaceX's and Orbital's vehicles to meet science-related cargo needs. In considering the use of a Space Act agreement for COTS, NASA identified several advantages. 
These advantages include sharing costs with agreement partners and promoting innovation in the private sector. A disadvantage, however, is that NASA is limited in its ability to influence agreement partners in their approach. At the time the agreements were awarded, NASA was willing to accept the risks of using a Space Act agreement given the goals of the project and the alternative vehicles that were available to deliver goods to the space station. As the project has progressed, however, and these alternatives are no longer viable or available, NASA has become less willing to accept the risk involved and has taken steps aimed at risk mitigation. Given a critical need, there is a risk that the government will be required to make additional investments to meet mission needs. The amount of additional investment can be lessened by ensuring that accurate knowledge about requirements, cost, schedule, and risks is achieved early on. GAO has made recommendations to NASA, and NASA is taking steps to help ensure that these fundamentals are present in its major development efforts and to increase the likelihood of success.
FAS administers USDA's five market development programs on behalf of the Commodity Credit Corporation, which is owned and operated by the U.S. government. The programs provide matching funds to support U.S. industry efforts to build, maintain, and expand commercial overseas markets for U.S. agricultural products, with the overarching goal of increasing agricultural exports. Congress authorizes a maximum level of the corporation's funds to be used for USDA's market development programs, with the exception of QSP, through 5-year farm bills. (Table 1 shows authorizations for the five programs for fiscal years 2002 through 2012.) Many other countries also provide government funding to promote agricultural exports that compete with U.S. exports in the world market. The World Trade Organization does not consider such expenditures to be trade distorting and therefore does not restrict them, according to USDA officials. Participants in these programs include nonprofit agricultural trade associations; agricultural cooperatives that promote their own brand name; and state regional trade groups. The majority of market development funds are used for promotion of generic U.S. commodities, with no emphasis on a particular brand; however, a portion of MAP funds may be used for promotion of branded products. When considering applications for funding, FAS gives priority to applicants with the broadest producer representation and affiliated industry participation of the commodity being promoted. Appendix II shows participants in the five market development programs in fiscal year 2012 and their award amounts. These organizations may participate in more than one of the five market development programs. After approving an application for participation in a market development program, FAS sets the participant's funding level and signs a program agreement with the participant. 
FAS provides a program approval letter, which outlines approved activities and their budget levels, and program funds are expended through reimbursement of the participant's expense claims for approved activities. The five programs have different requirements related to participants' matching contributions, which FAS refers to as "cost-sharing"; these requirements help ensure that program funds are supplemental. MAP requires participants that receive funding for promotion of generic products to make contributions to the program that are worth at least 10 percent of the funding they receive, although FAS encourages participants to commit in their program applications to contributing more than the minimum required. Eligible contributions include cash; the cost of acquiring materials; and in-kind contributions, such as professional staff time spent on design and execution of activities. The MAP branded products program and FMD require participants to make a minimum contribution of 50 percent. EMP, TASC, and QSP do not require minimum or maximum contributions, but applicants are expected to propose the amount they will contribute. For all five programs, the contribution levels that participants commit to are an important factor FAS considers in approving applications for funding, according to FAS officials. In addition, MAP and FMD participants must certify that program funds supplement, and do not supplant, any private funds, while applications for the other three programs must state why the applicants could not achieve their objectives without government funds. In addition to having different contribution requirements, the five market development programs have different funding levels, objectives, and criteria for approving applications for funding. Market Access Program. MAP is the largest of the five programs, with a current annual authorization of $200 million—about 80 percent of USDA's total annual market development funding. 
In fiscal year 2012, 66 program participants received MAP awards, which ranged from about $17,000 to almost $20 million (see app. II). MAP was established in 1985 to aid in the development, expansion, and maintenance of foreign markets for U.S. agricultural commodities and products by sharing the costs of overseas marketing and promotional activities. A portion of MAP funds is used for promotion of brand-name products by cooperatives or by small, for-profit businesses that apply through state regional trade groups or other MAP participants. In addition, unlike participants in the MAP generic products program, small businesses promoting branded products are subject to a "graduation requirement," which limits them to no more than 5 years of promotions in a given country. The MAP regulations for market development for generic and branded products identify eligible expenditures and criteria that FAS is to consider in approving applications and determining funding levels. Eligible expenditures include, among others, advertising, point-of-sale materials, in-store and food service promotions and product demonstrations, seminars and educational training, participation in trade shows, market research, and independent evaluations and audits. The process for approving applications for MAP funding involves applying a variety of qualitative criteria, including the adequacy of the applicant's plan for addressing market constraints and opportunities, prior export promotion experience, past program results, and the suitability of the applicant's plan for performance measurement. The MAP regulations also list quantitative criteria for determining award amounts for qualified applicants, including the size of the budget request relative to the projected value of exports of the commodity being promoted, the size of the budget request relative to the actual value of exports of the commodity in prior years, and the applicant's proposed contribution level. 
Foreign Market Development Program. FMD, which was established in 1954, provides $34.5 million per year to nonprofit agricultural associations representing U.S. agricultural producers and processors, to create, expand, and maintain long-term export markets primarily for generic bulk commodities. In fiscal year 2012, 24 FMD participants received award amounts ranging from about $16,000 to more than $5 million (see app. II). FMD allows many of the same expenditures as MAP, such as market research and product demonstrations; however, unlike MAP, FMD funds may not be used for activities targeted directly at consumers. The qualitative criteria for approving applications for participation in FMD and the quantitative factors for determining award amounts are also similar to those for MAP. Examples of these quantitative factors include the applicant's contribution level and the value of exports being promoted. Emerging Markets Program. EMP, which was established in 1990, provides up to $10 million annually to U.S. private-sector, university, or government entities for technical assistance activities intended to promote exports of U.S. agricultural commodities and products in emerging markets by improving their food and business systems and reducing potential trade barriers. In 2012, FAS awarded EMP funds to 35 entities, some of which received funding for more than one EMP project, with total awards per participant ranging from $14,000 to about $500,000 (see app. II). Types of projects funded may include feasibility studies, market research, sector assessments, orientation visits, specialized training, business workshops, and similar undertakings. EMP is not intended for projects targeted at end-user consumers. 
Ineligible expenses include branded product promotions (e.g., in-store promotions, restaurant advertising, labeling); advertising, administrative, and operational expenses for trade shows; website development; equipment purchases; and the preparation and printing of brochures, flyers, and posters. The EMP regulations list the criteria FAS is to consider in reviewing applications for funding. Among these criteria are the applicant’s willingness to contribute resources; the degree to which the proposed project is likely to contribute to the development, maintenance, or expansion of U.S. agricultural exports to emerging markets; and a demonstration of how the proposed project will benefit a particular industry as a whole. Individual projects are unlikely to be approved at levels above $500,000, and funding for continuing and substantially similar projects is generally limited to 3 years. Quality Samples Program. QSP, which was established in 1999, currently provides $2 million annually to assist U.S. organizations in supplying commodity samples to potential foreign importers. Projects focus on industry and manufacturing, rather than on end-use consumers, and are intended to promote U.S. food and fiber products. In fiscal year 2012, 12 program participants received QSP funding, in most cases for multiple projects, with total awards per participant ranging from $5,000 to $460,000 (see app. II). QSP funding for individual projects is limited to $75,000, and the projects should be completed within a year of approval by FAS. Eligible expenditures include the sample purchase price and the cost of transporting the samples domestically to the port of export and from there to the foreign port or point-of-entry. Samples provided in a QSP project may not be directly used as part of a retail promotion or supplied directly to consumers. The annual QSP Notice of Funds Availability spells out the criteria that FAS is to use for approving applications for QSP funding. 
These criteria include, among others, the potential for expanding commercial sales in the proposed market; the importer's contribution in terms of handling and processing the sample; the amount of funding requested and the applicant's willingness to contribute resources; and how well the proposal's technical assistance will demonstrate the intended end-use benefit. Technical Assistance for Specialty Crops Program. TASC, which was established in 2002, is currently authorized under the 2008 farm bill, as extended by the American Taxpayer Relief Act of 2012, to provide a maximum of $9 million to U.S. entities, for projects that address sanitary, phytosanitary, and technical barriers that prohibit or limit U.S. specialty crop exports. Any U.S. organization may receive TASC funding, including, but not limited to, U.S. government and state government agencies, nonprofit trade associations, universities, agricultural cooperatives, and private businesses. In 2012, FAS awarded funds to 24 participants, some of whom received funding for multiple projects, and total funding awarded to each participant ranged from about $14,000 to $1.3 million (see app. II). FAS will not consider proposals for TASC funding that exceed $500,000 in a given year. Examples of eligible expenditures include seminars and workshops, study tours, field surveys, development of pest lists, and pest and disease research. Certain types of expenses are not eligible for reimbursement, such as the costs of market research, advertising, and other promotional expenses. The TASC regulations list a variety of criteria that FAS is to consider in evaluating applications for funding, including, among others, the viability and completeness of the proposal, the potential trade impact of the project on issues such as market retention, and the cost and level of contributions from the applicant. 
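The cost-share minimums and per-project funding caps described above differ by program and can be summarized in a small lookup table. The sketch below is illustrative only: the data structure, names, and helper function are hypothetical and are not part of FAS's application systems.

```python
# Illustrative summary of the cost-share and funding-cap rules described
# in the text; the structure and helper function are hypothetical.

PROGRAM_RULES = {
    # min_share: minimum participant contribution as a fraction of funding
    # project_cap: per-project (QSP, EMP) or per-year (TASC) dollar limit
    "MAP (generic)": {"min_share": 0.10, "project_cap": None},
    "MAP (branded)": {"min_share": 0.50, "project_cap": None},
    "FMD":           {"min_share": 0.50, "project_cap": None},
    "EMP":           {"min_share": None, "project_cap": 500_000},  # unlikely above this
    "QSP":           {"min_share": None, "project_cap": 75_000},
    "TASC":          {"min_share": None, "project_cap": 500_000},  # per year
}

def meets_cost_share(program: str, funding: float, contribution: float) -> bool:
    """Check whether a proposed contribution satisfies the program's minimum."""
    min_share = PROGRAM_RULES[program]["min_share"]
    if min_share is None:  # EMP, QSP, TASC set no fixed minimum
        return True
    return contribution >= min_share * funding

# Example: $100,000 in MAP generic funding needs at least $10,000 in contributions.
print(meets_cost_share("MAP (generic)", 100_000, 10_000))  # True
print(meets_cost_share("MAP (branded)", 100_000, 10_000))  # False
```

Note that even where no minimum applies (EMP, QSP, TASC), the text indicates that the contribution an applicant proposes still weighs in FAS's approval decision.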
Participants in USDA’s market development programs use program funds to support a variety of activities intended to raise awareness or acceptance of U.S. agricultural products in overseas markets. MAP and FMD participants, their share of program expenditures, and the countries where they spent the majority of program funds remained relatively consistent from 2007 through 2011. Unlike funds for the other programs, a portion of MAP funds is used for promotion of branded products. In 2011, MAP participants spent about 85 percent of program funding on overseas promotion of generic commodities; more than 600 small companies and seven agricultural cooperatives spent the remaining 15 percent of MAP funding to promote branded products. MAP and FMD participants met or exceeded those programs’ requirements for minimum matching contributions. Appendix III shows EMP, QSP, and TASC participant expenditures in 2011. Market development program participants have used program funds to conduct a variety of activities intended to raise awareness or acceptance of U.S. agricultural products in overseas markets. Participants have also used program funds to address technical barriers that prohibit or limit specialty crop exports. Many program participants receive funding from more than one of the five market development programs. For instance, in fiscal year 2012, 22 of the 66 MAP participants received funds from FMD, and 22 of the 24 FMD participants received funds from MAP. In addition, all 12 QSP participants, 6 of the 24 TASC participants, and 18 of the 35 EMP participants received funds from at least one other program (see app. II for additional details). The following paragraphs present examples of five participants’ use of 2011 program funds for market development efforts in Japan and Mexico. 
Common activities undertaken included, among others, market research, consumer and retail promotion, participation in international trade shows, and reverse trade missions, in which foreign buyers visit U.S. agricultural producers. The American Hardwood Export Council used more than $1.7 million in 2011 MAP and FMD funds for multiple generic product promotional efforts in Japan. According to a council representative, consumers in Japan value wood products from trees that are harvested legally and sustainably, which provides a marketing advantage for American hardwood compared with woods from tropical competitors. We visited furniture stores in Japan displaying the American Hardwood Export Council’s informational handouts, which highlight the sustainability and legality of American hardwoods used in the furniture. The council’s efforts in Japan also include educating designers and architects about the environmental advantage—that is, the smaller carbon “footprint”—of sustainable wood products compared with synthetic material. The council also conducts educational efforts aimed at explaining to Japanese furniture and flooring manufacturers, designers, and architects that discoloration and curving grains are wood characteristics rather than imperfections, because, according to a council representative, straight wood grain has traditionally been favored in Japan. California Table Grape Commission used about $271,000 in 2011 MAP funds for generic product promotional activities in Mexico. Commission representatives informed us that in-store promotional activities are the most effective means of reaching the customer. Promotional activities include in-store grape display competitions as well as promotions with other U.S. fruit groups, such as apples and pears. The commission also used MAP funding to conduct in-store grape sampling demonstrations at major retail chains throughout Mexico to demonstrate the quality of California grapes. 
In 2010 and 2011, FAS authorized the commission to use TASC funds for activities to remove, resolve, or mitigate sanitary, phytosanitary, and related barriers that prohibit or threaten the export of U.S. specialty crops in multiple countries. In 2011, the commission received an allocation of more than $363,000 for a multiyear TASC project to conduct research and provide the ancillary staffing and supplies needed to identify postharvest treatment protocols to eliminate invasive pests in U.S. grape exports. Cotton Council International used more than $2.7 million in MAP and FMD funds in 2011 for generic product promotion activities in Japan. These activities—such as educating Japanese consumers about the benefits and unique characteristics of cotton versus other fibers and conducting advertising, public relations, and promotions—were intended to increase consumer preference for cotton and retailer demand for fabrics made from U.S. cotton. According to representatives from Cotton Council International, increasing demand for clothing made with U.S. cotton in a large consumer market, such as Japan, also increases exports of cotton fiber to other countries that manufacture cotton garments for sale to retail buyers in Japan. The Western United States Agricultural Trade Association (WUSATA) provided a total of about $926,000 in MAP funds for market development activities in Japan and Mexico in 2011. WUSATA, which is one of four state regional trade groups with responsibility for supporting MAP branded product promotion for small businesses, directed more than half of this funding to 34 small businesses to support their branded product promotions in Japan and Mexico. WUSATA allocates the majority of its annual MAP funds to more than 200 small businesses and cooperatives based in 13 western states, according to WUSATA officials. 
WUSATA also uses some of its MAP funds for generic product promotion, primarily for participation in numerous international trade shows and for inbound and outbound trade missions. In addition, WUSATA devotes some MAP funding for generic product promotion and outreach efforts to small businesses to encourage them to consider exporting their products and use assistance from the MAP program. WUSATA officials noted that many businesses are unaware of their products' overseas market potential. The Wine Institute used more than $106,000 in 2011 MAP funds for generic product promotional activities in Mexico, where we observed a Wine Institute-sponsored promotional event in Mexico City to facilitate trade contacts between California wine label representatives and Mexican wine importers. The event was intended to generate publicity for California wines and increase consumer awareness. In Japan, where consumers are most familiar with European wines, the Wine Institute worked with restaurants to promote California wines, according to a Wine Institute representative. The Wine Institute uses some of its MAP funding to support branded product promotion by small businesses. In 2011, the Wine Institute also received a $500,000 TASC allocation for a 5-year project to prepare and file petitions to the Japanese government to allow the sale of U.S. wines containing certain additives that are commonly used by U.S. producers. MAP and FMD participants and their share of market development program expenditures remained relatively consistent from 2007 through 2011, with many of the same participants receiving the majority of funding each year. Expenditures by the 10 participants that spent the largest amounts of funding from the two programs in 2011 represented 54 to 57 percent of those programs' total expenditures in 2007 through 2011 (see table 2). According to FAS officials, these 10 participants also reflect the top 10 U.S. 
exports of agricultural products in 2011, although not in the same rank order. An FAS official noted that MAP and FMD typically provide ongoing support for program participants that seek not only to open new overseas markets but also to maintain export market share. These participants typically receive funding every year. According to FAS officials, although a variety of both qualitative and quantitative factors affect the level of funding provided to participants each year, FAS seeks to provide a stable level of funding to support participants’ multiyear market development strategies. Our analysis of expenditure data from 2002 through 2011 shows that MAP and FMD participants spent market development funds throughout the world, consistently spending more than half of the funds in the same 10 countries. (Table 3 shows MAP and FMD expenditures in these countries in 2011.) The expenditures in these 10 countries accounted for 66 percent, on average, of total MAP and FMD expenditures from 2002 through 2011. According to an FAS official, participants are encouraged to direct program funds to markets where they will have the greatest impact on increasing exports. This official noted that, although participants use MAP and FMD funds in a variety of export markets, the majority of their funds are expended in countries with the largest export markets for U.S. agricultural products. In 2011, about 15 percent of total MAP expenditures were used to promote branded products. The four state regional trade groups and five of the agricultural trade associations in the MAP program allocated a portion of their MAP funding to small businesses to promote branded products in foreign markets. Specifically, these groups allocated a total of more than $22.8 million in MAP funds for branded product promotions in 2011 to 644 small businesses. These small businesses’ average expenditure in 2011 was about $25,000 and their median expenditure was $33,000. 
Small businesses use MAP funding for a variety of activities, including participation in trade shows, buying missions, advertising, and in-store demonstrations and promotions. In addition, seven agricultural cooperatives—Sunkist Growers, Inc.; Blue Diamond Growers; Sunsweet Growers, Inc.; Sun-Maid Growers of California; Welch Foods, Inc.; Ocean Spray Cranberries, Inc.; and Cal-Pure Pistachios, Inc.—spent about $6.4 million in 2011 to promote their own brands. According to FAS officials, cooperatives’ activities to promote their products using a brand name are often similar to the activities of trade associations promoting generic commodities. Table 4 shows the 16 organizations that participated in the MAP branded products program in 2011, the portions of their total MAP expenditures that were used for promotions of branded products, and the numbers of small businesses that the participants’ MAP branded products program expenditures supported. From 2002 through 2011, a total of 2,131 unique small businesses received funding for promotional activities through the MAP branded products program. Many of the small businesses participated in the MAP branded products program for multiple years. Of the 2,131 businesses, 41 percent were involved in the branded products program for 1 year, and 59 percent were involved for more than a year. In 2011, 153 small businesses expended MAP branded program funds for the first time. The MAP branded products program had, on average, about 638 unique small businesses per year and supported activities throughout the world. The largest expenditures of program funding for MAP branded products were directed to 8 of the 10 countries with the largest expenditures of total MAP and FMD funding, shown in table 3. From 2002 through 2011, small businesses in the MAP branded products program reached the 5-year limit for promoting a product in a given country—known as the program’s “graduation requirement”—in 1,121 instances. 
These instances involved 569 businesses in about 80 countries. During this period, 64 businesses used MAP branded products program funding for more than 5 years in a given country in 82 instances. According to FAS, participation in certain international trade shows is exempt from the graduation requirement for the MAP branded products program. In 2011, all MAP and FMD participants met or exceeded the programs' required minimum matching contribution levels. The average contribution level for MAP participants was about 191 percent of MAP expenditures in 2011, and the median contribution level was about 134 percent. The majority of these participants contributed more than 100 percent of their total expenditures. The average contribution level for all FMD participants in 2011 was 316 percent, and the median was 232 percent. Nearly all FMD participants provided matching cash and in-kind contributions of more than 100 percent of total expenditures. Since 2002, MAP participants' total contributions have ranged from 138 percent to 198 percent of their total MAP expenditures, and FMD participants' total contributions have ranged from 123 percent to 192 percent of their total FMD expenditures. Table 5 compares MAP and FMD participants' contributions and expenditures in 2002 through 2011. FAS has established processes to reduce risks of duplication among the five market development programs, to monitor participant expenditures, and to assess program results. FAS's integrated approach includes a unified database and application process to help mitigate risks of duplication. In addition, FAS works with participants in the MAP branded products program to ensure that the small businesses they support are not receiving funds for similar activities from more than one source; our review of 2011 data found no small businesses receiving funds from multiple sources. FAS also conducts regular compliance reviews to verify participants' program expenditures and contributions. 
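The contribution levels cited above are simple ratios of a participant's cash and in-kind contributions to its program expenditures. A brief arithmetic sketch follows; the three participant figures are hypothetical, not FAS data.

```python
# Contribution level = contributions as a percentage of program expenditures.
# The (expenditure, contribution) pairs below are hypothetical examples.
from statistics import mean, median

participants = [
    (1_000_000, 1_500_000),  # spent $1.0M of program funds, contributed $1.5M
    (400_000, 600_000),
    (250_000, 300_000),
]

levels = [100 * contrib / spent for spent, contrib in participants]
# -> [150.0, 150.0, 120.0] percent

print(f"average contribution level: {mean(levels):.0f}%")    # 140%
print(f"median contribution level:  {median(levels):.0f}%")  # 150%

# All three hypothetical participants exceed MAP's 10 percent minimum
# for generic product promotion.
assert all(level >= 10 for level in levels)
```

As the example shows, the average and median can differ noticeably when a few participants contribute at much higher ratios, which is why the report cites both figures.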
FAS guidelines require program participants to submit annual progress reports assessing results for each country where they conduct market development activities. In the progress reports that we reviewed, program participants' performance measures generally reflected requirements in FAS guidelines as well as key attributes of successful performance measurement that we identified in previous GAO reports. However, 149 of the 373 performance measures in the reports that we reviewed did not clearly identify, as the FAS guidelines require, the methodologies used to assess results for each performance measure, making it difficult to verify the reported results. FAS guidelines also require MAP and FMD participants to conduct comprehensive evaluations of their program-funded market development activities when appropriate. FAS integrates its management processes to reduce the risk of duplication among the market development programs, given that many participants receive funding from more than one program. Because MAP and FMD support many of the same goals and allowable expenses and most FMD participants also participate in MAP, the greatest risk of duplication is between these two programs. To reduce this risk, FAS uses an integrated online system, known as the Unified Export Strategy (UES) system, which participants typically use to apply for funding for any of the five market development programs. For example, a participant seeking funding for both FMD and MAP submits a single application through the UES system, explaining how it intends to use both programs to support its foreign market development objectives. FAS's review of these funding applications allows it to prevent duplicative programming, according to FAS officials. FAS officials also noted that only expenses for pre-approved activities may be reimbursed and that the UES system associates each approved activity with the particular program for which it was approved. 
In addition, FAS agricultural attachés based in overseas posts review and comment on the portions of participants' applications that apply to their countries and regions. This provides an additional layer of review that helps prevent duplicative programming, according to FAS officials. FAS also takes steps to ensure that small businesses participating in the MAP branded products program do not obtain funding from more than one source—such as two state regional trade groups—for promotion in the same country. To prevent such duplicative funding, FAS requires that the four state regional trade groups provide the names of all businesses and products participating in their branded promotion programs each year. According to an FAS official, FAS also participates in regular conference calls with the four state regional trade groups, during which they compare lists of small businesses applying for branded products program funding. In addition, FAS circulates a memo annually to the four groups, stating that businesses that promote certain product types should seek funding from specific commodity groups before applying for funds from the state regional trade groups. For example, FAS's memo in 2012 stated that small businesses promoting dairy, livestock, meat, poultry, seafood, and egg products should be referred first to the applicable commodity groups before applying for funding from a state regional trade group. In reviewing expenditure data for MAP branded product promotions for the 2011 program year, we found no instances in which small businesses obtained funding from multiple sources to promote the same products in the same countries. FAS performs financial and compliance reviews to verify that participants claimed reimbursement for expenses appropriately, and it holds participants accountable for maintaining proper documentation of all of their reimbursement claims. 
According to an FAS official, FAS’s independent Compliance Review Branch has a staff of eight officers, including the branch chief, who periodically visit participant sites to verify that all expenses submitted for reimbursement are authorized, reasonable, and documented. These compliance reviews cover all market development programs in which the participant was involved, enabling the compliance officers to verify that all reimbursement claims were paid for pre-approved expenses for each program. The reviews also verify that participants’ reported contributions are properly documented, are based on allowable expenses, and match the amounts that the participants committed to in their market development program applications. In addition, the compliance officers verify that participants that spent $500,000 or more of federal funds from one or more sources in a single year have been audited in accordance with Office of Management and Budget Circular A-133. Our review of FAS documentation for five program participants showed that FAS conducted compliance reviews of these participants between May 2011 and March 2012. According to the Compliance Review Branch Chief, compliance officers typically conduct these reviews every 3 years for the smaller participants and verify 100 percent of those participants’ expenses. Compliance officers conduct reviews more frequently for the larger participants because of the volume of reimbursement claims involved, and they may review only a sample of those participants’ expenses. Participants must return to FAS any reimbursements for claims found not to be allowable. The Compliance Review Branch Chief stated that, although participants have the right to a hearing to contest compliance review results, they generally repay the rejected claims within an agreed time frame. The Chief also noted that, because participants typically apply for future funding from the programs, they have an incentive to comply with FAS requirements. 
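The audit-threshold verification described above amounts to a simple filter. The following is a minimal sketch, with hypothetical participant records and field names; it is not FAS's actual review process.

```python
# OMB Circular A-133 audit threshold cited in the report.
A133_THRESHOLD = 500_000

# Hypothetical participant records: annual federal spending and
# whether an A-133 audit is on file.
participants = [
    {"name": "Group A", "federal_spend": 720_000, "a133_audit": True},
    {"name": "Group B", "federal_spend": 450_000, "a133_audit": False},  # below threshold
    {"name": "Group C", "federal_spend": 610_000, "a133_audit": False},  # needs follow-up
]

# Flag participants at or above the threshold with no audit on file.
needs_followup = [p["name"] for p in participants
                  if p["federal_spend"] >= A133_THRESHOLD and not p["a133_audit"]]
print("missing required A-133 audit:", needs_followup)  # ['Group C']
```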
The performance measures in the progress reports that we reviewed generally met criteria based on FAS guidance for progress reports and key attributes of successful performance measures that we previously identified. However, some participants’ annual progress reports did not identify the approaches and information sources used to assess activity results for each performance measure, as FAS guidelines require. FAS guidelines require MAP and FMD participants to submit, within 6 months after the program year ends, annual country progress reports identifying market challenges, describing activities over the past year, and stating measurable goals and results of their performance. These reports enable the participants and FAS to assess the participants’ progress in achieving their stated goals each year. In addition, FAS considers participants’ progress reports when reviewing their MAP and FMD funding applications for subsequent years. FAS guidelines require, among other things, that MAP and FMD participants’ annual progress reports contain the following elements to demonstrate how their market development activities are relevant and their impact is measured. The reports should identify “constraints”—that is, obstacles to achieving stated objectives—and “opportunities,” which participants can utilize to achieve their objectives in the markets where they operate. The reports should also provide the performance measures that will be used to assess each activity’s impact on these constraints and opportunities. (See the text box for an example, from FAS guidelines, of a constraint and its related performance measures.) Further, the reports should show, for each performance measure, an associated baseline measure, a stated goal for the given year, and a result. Finally, the reports should identify the methodology that will be used to assess progress toward the goal associated with each performance measure. 
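The required report elements can be represented as a simple completeness check. The sketch below uses a data structure and field names of my own invention to illustrate the idea; it is not an FAS tool.

```python
# Hypothetical structure for one progress-report entry, with the
# elements FAS guidelines require (field names are my own).
MEASURE_FIELDS = ("name", "baseline", "goal", "result", "methodology")

def missing_fields(entry):
    """List the required elements absent from a report entry."""
    gaps = []
    if not entry.get("constraint_or_opportunity"):
        gaps.append("constraint_or_opportunity")
    for m in entry.get("performance_measures", []):
        gaps += [f"measure '{m.get('name', '?')}': {f}"
                 for f in MEASURE_FIELDS if not m.get(f)]
    return gaps

entry = {
    "constraint_or_opportunity": "Retailers unaware of new products",
    "performance_measures": [
        {"name": "retailers carrying products", "baseline": 3,
         "goal": 5, "result": 6, "methodology": "retail audit"},
        {"name": "new products sampled", "baseline": 10,
         "goal": 15, "result": 12, "methodology": None},  # missing methodology
    ],
}
print(missing_fields(entry))  # → ["measure 'new products sampled': methodology"]
```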
FAS Example of a Constraint and Associated Performance Measures for a Hypothetical Seafood Group

Constraint: new products for […], and their availability and characteristics…are not known by the three major retailers. Also, […] are not aware of their potential consumer interest in these species and how they can increase their profits by introducing them.

Performance measures associated with the constraint:
- Number of retailers carrying targeted regional U.S. products on a regular basis
- Number of new products sampled by targeted retailers
- Number of products carried on a regular basis by targeted retailers

FAS staff in Washington, D.C., and at applicable overseas posts review participants’ annual progress reports as part of the annual application review, according to FAS officials and participant representatives. FAS staff provide feedback to participants about their reports both informally, through e-mail and telephone, and formally, through feedback letters. For example, one feedback letter from FAS that we reviewed instructed the participant to express its objectives more concisely and to develop performance measures that track the desired outcome rather than the participant’s activities. FAS officials noted that their reviews of funding applications consider whether participants adjusted their market development strategies on the basis of results they reported for the previous year. Two Agricultural Trade Officers told us that, in addition to reviewing the reports, they have provided participants with support and feedback regarding the identification of constraints and opportunities and development of performance measures. FAS also provides training to help participants identify constraints and opportunities and develop performance measures that meet FAS’s requirements. According to FAS officials, biannual conferences of program participants generally include workshops on program evaluation, which in the past have emphasized developing meaningful performance measures. 
One of the Agricultural Trade Officers whom we interviewed reported having conducted a workshop that reviewed the UES process and discussed key definitions and criteria for identifying constraints and opportunities and for developing performance measures. The country progress reports that we reviewed generally complied with criteria based on selected FAS guidelines for preparing progress reports and key attributes of successful performance measurement that we had previously identified. In general, the 56 reports by MAP and FMD participants that we reviewed met five of six criteria we used for our analysis. However, 149 of the 373 performance measures in the sampled reports (40 percent) did not identify the methodologies used to assess results, as FAS guidelines require. Following are details of our analysis of the performance measures in the progress reports we reviewed, using these six criteria.

1. Constraint or opportunity has at least one outcome measure. For each constraint or opportunity shown in a progress report, FAS guidelines require that at least one performance measure be outcome oriented rather than output oriented. FAS describes an outcome as showing changed behavior, with an emphasis on what was achieved and how participant activities have affected attitudes and consumer habits in the targeted market. In contrast, FAS defines an output as showing what was done at the activity level (e.g., two seminars conducted, newsletter sent to 1,000 addressees). The progress reports that we reviewed used both outcome and output measures to determine the impact of activities and to address the identified constraints and opportunities. At least one outcome measure was associated with 105 of the 115 constraints and opportunities in the sample (91 percent), and outcome measures constituted 260 of the 378 performance measures (69 percent).

2. Performance measure is clear. We assessed the clarity of the performance measures. 
Specifically, we assessed whether the measure’s name and definition were clearly stated and consistent with the numerical goal used to calculate it—a key attribute for successful performance measures that we previously identified. We found that 356 of the 378 performance measures (94 percent) in the progress reports that we reviewed met this criterion.

3. Performance measure is aligned with related constraint or opportunity. To ensure alignment of performance measures with the constraints or opportunities they address, FAS guidelines state that each measure must directly affect the related constraint or opportunity, must reflect the scope of activity and progress in the market, and must be within the ability of the participants to influence. In the progress reports that we reviewed, 330 of the 378 performance measures (87 percent) were aligned with the related constraint or opportunity, and 110 of the 118 constraints and opportunities each had at least one aligned performance measure. However, 48 (13 percent) of the performance measures were not aligned with a constraint or opportunity, indicating a risk that those participants might measure incorrectly, or fail to measure, the impact of their activities.

4. Performance measure is quantifiable. FAS guidelines require that each performance measure be quantifiable. All 378 (100 percent) of the measures in the sample of progress reports we reviewed were quantifiable, with numerical values. When a goal is measurable, FAS is better able to assess whether the participant’s performance is meeting expectations.

5. Performance measure has associated baselines. FAS guidelines state that each performance measure should have an associated baseline. We found that 359 of the 375 measures in our sample (96 percent) had associated baselines, indicating that they were based on an initial market review and that the performance measures were consistent from year to year. 
However, we also found that the baselines did not appear to inform the goals for subsequent years. For example, one participant had a baseline of 105 buyer/seller introductions but set a goal of 35 for the following year. The result for that year—164—not only exceeded the baseline but also amounted to more than 468 percent of the goal, calling into question whether the baseline was appropriate for the performance measure.

6. Performance measure has an identified methodology. FAS guidelines for reviewing country progress reports state that the reports must identify the methodologies used to assess results for each performance measure. The reports that we reviewed identified a methodology—that is, an information source, an approach for assessing results, or both—for 224 of the 373 performance measures (60 percent). For example, one progress report identified “[r]esults gathered from consumer surveys during in-store promotions” as the information source and the approach used to assess results of activities intended to increase consumer awareness. Another report identified the information source and the approach as “2009 results based on 334 informal customer surveys conducted throughout the year” and explained how certain results were averaged to provide aggregated numbers. For the 149 performance measures with no identified methodology (40 percent), it would be difficult for FAS to determine the reliability of the reported results. Table 6 summarizes the results of our analysis of the sample of country progress reports that we reviewed. In addition, a comparison of participants’ measurable goals and reported results in the progress reports that we reviewed showed that those participants met or exceeded a combined total of 222 of 357 (62 percent) of their goals. However, the extent to which participants met the goals varied widely; some participants exceeded a goal by more than 1,000 percent, while others attained less than 10 percent of the goal. 
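The goal-attainment comparison above reduces to a simple ratio. The sketch below works through the buyer/seller-introduction figures cited in the text.

```python
def attainment_pct(result, goal):
    """Reported result expressed as a percentage of the stated goal."""
    return 100.0 * result / goal

# Figures from the buyer/seller-introduction example: baseline 105,
# goal 35 for the following year, result 164.
baseline, goal, result = 105, 35, 164
pct = attainment_pct(result, goal)
print(f"result was {pct:.0f}% of the goal")  # more than 468% of the goal
print("exceeded baseline:", result > baseline)
```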
FAS guidance requires that participants monitor their progress relative to their stated goals but has not established requirements for whether, when, or how participants should meet their goals. According to FAS officials, narratives in the progress reports should address whether and why actual results did or did not meet goals and what changes are needed to address any disparities. FAS officials noted that if an FAS marketing specialist reviewing a funding application notices wide discrepancies between the participant’s goals and results for the previous year, the specialist will collaborate with the participant to identify lessons that can be learned and will look for corresponding changes in the participant’s strategy for the coming year. FAS requires that program participants conduct evaluations of their program activities when appropriate or required by FAS. The current MAP regulation defines a program evaluation as a review of the participant’s entire program or an appropriate portion of the program as agreed to by the participant and FAS. These reviews can range from external, third-party evaluations, such as cost-benefit analyses, to participants’ internal reevaluations of their approaches to market development activities. FAS officials reported that they received a combined total of 71 third-party program evaluations from 43 participants in 2010 and 2011. Additionally, eight of 10 U.S. agricultural export promotion groups surveyed by an industry contractor reported that they conducted country, regional, or global evaluations during the last 3 years. Because the program evaluations are conducted on a case-by-case basis and may cover only a portion of a participant’s market development activities (e.g., market development efforts in 1 of 20 countries where a participant conducts its activities), it is difficult to determine what portion of all market development efforts is assessed through these evaluations. 
One FAS contractor who had previously conducted third-party evaluations for MAP and FMD participants told us that factors such as the size of the participants and the value the participant places on monitoring and evaluation affected the frequency, depth, and usefulness of evaluations that his firm had conducted. A 2007 cost-benefit analysis of MAP and FMD, commissioned by FAS, found that the programs increased U.S. agricultural exports and benefited the U.S. economy. Overall, the study asserted that the government’s expenditures for the two programs had resulted in greater increases in U.S. agricultural exports and greater benefit to the U.S. economy than would have occurred without the expenditures. However, the study’s two econometric models estimating the programs’ benefits have methodological limitations that may affect the accuracy of the estimates. First, the model used to estimate changes in market share omitted important variables, and, second, a sensitivity analysis of key assumptions was not conducted for that and another model that the study used. FAS officials reported that they plan to commission a new cost-benefit analysis in 2014 but indicated that they have not yet identified the methodologies that the new analysis will use. The 2007 cost-benefit analysis, conducted by Global Insight, Inc., found that MAP and FMD had positive effects on agricultural export activities. The study also asserted that without public-sector funding, the private sector would underinvest in agricultural market development, negatively affecting the U.S. economy—an outcome known as market failure. The study used data from fiscal years 2002 through 2006 to estimate the economic effects of FAS’s program expenditures under the 2002 farm bill and of FAS’s possible expenditures under a hypothetical 2007 farm bill. Following are key estimates from the 2007 study. 
The study estimated that the increased market promotion and development funding authorized for MAP and FMD in the 2002 farm bill—almost doubling from roughly $125 million in fiscal year 2001, before the bill’s enactment, to approximately $234 million in fiscal year 2006—raised the U.S. share of global agricultural exports from 18 percent to 19 percent, equivalent to a $3.8 billion increase in trade. The study estimated that as a result, economic welfare increased by $828 million. The study estimated that if annual MAP and FMD spending under the hypothetical 2007 farm bill in fiscal years 2007 through 2015 were equivalent to spending under the 2002 farm bill in fiscal year 2006, the U.S. share of global agricultural exports would rise from 19 percent in 2007 to 20.9 percent in 2015—equal to $84 billion in U.S. exports in 2015. If spending under the hypothetical 2007 farm bill increased by 50 percent over the 2006 level, U.S. exports would increase to $86.4 billion in 2015 and economic welfare would increase by $740 million. On the other hand, the study suggested that if the hypothetical bill did not authorize funding for the two programs, U.S. exports would grow to $75.5 billion by 2015 and economic welfare would decrease by $1.1 billion. The study found that market development promotions for certain U.S. high-value commodities have a positive effect—known as a spillover effect—on exports of other U.S. high-value commodities. The study estimated that every dollar spent for agricultural market development under the 2002 farm bill increased economic welfare by $5.20; under the hypothetical 2007 bill, every dollar would increase economic welfare by $4.10. In contrast, eliminating the funding would reduce economic welfare by $4.30 per eliminated dollar, resulting in a $1.1 billion loss to the U.S. economy. Two models that Global Insight used to estimate the effects of MAP and FMD on the U.S. 
economy have methodological limitations that may reduce the accuracy of their estimates of the programs’ benefits. As with any study using economic models, the lack of data forces researchers to make certain assumptions, and the resulting estimates are affected by the methodologies chosen and the assumptions used. In general, the 2007 study assumes that FAS program expenditures lead to an increase in private-sector expenditures. To estimate the economic effects of the program assistance, the 2007 study employed two economic models commonly used in cost-benefit analysis: (1) a market share model to estimate the effect of expenditures under the 2002 farm bill and the hypothetical 2007 farm bill on the U.S. agricultural market share of global markets and (2) a spillover effect model to estimate increases in U.S. agricultural exports due to promotions of other U.S. exported commodities. However, these models have limitations that may affect their ability to accurately estimate the economic benefits of MAP and FMD. FAS officials reported that they plan to commission a new cost-benefit analysis in fiscal year 2014, but they indicated that they have not yet identified the methodologies that the new study will use. To examine the effect of the 2002 farm bill and the potential effect of the hypothetical 2007 farm bill, the 2007 study used a U.S. market share model to simulate the market share for U.S. high-value and bulk commodities in global markets from 1975 through 2004. However, the model has limitations related to its exclusion of important variables and its lack of a sensitivity analysis of key assumptions. The 2007 study’s use of the market share model controlled for four variables across each year: (1) the U.S. market share in the previous year, (2) the currency exchange rate, (3) combined FAS program expenditures and participants’ contributions over time, and (4) a time trend to account for any omitted variables. 
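As an illustration of this four-variable specification, the following is a minimal ordinary-least-squares sketch on synthetic data. It is not the study's actual data, model, or estimation code; every parameter value below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 30  # years of synthetic data (the study's sample ran 1975-2004)

# Invented regressors: an exchange rate index, combined program
# expenditures plus participant contributions, and a linear time trend.
exch = 100 + rng.normal(0, 5, T)
spend = np.linspace(150, 250, T) + rng.normal(0, 10, T)
trend = np.arange(T, dtype=float)

# Generate a market share series from a known data-generating process.
share = np.empty(T)
share[0] = 18.0
for t in range(1, T):
    share[t] = (0.7 * share[t - 1] - 0.01 * exch[t]
                + 0.02 * spend[t] + 0.01 * trend[t]
                + rng.normal(0, 0.1) + 3.0)

# Four-variable OLS: share_t regressed on lagged share, exchange rate,
# expenditures, and trend (plus an intercept).
y = share[1:]
X = np.column_stack([np.ones(T - 1), share[:-1], exch[1:], spend[1:], trend[1:]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
names = ["intercept", "lagged share", "exchange rate", "expenditures", "trend"]
for name, b in zip(names, beta):
    print(f"{name}: {b:+.3f}")
```

The report's criticism applies directly to a specification like this: because the trend term is only a proxy, omitted industry-specific variables (commodity prices, production volumes, number of competitors) can bias the estimated coefficients.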
However, the model excludes some variables that could be important for determining the U.S. market share—in particular, industry-specific variables such as commodity prices, production volumes, and number of export competitors. Although the study states that a linear trend variable is included as a proxy for missing variables in the model, this variable cannot be expected to capture the full effects of such industry-specific variables. By limiting the model to the four variables, the study may bias the estimated effects of the included variables, misstating both their magnitude and their statistical significance for U.S. market shares. The 2007 study used the market share model to examine the possible effects of the hypothetical 2007 farm bill under three scenarios.

1. The first scenario assumed that FAS program expenditures and participant contributions would remain constant. On the basis of these assumptions, the study predicted that U.S. exports would increase from $65 billion in 2006 to $84 billion in 2015.

2. The second scenario assumed that FAS would increase program expenditures and that participants would increase their contributions gradually, spending 50 percent more by 2012 than in 2007. On the basis of these assumptions, the study predicted that U.S. exports would increase from $65 billion in 2006 to about $86 billion in 2015.

3. The third scenario assumed that FAS would immediately eliminate program expenditures in 2008 and that, as a result, participants would spend less of their own resources on market development, gradually decreasing their spending by 50 percent by 2012 compared with 2007. On the basis of these assumptions, the study predicted that U.S. exports would increase from $65 billion in 2006 to $75.5 billion in 2015. 
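A one-way sensitivity check of the third scenario's key assumption could be sketched as follows. This is a stylized illustration, not the study's model: the growth rate and the sensitivity parameter are invented values, chosen only so that the endpoints roughly match the study's $84 billion (no cut) and $75.5 billion (50 percent cut) projections.

```python
# Illustrative one-way sensitivity sketch (not the study's model):
# vary the assumed cut in participants' own spending after FAS funding
# is eliminated, holding all other assumptions constant.
BASE_EXPORTS = 65.0       # $ billions in 2006, per the study
BASELINE_GROWTH = 0.029   # assumed annual growth with spending intact
SENSITIVITY = 0.42        # assumed share of growth lost at a 50% cut

def exports_2015(spending_cut):
    """Stylized 2006-to-2015 projection for a spending cut in [0, 0.5]."""
    growth = BASELINE_GROWTH * (1 - SENSITIVITY * spending_cut / 0.5)
    return BASE_EXPORTS * (1 + growth) ** 9  # compound over 9 years

for cut in (0.0, 0.25, 0.50):
    print(f"{cut:.0%} spending cut -> ${exports_2015(cut):.1f}B in 2015")
```

Running the projection over a range of assumed cuts, rather than only the 50 percent case, is the kind of one-way sensitivity analysis that best practices call for and that the study omitted.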
Following Office of Management and Budget guidelines for conducting a cost-benefit analysis, the study included a sensitivity analysis of the market share model’s predictions, assessing the level of confidence in the predictions with a 95 percent confidence interval. However, the study did not include a sensitivity analysis of the third scenario’s assumption regarding participants’ response to the elimination of FAS funding. In particular, the study did not examine the effects that a range of participants’ responses to the elimination of FAS funding would have on the U.S. market share. That is, the study did not consider whether participants’ market development spending would remain constant, would decrease at lower rates than the 50 percent that the study assumed, or would increase to the level of the eliminated FAS expenditures. Best practices for cost estimation dictate the inclusion of a sensitivity analysis to ascertain the effect of the assumption on the results. For a sensitivity analysis to reveal the effect of a changed assumption on a cost estimate, the analysis must examine the effect of changing one assumption while holding all other assumptions and variables constant. In addition, the study did not provide any insight or data to support the assumption that participants would reduce their spending if FAS program funding were eliminated. The 2007 study used a spillover effect model to test the assumption that increasing the market promotion of one U.S. commodity has a positive effect on exports of other U.S. commodities. The study found that the effects of the relationships between commodity promotions and exports ranged from positive to negative and varied in magnitude but that, overall, the positive effects outweighed the negative effects. The model examined the relationship between U.S. market promotions and exports for four high-value products—almonds, apples, grapes, and wine—for the period 1985 through 2004. For example, increased U.S. 
promotion of almonds led to increased U.S. exports of grapes but to decreased exports of wine and had no effect on apple exports. Conversely, increased U.S. promotion of grapes led to decreased U.S. exports of almonds but to increased exports of apples and wine. Although the study estimated the size of the spillover effect, it did not include a sensitivity analysis of a key assumption used for this estimate. The study assumed that some type of market development as a result of U.S. market promotions occurs in 64 percent of all markets for U.S. exports. To estimate the spillover effect of FAS market promotions, the model used this assumption, unsupported by data or industry evidence, as well as the estimated effects of promotions of one commodity on the exported quantities of other commodities. According to the study, the spillover effect of FAS market promotions ranges from 24 percent to 54 percent of the total growth in overall market development. However, the study did not include a sensitivity analysis of the effect of changing the assumption that development occurs in 64 percent of markets as a result of U.S. market promotions. That is, the study did not examine the extent to which assuming a higher or lower percentage of market development would change the magnitude of the estimated spillover effect.

For many years, MAP and FMD—the two programs that receive most of USDA’s market development funding—have provided continuing assistance to an established pool of agricultural trade associations, primarily to promote generic commodities overseas. FAS has developed a performance monitoring framework in which FMD and MAP participants are expected to develop measurable objectives—that is, constraints and opportunities—linked to performance measures that allow them to annually compare their results with established baselines and goals. 
Participants generally followed this framework successfully; however, many of the participants’ annual country progress reports that we reviewed did not identify, as FAS guidelines require, the methodologies used to assess results for each performance measure. These gaps limit FAS’s ability to determine the reliability of program results reported by participants and to accurately assess participants’ progress and success in achieving program objectives. The 2007 cost-benefit analysis that FAS commissioned asserted that MAP and FMD have increased U.S. exports and benefited the U.S. economy. However, one econometric model that the study used to estimate the programs’ effects excluded variables that could have a significant impact on U.S. market shares. As a result, the model may bias the estimates of the variables that it included. In addition, because another model that the study used did not include a sensitivity analysis of certain assumptions, it is not possible to determine the degree to which those assumptions would affect the model’s results. For example, one scenario assumed that if FAS suddenly eliminated all MAP and FMD expenditures, participants would reduce their own spending on market development by 50 percent. However, the study does not examine the effects of participants’ other possible responses to the elimination of FAS expenditures, such as maintaining their spending or increasing it to compensate for the eliminated FAS funds. Accurate cost-benefit analyses help decision makers determine how best to allocate program funding and provide a better picture of the potential effect on U.S. exports and the economy if funding is increased or decreased. 
We recommend that the Secretary of Agriculture direct FAS to take the following three actions:

To improve MAP and FMD participants’ annual reporting of the results of their market development activities,
- use appropriate means to emphasize the importance of participants’ identifying the methodologies used to assess results for each performance measure in their annual country progress reports.

To improve the accuracy of future cost-benefit analyses of FAS’s market development programs,
- ensure that any econometric model used for the cost-benefit analysis, such as the market share model, includes industry-specific variables that could have a significant role in determining the U.S. market share—for example, commodity prices, production volumes, and number of export competitors; and
- conduct a sensitivity analysis, in accordance with best practices for cost estimates, of the key assumptions that are applied in any economic models used in the cost-benefit analysis, such as the market share model and spillover effect model.

USDA provided written comments about a draft version of this report, concurring with our findings and recommendations (see app. VI for a copy of these comments). USDA also provided technical comments, which we incorporated as appropriate. As agreed with your office, we plan no further distribution until 30 days from the report date. At that time, we will send a copy to USDA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4802 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VII.

We were asked to review several aspects of the U.S. 
Department of Agriculture’s (USDA) five market development programs, which USDA’s Foreign Agricultural Service (FAS) administers. This report (1) describes participation and expenditures in these market development programs, particularly the Market Access Program (MAP) and Foreign Market Development Program (FMD); (2) examines FAS’s management and monitoring of its market development programs; and (3) assesses FAS’s cost-benefit analysis of MAP’s and FMD’s impact on the U.S. economy. Because MAP and FMD receive most of USDA’s market development funding, we focused our review primarily on program participation in those two programs. For our first and second objectives, we selected five program participants as case studies: the American Hardwood Export Council, the California Table Grape Commission, the Wine Institute, Cotton Council International, and the Western United States Agricultural Trade Association. To select the four commodity group participants, we examined market development program expenditure data for 2002 through 2011. We chose participants that used more than one market development program and had spent a significant amount of their market development funds in the two countries—Japan and Mexico—that we selected for our review. The four groups consisted of at least one bulk commodity group, one nonfood commodity group, one high-value commodity group, and one horticultural group. All four were MAP participants, and two were also FMD participants. In addition, we included one of four state regional trade groups in our sample, because these groups allocate the majority of MAP funds to small businesses for branded product promotion. We reviewed additional documents, including agreement letters, strategic plans, country progress reports, program evaluations, and other information provided by FAS and the participants. We also interviewed U.S.-based headquarters staff from each of the five organizations. 
Additionally, we conducted fieldwork in Japan and Mexico, interviewing FAS staff in the Agricultural Trade Offices in Tokyo, Osaka, and Mexico City, as well as representatives of program participants in each country. We also observed several trade promotion activities and visited retailers where U.S. products were sold. We selected Japan and Mexico because they are in different geographic regions and are among the countries where program participants have spent the largest shares of USDA market development funds. In addition, for all three of our objectives, we interviewed FAS staff in headquarters, contractors that FAS uses for aspects of its market promotion programs, and subject matter experts in the field of trade economics. We also reviewed relevant laws, regulations, and FAS guidelines. To describe agricultural groups’ participation in FAS’s five market development programs and the programs’ expenditures from 2002 through 2011—our first objective—we reviewed program participants’ applications, country progress reports, and program evaluations to identify examples of the activities that participants undertook with market development funding. We also analyzed expenditure data for the five programs from 2002 through 2011 to understand the nature of program participation and to identify program participants with the largest expenditures as well as changes in participants’ expenditures. We reviewed MAP and FMD expenditure data by country to determine where participants spent the largest amounts of program funding. Further, we compared participants’ matching contributions with their expenditure levels to determine whether participants were meeting program cost-sharing requirements. In addition, we reviewed expenditure data for the MAP branded products program for 2002 through 2011 to determine the scope of the branded products program, including the number of small businesses participating and the number affected by the 5-year graduation requirement. 
To assess the reliability of market development program expenditure and contribution data that FAS provided, we conducted electronic and manual data testing and held interviews with knowledgeable USDA staff members. On the basis of our assessment of the data and our interviews with the staff members, the data appear to be reliable for the purposes of this report. To examine FAS’s management and monitoring of the market development programs—part of our second objective—we discussed with FAS officials the agency’s management practices and its use of the Unified Export Strategy (UES) system, which participants use to apply for multiple programs and which reduces the risks of overlap and duplication among the five programs. We also met with FAS’s Compliance Review Branch to review FAS’s process for verifying participants’ expenditures and contributions for all programs in which they participated. In addition, to verify that small businesses participating in the MAP branded products program did not receive MAP funds from more than one MAP participant for promotion in the same country, we reviewed expenditure data for the MAP branded products program for 2011. We examined all businesses that had spent MAP funds, the countries where they spent the funds, and the MAP participants that allocated these funds to the businesses through the branded products program. We identified, and reviewed with FAS, any instances in which a business may have spent, in a single country, funds received from two MAP participants. To determine whether MAP and FMD participants were assessing results in accordance with FAS performance monitoring guidelines—also part of our second objective—we developed an assessment tool to analyze a sample of participants’ annual country progress reports. We selected a random but nongeneralizable sample of 20 participants in MAP and FMD, and we identified countries where these participants spent more than $5,000 in 3 consecutive fiscal years, 2008 through 2010. 
We requested the country progress reports for all 20 participants for each of the 3 years—a total of 60 progress reports. After requesting the 60 reports, we removed four groups after being informed that those groups use other forms of reporting; we also removed two state regional trade groups. After we requested additional randomly selected progress reports, our final sample totaled 56 reports. Where progress reports covered a region rather than a specific country, we used regional data and country-specific data as available. We selected criteria, based on FAS guidelines for developing the progress reports and key attributes of successful performance measurement that we previously identified, to assess constraints and performance measures in the reports that we reviewed. These criteria are as follows: (1) each constraint has at least one outcome measure; (2) the performance measure is clear; (3) the performance measure is aligned with the related constraint or opportunity; (4) the performance measure is quantifiable; (5) the performance measure has an associated baseline; and (6) the performance measure has an identified methodology. We also compared the goals and results reported for each performance measure to determine the extent to which the goals were met and the results were reported. We recorded each constraint and performance measure from the country progress reports we reviewed, and two reviewers separately coded each criterion. The two analyses were then reconciled to produce a final result. In addition, we requested from FAS all third-party program evaluations associated with our random sample of participants and countries in the 3-year time frame. FAS informed us that the evaluations were too difficult to identify using these parameters and provided a list of 71 evaluations that 43 participants, including 13 of those in our sample, submitted in 2010 and 2011. 
We did not assess the quality of the evaluations, because such an assessment was beyond the scope of this engagement. To assess FAS’s cost-benefit analysis of MAP’s and FMD’s impact on the U.S. economy—our third objective—we analyzed studies of MAP and FMD commissioned by FAS and published in 2007 and 2010, respectively, by Global Insight, Inc. We conducted structured interviews with the studies’ authors, agency officials, and academics involved in the studies. We also reviewed relevant research on market development programs. In addition, we reviewed Office of Management and Budget guidelines for conducting cost-benefit analyses and interviewed officials from that office. We evaluated the studies on the basis of GAO’s cost estimation guide, prior related GAO work, and internal expertise. We conducted this performance audit from August 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Many organizations participate in more than one of the five U.S. Department of Agriculture (USDA) market development programs. FAS allocates the majority of USDA’s market development funding for the Market Access Program (MAP), which has the largest number of participants. Table 7 shows the 103 program participants and their award amounts in fiscal year 2012, in descending order of total award amount. The table does not show small businesses that received a share of MAP funding indirectly for branded product promotion. 
Tables 8 through 10 show USDA market development program participants that spent the largest amounts of funds provided by the Emerging Markets Program (EMP), Quality Samples Program (QSP), and Technical Assistance for Specialty Crops Program (TASC) in 2011. Table 11 shows the countries where the largest amounts of funding for the three programs were spent in 2011. The 2007 study and 2010 update contended that three market failures lead private firms to underinvest in export promotion compared with the socially optimal level. According to the studies, these failures justify U.S. government intervention through the U.S. Department of Agriculture market development programs. Uncertain funding. Because of uncertainty about annual U.S. government allocations of market development funding, private-sector participants tend to develop short-term (i.e., 1-year) plans that do not take into account the long-term effects of market development. For example, market development expenditures for high-value and bulk commodities have a lagged impact of 7 and 3 years, respectively, so that expenditures in a single year accrue benefits over several years. As a result, private-sector participants tend to underfund market development activities relative to the socially optimal level. Spillover effect. Market development for one commodity may also increase demand for other commodities—a result known as a spillover effect. For example, almond promotions increase grape exports (but not vice versa). Unless such commodities are “co-branded” and marketed together, exporters do not see the spillover effect as a promotion incentive and thus tend to underpromote their own products compared with socially optimal levels. Indirect effect. 
Related to the first two sources of market failure, less than optimal amounts of promotion, and therefore of exports, will lead—in what is known as the indirect effect—to less than socially optimal operating levels in other segments of the farm economy and the general economy. To the extent that exports benefit other sectors of the general economy, such as by increasing growers’ prices and government tax revenues, there is a compelling public interest in helping firms to develop new export markets for U.S. agricultural commodities. The 2007 cost-benefit analysis that the Foreign Agricultural Service (FAS) commissioned used a computable general equilibrium model, in addition to a market share model and a spillover effect model, to examine the economic impacts of FAS’s Market Access Program and the Foreign Market Development Program. A computable general equilibrium model is a system of equations in which all economic relationships are modeled simultaneously. For example, the price of a good depends on the price of all other input goods, profits, and wages, and vice versa, assuming full employment in the economy. Compared with the market share or spillover effect model, the computable general equilibrium model includes a more comprehensive list of relevant variables while allowing more parameters to vary. Using this model, the 2007 study found the following key results: The FAS program and participant promotion expenditures under the 2002 farm bill presented an economic-welfare-to-government-expense ratio of 10.3:1 and an economic-welfare-to-total-expense ratio of 5.2:1. This result translated into an increase in farm cash receipts of $2.2 billion. The FAS program and participant promotion expenditures under a hypothetical 2007 farm bill presented a potential economic-welfare-to-government-expense ratio of 8.2:1 and an economic-welfare-to-total-expense ratio of 4.1:1, with a total economic benefit of $740 million. 
In addition, for the period 2008-2012, farm revenues would equal $256 billion under the hypothetical 2007 farm bill and would change by $2.4 billion and -$4.2 billion under scenarios that increased or eliminated program funding, respectively. In addition to the individual named above, Christine Broderick (Assistant Director), Pedro Almoguera, Mason Thorpe Calhoun, Howard Cott, Kathryn Crosby, Martin De Alteriis, Justin Fisher, Reid Lowe, Grace Lui, and Vanessa Taylor made significant contributions to this report.
USDA administers five programs to assist U.S. agricultural industry efforts to build, maintain, and expand overseas markets. However, members of Congress continue to debate the level of funding for this assistance and its impact on agricultural exports. USDA provides about $250 million annually for the five market development programs. MAP and FMD received about 90 percent of this funding in fiscal year 2012, with allocations of $200 million and $34.5 million, respectively. GAO was asked to review USDA's market development programs. This report (1) describes participation and expenditures in these market development programs, particularly MAP and FMD; (2) examines FAS's management and monitoring of its market development programs; and (3) assesses FAS's cost-benefit analysis of MAP's and FMD's impact on the U.S. economy. GAO analyzed USDA expenditure data from 2002 through 2011 and reviewed key agency and program participant documents. GAO also assessed a sample of participants' annual progress reports as well as economic cost-benefit analyses of MAP and FMD commissioned by USDA. Market development program participants use program funds to support a variety of activities intended to raise awareness or acceptance of U.S. agricultural products in overseas markets. Common activities include, among others, market research, consumer and retail promotion, and participation in international trade shows. GAO's analysis of expenditure data from 2007 through 2011 shows that participation in the Market Access Program (MAP) and the Foreign Market Development Program (FMD)—the largest of the five market development programs—remained generally consistent during that period. The program participants with the largest shares of funding and the countries where the largest shares of funds were spent also remained relatively consistent. 
Expenditure data for 2011 show that MAP and FMD participants met or exceeded FAS contribution requirements that they match minimum percentages of the program funding they receive. Unlike funding for the other programs, a portion of MAP funds is used for promotion of branded products. In 2011, MAP participants spent about 85 percent of program funding on overseas promotion of generic commodities. More than 600 small companies and seven agricultural cooperatives spent the remaining 15 percent of MAP funding to promote branded products. The U.S. Department of Agriculture's (USDA) Foreign Agricultural Service (FAS) uses several management and monitoring processes to reduce the risk of duplication among the five programs. FAS uses an integrated system to process funding applications for multiple programs and to monitor expenditures, which reduces the risk of duplication. According to FAS officials, FAS also monitors participants' expenses for all programs through its compliance review process. In addition, FAS guidance requires program participants to submit annual progress reports on the results of their market development activities. GAO found that performance measures in a sample of progress reports generally reflected selected FAS guidance and key attributes of successful performance measures that GAO had identified. However, the sampled reports did not always outline the methodologies used to assess activity results as required by FAS guidelines. In these cases, it would be difficult for FAS to determine the reliability of the reported results and the impact of market development activities. A 2007 cost-benefit analysis of MAP and FMD, commissioned by FAS, found that the programs increased U.S. agricultural exports and benefited the U.S. economy, but methodological limitations may affect the magnitude of the estimated benefits. Overall, the analysis asserted that the government's expenditures for the two programs resulted in greater increases in U.S. 
agricultural exports and greater benefit to the U.S. economy than would have occurred without the expenditures. However, an economic model used to estimate the programs' impact on U.S. market share omitted important variables, such as commodity prices. Also, the study did not include sensitivity analyses of certain key assumptions underlying its estimates of impacts on U.S. exports. For example, analyses of the possible effects of varying levels of program funding would provide a clearer picture of the potential impact of increased or decreased funding on U.S. exports and the economy. FAS officials reported that they plan to commission a new cost-benefit analysis in 2014 but have not yet identified the methodologies that the new analysis will use. GAO recommends that USDA (1) emphasize that market development program participants' annual progress reports should identify the methodologies used to assess results and (2) ensure that any economic models used in future cost-benefit analyses of the programs include industry-specific variables and sensitivity analyses of key assumptions. USDA concurred with GAO's recommendations.
Federal environmental policy is shaped by numerous federal statutes, including the Clean Air Act, the Clean Water Act, and the Resource Conservation and Recovery Act. These laws charge EPA with protecting the environment through such activities as setting standards for air and water quality, issuing permits, and taking enforcement actions. The laws also allow states to assume many of these responsibilities. Over time, states have applied for and received the lead role in performing these activities. Consequently, the operational responsibility for most of EPA’s major programs currently lies with the states, and EPA routinely relies on states to implement the full range of environmental responsibilities associated with these programs. In recent years, a number of organizations have emphasized the need to supplement or significantly modify the existing prescriptive, command-and-control approach toward environmental protection established under current federal laws. For example, in 1998, Resources for the Future (an environmental policy research organization) noted that while the current federal approach has many noteworthy achievements, it is also flawed in several respects. It noted in particular that federal laws and regulations tend to prescribe the specific means by which environmental goals will be reached, rather than establishing goals and allowing states and facilities the flexibility to reach those goals. GAO has also reported on these matters in recent years, focusing in particular on EPA’s efforts to “reinvent” environmental regulation. EPA has also recognized the need for new approaches in numerous publications and in its interactions with state governments and other parties. The Congress has recently considered giving EPA explicit authority to allow more flexible approaches by states and others. 
One such proposal, the Second Generation of Environmental Improvement Act of 1999 (H.R. 3448), introduced in the 106th Congress, would have allowed EPA to enter into innovative strategy agreements with states, companies, or other interested parties in order to experiment with ways to achieve environmental standards more efficiently and effectively. Such agreements could have involved the modification or waiver of existing agency regulations. The bill was not enacted and has thus far not been reintroduced in the 107th Congress. In recent years, states have worked with EPA through several key avenues to pursue innovative environmental approaches. Seven of the 15 states we contacted have used EPA’s Project XL as such a vehicle, even though the projects in which they are involved were formally proposed to EPA by a private company. Partly as a result of states’ dissatisfaction with Project XL, however, EPA and the Environmental Council of the States (ECOS) agreed in 1998 to a process in which, among other things, states submit innovative projects through their respective EPA regional offices and EPA is provided timelines within which it must respond. In addition to these two major avenues, states have also pursued alternative approaches to environmental protection through the use of the National Environmental Performance Partnership System (NEPPS), by participating in programs developed through EPA’s media offices, and by negotiating relatively narrow changes in their day-to-day working relationship with EPA. Project XL, which stands for “excellence” and “leadership,” was launched in 1995 as part of the previous administration’s broad effort to reinvent federal environmental protection policy. Based on recognition of the need for new approaches to environmental regulation, Project XL was designed to allow private businesses, as well as states and local governments, to test innovative ideas to enhance environmental protection. 
In exchange for improved performance, participants would be given the flexibility to explore new approaches to environmental protection. To participate in Project XL, businesses, states, and other government agencies submit proposals to EPA, which then evaluates proposals according to specific criteria and other considerations. EPA requires that, among other things, Project XL participants demonstrate that their proposals will result in “superior environmental performance,” and include a system for monitoring and a process for stakeholder involvement. XL projects should also be designed to test innovative approaches that are transferable to other facilities. Although most of the more than 50 XL projects approved to date were submitted by private facilities, some federal and local government agencies have submitted proposals as well. In addition, four states have submitted proposals designed to apply to multiple facilities within the states. Massachusetts’ Environmental Results Program, for example, covers the dry cleaning, photo processing, and printing sectors. Table 1 describes each of the state-initiated projects that cover multiple facilities or entire industry sectors. While not initiating specific Project XL proposals, 7 of the 15 states we contacted have participated by working on initiatives that were formally proposed to EPA under Project XL by private companies. For example, even before the establishment of Project XL, the Minnesota Pollution Control Agency had been working with the 3M Company to develop alternative compliance approaches, which it subsequently pursued under the auspices of Project XL. More recently, Minnesota has actively worked with the Andersen Windows Corporation on a proposal to reduce air emissions from a facility in Bayport, Minnesota, in exchange for regulatory flexibility. 
Similarly, Virginia played an active role in advocating an innovative approach to controlling air emissions proposed by Merck Pharmaceuticals for its facility in Stonewall, Virginia. In 1998, EPA and ECOS agreed to encourage experimentation by states with new approaches to environmental protection through their Joint EPA/State Agreement to Pursue Regulatory Innovation. In part, this agreement grew out of the states’ frustration with other avenues for pursuing innovation, such as Project XL. Specifically, states were frustrated with Project XL’s requirement that sponsors document a proposal’s ability to achieve “superior environmental performance.” Many believed that such a requirement was too stringent and precluded worthwhile projects that would deliver environmental results equivalent to existing regulations but more efficiently. States also believed that the process of submitting a Project XL proposal and receiving EPA’s approval was too time-consuming. In response to these concerns, the ECOS/EPA agreement outlined a process by which states could submit innovative projects through the EPA regional offices and provided timelines during which EPA must provide a response. Specifically, once a state submits a proposal to EPA, the agency has 4 weeks to reply to the state with a list of questions and concerns. Within 90 days of receipt of the initial proposal, EPA must issue a final response to the state. According to the EPA regional officials we interviewed, states do not often hold EPA strictly to these deadlines. Nonetheless, state officials told us that the time limit is sometimes helpful in obtaining a timely EPA response when necessary. In addition, the agreement omits Project XL’s requirement for “superior environmental performance.” Instead, it requires only that innovations seek more efficient and/or effective ways of protecting the environment. 
The agreement also lays out a set of principles intended to guide the development and implementation of innovations. Specifically, it states that (1) innovation often involves experimentation that should not harm human health or the environment but may include some chance of failure; (2) innovations must seek more efficient or effective ways of meeting environmental performance goals; (3) innovations should seek creative ways to tackle environmental problems; (4) stakeholders should be involved in the development and evaluation of innovations; (5) results of innovations must be measured and analyzed; (6) innovations must be enforceable and accountable; and (7) states and EPA must work as partners to promote innovation. State proposals submitted to EPA to date have covered a wide range of innovations. Some agreements have targeted one specific problem at an individual facility, while others have been designed to affect a large number of stakeholders or to develop a framework through which a state and EPA agree to handle innovative proposals. For example: The New Hampshire Department of Environmental Services sought flexibility under federal regulations for a single pulp and paper mill to test an innovative regulatory approach to pollution control and treatment. Under new regulations, the mill would be required to install expensive technology to control airborne methanol emissions. Under the proposal, however, the mill would use an alternative technology that would result in a four-fold reduction in methanol emissions over the current requirements while saving the company approximately $825,000. In contrast, a proposal by Michigan’s Department of Environmental Quality covered a much larger group of stakeholders. The proposal seeks approval for a new approach to meeting Total Maximum Daily Load (TMDL) requirements under the Clean Water Act. 
In particular, it would facilitate ways that point sources of pollution (e.g., an industrial facility discharging from one or more pipes) could collaborate with diffuse, “nonpoint” sources in controlling phosphorus pollution. Wisconsin proposed a broad framework through which the Wisconsin Department of Natural Resources and EPA would deal with multiple innovations. Under the agreement, Wisconsin may develop up to 10 pilot projects with facilities that would test a facility-wide, “multi-media” approach to regulation (i.e., an approach that comprehensively integrates their air, water, and waste regulations) that is built around the use of an environmental management system. Facilities that commit to achieving superior environmental performance would be granted some degree of regulatory flexibility. The number of proposals under the ECOS/EPA agreement has been fairly low to date, although participation has been growing recently. As of February 2001, 3 years after the agreement, 22 proposals had been submitted by six states in three EPA regions. As indicated in figure 1 below, by January 2002, participation had increased to 15 states, which together had proposed 45 initiatives. Of these proposals, EPA has accepted 20, another 22 are still under consideration, and 3 proposals have been denied or withdrawn. In our interviews with selected states, we discussed specific state experiences under the agreement. Of the 15 states we contacted, 10 had proposed projects under the ECOS/EPA agreement, while other states indicated that they are considering proposing projects in the future. In addition to Project XL and the ECOS/EPA agreement, state and EPA officials identified several other avenues for negotiation that states have used to obtain EPA’s approval for innovative environmental strategies. 
One is the National Environmental Performance Partnership System (NEPPS), which was established in 1995 to give states greater flexibility in setting their priorities and in the way they carry out their programs if they demonstrate the capacity and willingness to achieve mutually agreed-upon results. NEPPS provides a framework for the state’s relationship with EPA, laying out the state’s environmental goals and priorities, and the ways in which they will measure progress in meeting these goals. Under the system, a state agency may enter into a Performance Partnership Agreement with its EPA regional office that typically specifies the signatories’ respective roles and responsibilities in achieving specified program objectives. While not intended to focus solely on innovation, some states have used NEPPS for this purpose. As our 1999 report on NEPPS noted, for example, Minnesota’s Pollution Control Agency reorganized its traditional medium-by-medium (i.e., air, water, and waste) structure into a structure the agency believed would more effectively address problems that cross media lines. The agency used its Performance Partnership Agreement to provide the flexibility it needed to report environmental results to EPA in line with this new structure. Other states have also used their partnership agreements to achieve and document agreements on specific initiatives. EPA has also sought to promote innovation through its program offices. For example, the Office of Solid Waste and Emergency Response has promoted cleanup and redevelopment of contaminated industrial sites by encouraging state voluntary cleanup programs. Unlike programs that rely on enforcement alone to achieve cleanups by parties responsible for the contamination, these voluntary “Brownfields” programs allow site owners and developers to collaborate on bringing sites back to productive use. 
EPA has encouraged the programs by providing funding to develop them, reviewing program adequacy, and agreeing not to take further enforcement action at these sites unless serious environmental contamination was overlooked. Finally, EPA regional officials we interviewed mentioned that minor changes are often adopted through informal discussions during the normal course of work. They noted that more significant changes, such as those requiring a change in regulations, would have to go through one of the avenues for innovation or through the rulemaking process. While states can face significant obstacles at the state level before submitting an innovative proposal to EPA, officials in 12 of the 15 states we contacted stated that their most significant obstacles are at the federal level. States cited prescriptive regulations as one of the most significant obstacles, along with an EPA culture they viewed as being averse to risk and resistant to change. EPA officials acknowledged that the agency’s culture has a tendency to resist innovative proposals, but some noted that such resistance is rooted in the agency’s primary mission to ensure strict adherence to the letter of statutes and agency regulations. They also noted that some states have omitted key elements when they submit proposals, such as provisions to measure whether the innovation to be tested will have its intended effect. Officials in all of the states we contacted indicated that they faced significant obstacles—including lack of resources, cultural resistance in the state agency, and opposition from environmental groups—even in advance of proposing a project to EPA. In some cases, state officials cited these obstacles as reasons why the state had not yet actively pursued innovations requiring federal approval. In discussing 20 separate initiatives, state officials cited a heavy ongoing agency workload and concomitant limited resources as obstacles to innovative approaches in 11 instances. 
In several instances, the state was nevertheless actively pursuing innovative approaches despite this constraint. For example, a Michigan official stated that finding sufficient resources was one of the primary difficulties faced in pursuing initiatives under the EPA/ECOS agreement. Although considerable additional staff and resources were needed, the effort was given high-priority status, and agency resources were therefore diverted to support it. Similarly, noting that 80 percent of the agency’s resources are consumed in meeting federally mandated requirements, officials from the Minnesota Pollution Control Agency said the agency’s management is reluctant to divert scarce resources to innovative programs. Nonetheless, they said the agency has actively promoted Project XL initiatives and is likely to propose future initiatives under the EPA/ECOS agreement. Officials from other states, however, said they were unable to pursue innovative approaches because of the limited resources available to meet an already-demanding workload. For example, an official of the Nebraska Department of Environmental Quality said that developing an innovative proposal would take a considerable investment in up-front staff time and resources, and the agency’s federally mandated workload exhausts all resources. Largely for this reason, Nebraska has not yet pursued any major innovative initiative requiring EPA approval. Similarly, an official of the Georgia Department of Natural Resources cited the agency’s heavy mandated workload and related budget constraints as one of the two most significant obstacles to pursuing innovative approaches. The importance of limited state agency resources as an obstacle to innovative approaches was also highlighted in an April 2000 ECOS survey. The survey asked state officials to indicate the degree to which each of 12 frequently cited impediments to innovative practices was an obstacle in their case. 
Six of the 29 responding states said that state agency resource limitations were the single largest obstacle they faced, while officials of 7 states indicated that this was a persistent obstacle that was difficult to address. Among the factors not related to federal policy, this factor ranked as the most significant obstacle in the survey. A state agency’s culture and working environment can also discourage innovative approaches. For 5 of the 20 specific initiatives we discussed, state officials said that an agency’s culture and working environment to some extent discouraged alternative approaches to environmental policy. One state official said that obtaining EPA’s permission to pursue an innovation was an abstract problem because the state agency had not been able to reach the point of submitting a proposal. He explained that internal staff resistance was the biggest problem, noting in particular that many rank-and-file managers had been with the agency for 25 to 30 years and had a professional ethic that emphasized following long-standing approaches to environmental protection. The official recalled that several years ago, the agency had examined alternative approaches to permitting, including an approach that would allow regulated facilities to certify their own compliance, and thus allow the agency to shift resources from permitting activities to enforcement activities. The division managers in the agency almost unanimously opposed this approach, fearing that it would lead to loss of control over regulated entities, a loss of funding for their own programs, and less effective environmental protection. In part because of such resistance, the agency had not recently tested EPA’s receptiveness to an innovative proposal. Opposition to innovative approaches from environmental groups and other stakeholders has also impeded proposals. 
Officials in several states noted that environmental and community groups generally perceive innovative proposals as opening the door to rollback of environmental standards. A Washington state official noted that the state has a very politically active public, and some environmental and community groups perceive innovative proposals as potentially compromising the goals of environmental statutes. For example, such groups vigorously opposed the state’s proposal to extend discharge permits under the Clean Water Act from 5 to 10 years because they feared the state was backing away from oversight of polluting facilities. A representative of the Texas Natural Resource Conservation Commission made similar comments, but noted that early involvement of such groups can go a long way toward mitigating their opposition. He stated that if the concerns of such groups are taken into account during the design of a proposal, their opposition later in the process is far less likely. State officials identified factors at the federal level, including statutes, regulations, and an EPA culture not conducive to innovation, as more significant obstacles than the factors they encountered at the state level. Specifically, officials in 12 of the 15 states we contacted said that these federal obstacles were more significant in impeding innovation than obstacles faced at the state level (such as the state agency’s culture and workload, and opposition from environmental groups). The three remaining states said these two categories were about equal in their significance. As summarized in table 2, of the federal obstacles we discussed with states, federal regulations and an EPA culture viewed as resistant to innovative approaches ranked as the two most significant obstacles affecting progress among the 20 specific initiatives identified by state officials. Our interviews, however, revealed an important relationship between the two factors. 
Specifically, while EPA officials acknowledged the agency’s culture can be resistant to innovative proposals, some noted—and some state officials agreed—that what is often construed as “cultural resistance” is sometimes rooted in a sense of obligation among agency officials to ensure that statutes and agency regulations are properly and fully implemented. EPA officials also pointed out that in some cases state proposals lacked key elements when they were submitted, such as provisions for public involvement or a systematic means of measuring whether the innovation would have its intended effect. An extensive literature has documented that both existing environmental statutes and environmental regulations can impede innovation. However, the manner in which the two may have this effect differs, with the more detailed, individual regulations generally having a more direct impact on proposals than the more general statutes that authorize the regulations. The major federal environmental statutes are generally less detailed and specific, in terms of what they require or preclude, than the regulations EPA develops to implement them. There tends to be a hierarchical relationship between statutes and regulations—statutory requirements establish the broad outlines of environmental policy while regulations reflect EPA’s effort to implement the statutes, and hence provide much more specific requirements on how the regulated community is to control pollution. Perhaps for this reason, the state officials we interviewed cited comparatively few instances in which an environmental statute precluded a particular innovation they were pursuing. Overall, environmental statutes were ranked either first or second 6 times among the 20 state innovations we examined. 
However, environmental statutes have been linked with a broader, less direct impact on state environmental innovations by directing regulators and their resources toward specific, medium-by-medium activities—sometimes at the expense of alternative strategies that might more effectively address the highest environmental risks. For example, in our July 1997 report on EPA's "reinvention" activities, we cited the difficulties in setting risk-based priorities across environmental media because each statute prescribes certain activities to deal with its own medium-specific problems. We also cited an observation from an earlier GAO report that environmental statutes "led to the creation of individual EPA program offices that have tended to focus solely on reducing pollution within the particular environmental medium for which they have responsibility, rather than on reducing overall emissions." This "stovepipe" effect of the environmental statutory framework was cited by an EPA headquarters air official, who noted that the Clean Air Act would not recognize the value at a specific industrial site of a large reduction in water emissions in exchange for even a slight increase in air emissions—even though such a trade-off might have significant net environmental benefits in certain situations. As others have noted, however, EPA generally does consider the potential transfer of pollution from one medium to another when it develops new regulations. Several state officials told us that federal environmental statutes can indirectly hinder innovative state approaches not only by what they include, but also by what they omit. They noted that since environmental statutes give EPA little or no explicit authority to grant regulatory flexibility to the states, the agency is placed at a higher risk when it grants a state or regulated entity permission to deviate from federal requirements.
One state official cited the absence of such a "safe legal harbor" for EPA as a key impediment to state innovation. State officials cited regulations as a significant factor more often than statutes. In discussing 20 specific innovative proposals, state officials ranked regulations either first or second 12 times among the federal factors listed in table 2. States cited a number of instances in which regulations prescribed an approach for dealing with an environmental problem that a state believed it could more effectively address in another way. Oregon officials cited such a proposal, pursued under the state's Green Permit Program, in which the state sought to provide flexibility to a regulated facility as an incentive for improved environmental performance. The state's Department of Environmental Quality proposed to grant a semiconductor manufacturing firm expedited permit review and various other incentives in exchange for the firm's commitment to future environmental improvements through its environmental management system. As part of the application, the facility sought the approval of its system for detecting and correcting leaks in its hazardous waste piping from processes to storage tanks. According to a state official, the system's overall performance matches or exceeds federal regulatory requirements, though it does not meet certain technical specifications of regulations under the Resource Conservation and Recovery Act (RCRA). As a result, EPA determined that it was unable to approve that particular aspect of the facility's application. EPA did not rule out approval of this system, but stated that additional information would be required to justify it. An EPA official said that, after site visits and review of additional information provided by the facility, EPA Region 10 has concluded preliminarily that the required justification has been established.
EPA and the state must now agree on a legally enforceable alternative to the relevant RCRA requirements. EPA officials noted that the most likely approach, a site-specific rule, is time-consuming and could take over 6 months. An Oregon official added that EPA is proceeding slowly on this issue both because it could set a precedent for numerous similar facilities across the nation and because the process is taxing limited regional staff resources. The Oregon experience is comparable to experiences cited by officials in other states in which a regulation either discouraged an innovation or imposed significant costs in pursuing the innovation. It is also comparable to the experiences documented in an extensive literature on the effect of prescriptive regulations on efforts to innovate. In summarizing part of this literature, the Environmental Law Institute (ELI) cited as a major problem the design of most regulatory standards under the Clean Water Act and Clean Air Act, which require EPA to establish technology-based discharge rate limits based on "available" or "feasible" emission control technologies. ELI noted that while alternative solutions are not specifically prohibited, such regulatory standards may preclude innovation in a number of ways, such as limiting permit writers to conservative choices and eliminating incentives for progress beyond established standards. ELI summarized the effect of prescriptive regulatory standards by noting that they "may severely limit innovation, creating higher costs than necessary." Officials in EPA's regions and headquarters both cautioned that federal regulations are critical in ensuring reasonable consistency in the level of environmental protection afforded to individuals across the country. Several officials also noted that there is a "natural tension" between this goal and the goal of allowing states greater flexibility to address environmental problems in the way they believe best meets their needs.
Overall, however, they generally concurred with the comments voiced by state officials concerning the effects of detailed, prescriptive regulations on environmental regulatory innovation. An official with EPA’s Office of Air and Radiation added that it is important to remember that the federal environmental protection system is about 30 years old and that many regulations in effect today were written before the relatively recent emphasis on developing more flexible innovative approaches. State officials indicated that a long-standing EPA culture that resists alternative approaches to environmental protection is viewed as one of the most significant obstacles to state environmental innovation. The importance of cultural factors was evident in our discussions of the factors affecting progress on specific innovative proposals. Of the 20 individual proposals that the states discussed, EPA culture was cited as either the first or second most important factor in 14 cases. Some state officials noted that such cultural resistance often manifests itself in a lengthy and time-consuming review and approval process. One EPA regional official referred to the numerous levels of review, the large number of EPA stakeholders, and the degree to which every detail of a proposal is examined as a “death by 1,000 cuts,” saying that after such a review process, it is often hard to keep the original concept or retain what is truly innovative. Along these lines, an official in Massachusetts’ Department of Environmental Protection cited as an example the experience of a proposed addendum to its Project XL Agreement that established the state’s Environmental Results Program. The official said that EPA’s July 1999 response had included an extensive set of questions and comments that went well beyond what the state DEP had proposed, and was viewed by DEP staff as essentially asking the agency to justify the entire Environmental Results Program all over again. 
She added that DEP staff were frustrated not only by the volume of the questions posed, but also by the appearance that no one at EPA had been assigned to consolidate the numerous comments from various EPA offices. DEP’s reaction was to temporarily shelve the project, claiming that it did not have the resources to enter into protracted negotiations to resolve EPA’s concerns. According to the Commissioner, the subsequent intervention of the EPA Office of Enforcement and Compliance Assurance’s Policy Director helped to revive the proposal. Currently, DEP is awaiting EPA approval of a draft state rule containing the changes the state desires. New Jersey officials cited similar experiences during negotiations over the state’s Gold Track program, stating that some EPA program staff strongly resisted requests for regulatory flexibility. One official noted that EPA staff had exhibited a “what if” mentality when reviewing proposals—developing a worst possible case scenario and holding that scenario up as a reason to reject the proposal. This official added that the EPA approach appeared to focus more on a search for reasons not to pursue innovation, rather than on an examination as to whether the proposal was fundamentally sound and how it could best be implemented. EPA officials we interviewed also acknowledged the existence of an EPA culture predisposed to view innovative proposals skeptically. For example, an official of EPA’s Office of Solid Waste and Emergency Response noted that this cultural tendency is partly rooted in the fact that many EPA staff are used to addressing environmental problems in a “tried and true” way and that EPA’s reward system does not encourage staff to pursue innovative approaches. Similarly, an official of EPA’s Office of Air and Radiation noted that EPA has a culture somewhat resistant to new approaches, in part, because of its reluctance to deviate from approaches that it believes have proven effective over the last 30 years. 
The challenge of promoting acceptance of new approaches among the agency's rank and file was recognized in our July 1997 report on EPA's reinvention efforts, which documented widespread agreement among EPA officials, state officials, and others that the agency has a long way to go before reinvention becomes an integral part of its staff's everyday activities. That report cited the then-head of EPA reinvention activities as noting that many staff are comfortable with traditional ways of doing business and consider their program-specific job responsibilities as their first priority and reinvention projects as secondary. Also commenting on EPA staff's comfort with traditional approaches, a senior ECOS official noted that EPA was created in the early 1970s, and that many current employees have spent their entire careers there. He noted that for some of them, a familiarity and comfort with earlier norms and practices may make it hard to embrace some of the agency's recent experiments with alternative compliance strategies. However, EPA officials indicated that what may be perceived as "cultural resistance" among EPA staff may, in fact, reflect understandable concerns that they properly implement the agency's core mission. An official with the agency's Office of Policy, Economics, and Innovation added that in some cases, EPA staff may feel that specific regulations were the culmination of a good faith commitment made to stakeholders and members of the public who participated in the regulatory development process. An official of EPA's New York office noted that EPA is obligated to ensure a certain level of environmental protection, and if proposed innovations could potentially negatively affect the environment, the benefits of moving forward must be carefully balanced against the risks.
Another EPA official noted that close scrutiny is warranted in situations where an alternative approach may be viewed as setting a precedent for similar requests in situations where it may not be appropriate. An official of EPA’s Chicago office also noted that to allow deviation from regulatory requirements, EPA must develop an alternative legal mechanism to ensure accountability. Developing such legal mechanisms can be very time consuming. Perhaps most importantly, EPA staff are mindful of the potential consequences when innovative proposals are at odds with laws or regulations. A state official said that EPA has to be cautious in permitting innovative approaches because the agency is often sued by environmental and community groups if it does not follow laws and regulations to the letter. On the other hand, EPA and some state officials indicated that EPA’s disinclination to consider alternative approaches may be slowly changing. Officials of the state environmental agencies in Massachusetts and New Hampshire indicated that EPA’s Boston office has become a stronger advocate for flexibility and new approaches. For example, a Massachusetts official said the states in the region generally get a sympathetic hearing when they make proposals. The official also said that EPA’s Office of Enforcement and Compliance Assurance has also become more willing to consider innovative approaches. Similarly, the New Hampshire official stated that EPA is gradually changing the mindset of its staff to be more open to innovative proposals and that there is a healthy and respectful working relationship between the state and the agency’s Boston Office on these matters. Senior ECOS staff also told us that while further progress is needed, the agency has also sought to include state input earlier in its decision-making process to resolve long-standing data reporting problems and other key issues. 
While EPA officials acknowledged the key obstacles cited during our state interviews, they also told us that state innovative proposals sometimes encounter delays resulting from deficiencies in the form and content of the proposals. Project XL, the ECOS/EPA agreement, and other avenues for innovation each have certain ground rules on which participating parties agree. The EPA officials noted, and some state officials agreed, that in some cases a proposal's rejection or delay may have less to do with an obstacle encountered at the federal level than with a problem in the proposal's ability to meet these ground rules. As noted earlier, for example, Project XL requires that proposed innovative approaches result in "superior environmental performance," in comparison to traditional approaches. According to EPA's Chicago office staff, the difficulty in documenting compliance with this criterion was a primary point of contention regarding the XL proposal made by the Andersen Windows corporation with backing by the state of Minnesota. Among other things, Andersen Windows desired to obtain flexibility to change production processes without costly permit reviews under the Clean Air Act's Prevention of Significant Deterioration regulations. In exchange, the firm proposed to establish a per-unit volatile organic compound (VOC) emissions rate of 0.763 pounds per unit of production (referred to as the performance ratio). The performance ratio ensures that future capacity increases would use less polluting processes, such as the substitution of water-based wood finishes for the solvent-based wood finishes the facility had traditionally used. Also, the project would adopt an overall emissions cap of 2,651 tons of VOCs per year. Although the proposed emission cap was above current actual emission levels, Andersen Windows contended that because it was below current allowable emissions, EPA should take into account the firm's past efforts to reduce VOC emissions.
EPA, on the other hand, wanted the project to commit to a level of emissions no higher than current actual emissions. EPA contended that there was no plausible scenario under which the facility would have emitted at a level near the proposed cap, and thus the proposal did not constitute a commitment to superior environmental performance. In response, the facility made a number of concessions, including the performance ratio limit, a lower overall emissions cap, and an explicit, enforceable commitment that any new paint processes would use less polluting materials. After extensive negotiations, EPA agreed to the proposal. The ECOS/EPA agreement also includes a series of principles to which signatories of proposals agree. Among them, proposals should include provisions for stakeholder involvement in a project, provisions for the enforcement of alternative regulatory requirements to ensure that public health and environmental protections are maintained, and a process for assessing the results of the innovative approach to test whether the desired results are actually achieved. Representatives of the Office of Enforcement and Compliance Assurance stated that state proposals do not always include an evaluation component, while others have not identified how stakeholder involvement would be assured. An official in EPA’s Chicago office also noted that some ECOS proposals did not meet the requirement that they be sufficiently limited in scope that they may be considered “experimental,” in order to minimize any risks if the initiative does not work as anticipated. For example, EPA initially resisted a Michigan proposal to take an innovative approach to controlling phosphorous discharges into state watersheds. Because the state initially proposed that this program be adopted in at least three watersheds and possibly statewide, EPA felt that its scope was not sufficiently limited to be considered an experiment. 
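Returning to the Andersen Windows figures above, the interaction between the proposed per-unit performance ratio and the overall annual cap can be sketched with a small illustrative calculation. The ratio (0.763 pounds per unit) and the cap (2,651 tons per year) are the figures cited in the proposal; the production threshold at which the cap becomes the binding limit is our own arithmetic, not a number from the negotiations.

```python
# Illustrative sketch (our own arithmetic) of how a per-unit performance
# ratio interacts with an overall annual emissions cap, using the figures
# cited for the Andersen Windows Project XL proposal.

PERFORMANCE_RATIO_LB_PER_UNIT = 0.763  # proposed VOC lb per unit of production
ANNUAL_CAP_TONS = 2651                 # proposed overall VOC cap, tons per year
LB_PER_TON = 2000

def allowed_emissions_tons(units_produced: float) -> float:
    """VOC emissions (tons/year) permitted at a given production level:
    the per-unit ratio applies, but total emissions may never exceed the cap."""
    ratio_limit = units_produced * PERFORMANCE_RATIO_LB_PER_UNIT / LB_PER_TON
    return min(ratio_limit, ANNUAL_CAP_TONS)

# Production level above which the overall cap, rather than the per-unit
# ratio, becomes the binding constraint (roughly 6.9 million units/year):
threshold_units = ANNUAL_CAP_TONS * LB_PER_TON / PERFORMANCE_RATIO_LB_PER_UNIT
print(f"cap binds above about {threshold_units:,.0f} units per year")
```

At ordinary production levels the per-unit ratio is the operative limit; only at very high output does the tonnage cap bind, which is consistent with EPA's point that actual emissions were far below the proposed cap.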
The project was approved after Michigan agreed to limit the proposal to a single watershed. Finally, project submittals may be subject to EPA's "compliance screening guidance." The guidance provides that participants in regulatory flexibility programs, such as Project XL and the EPA/ECOS agreement, have good overall compliance records. In particular, participation is deemed inappropriate if an applicant has been the subject of a recent criminal conviction, an ongoing criminal investigation, or ongoing EPA-initiated litigation. Participation may also be deemed inappropriate if an applicant has been involved in violations resulting in a serious threat to human health or the environment, a pattern of significant noncompliance, or is the subject of a citizen enforcement suit. Such screening guidance became a central issue in a Project XL proposal submitted by the Hopewell Regional Wastewater Treatment Facility in Virginia. The facility receives industrial wastewater from a variety of manufacturers, including makers of pulp and paper, organic chemicals, and plastics. As a result of federal pretreatment regulations under the Clean Water Act, the contributing manufacturers faced a requirement to add redundant pretreatment technology. Adding the technology would have adversely affected treatment performance at the Hopewell plant. Consequently, the Hopewell Regional Wastewater Treatment Facility and contributing sources proposed to move the application of pretreatment standards from the industrial users to the Hopewell plant. An EPA Deputy Regional Administrator expressed EPA's support for the project and its desire to continue technical review of the proposal. However, the participation of two of the contributing firms was temporarily deferred pending the resolution of outstanding significant noncompliance at those facilities. The state subsequently resubmitted the proposal under the ECOS/EPA agreement.
In July 2001, EPA indicated that the proposal could move forward to fuller development, but that the two firms with noncompliance issues could not participate until their enforcement cases were resolved. EPA has recently taken a number of measures to address at least some of the obstacles discussed in this report, and those changes may foster an improved climate for pursuing innovative state approaches. In June 2001, EPA adopted the recommendations of its Task Force on Improving EPA Regulations, and it has also issued a Draft Strategy on Innovating for Environmental Results. The task force was created in April 2001 to reexamine EPA's regulatory development process and identify ways to improve supporting scientific, economic, and policy analysis. In addition, the task force sought ways to enhance regulatory flexibility and to create strong partnerships with states and businesses. Among other key findings, the task force determined that in the process of developing regulations, EPA should develop and consider a broader array of policy options, including innovative alternatives and market-based approaches. Importantly, the task force report recommended that the regulations development process consider the possibility of innovative alternatives and that EPA strengthen the involvement of states and local governments during the regulatory development process. Should EPA follow through on this recommendation, it would help the agency address one of the key obstacles identified in this report—the effect of prescriptive EPA regulations in impeding innovative regulatory strategies. By involving state officials early in the regulations development process and identifying the potential effects of regulatory proposals at this stage, there is a greater chance that regulations will be developed in a manner that encourages, rather than inhibits, innovation. These recommendations, however, apply to the development of new regulations rather than to the obstacles posed by existing regulations.
EPA’s Draft Strategy on Innovating for Environmental Results maintains that EPA’s efforts to promote innovation over the course of the last decade have made significant advances, but they have resulted in a disparate array of projects that were not designed to achieve system-wide improvement. Furthermore, it notes that the transaction costs have been high and that there has not been a consistent process for expanding the application of pilot programs. To address these issues, the strategy proposes a 4-pronged strategic framework: Strengthening EPA’s partnership with states, including a greater emphasis on performance management and the NEPPS process. Focusing on four priority issues: reducing greenhouse gases, reducing smog, restoring and maintaining water quality, and reducing the cost of water and water infrastructure. Diversifying environmental tools and approaches. Fostering a more innovative culture and organizational system at EPA and states. Among other things, the strategy emphasizes fostering an organizational culture at EPA that is more friendly to innovative approaches. Following up on EPA reinvention activities of the last 10 years, it states that EPA should integrate support for innovation into its planning, budgeting, and organizational systems. It also notes that a more innovative culture will require EPA staff to view their jobs more broadly; that is, not just as overseers of ongoing operations, but as problem solvers, partners, and facilitators. It also proposes to hold senior managers accountable for supporting innovative approaches and increasing their responsibilities for scaling up successful innovations. According to EPA officials, the process of diffusion and broader application of successful innovations may lead to gradual revision of existing regulations that may be inhibiting better ways of achieving environmental goals. The details of both EPA initiatives still need to be fleshed out and a number of issues resolved. 
For example, some state officials have questioned the focus of the Draft Strategy on Innovating for Environmental Results on four priority issues (greenhouse gases, smog, water quality, and water infrastructure), fearing that this focus downplays other issues of greater importance to individual states or localities. According to EPA, states will play a role in refining the Draft Strategy as it undergoes further development. How these and other issues are resolved will determine the ultimate impact these efforts have on EPA's reinvention efforts in general and on its efforts to collaborate with states on innovative environmental proposals in particular. While states face a variety of obstacles when seeking to promote innovative approaches to environmental protection, we found their most significant obstacles to be at the federal level. Of these federal obstacles, the detailed requirements of prescriptive federal environmental regulations were cited as among the most significant, along with a cultural resistance among many EPA staff toward alternative approaches. In some cases, however, the underlying cause of this cultural resistance was traced back to the regulations. Specifically, many EPA staff believe that strict interpretations must be applied to detailed regulations if they are to be legally defensible. The identification by state officials of prescriptive federal regulations as a key obstacle to innovation is consistent with the findings of numerous research organizations that have cited the need for environmental regulations to focus more on the desired environmental results and, where possible, to be less prescriptive concerning the specific means of achieving these results. It is also consistent with EPA's recent adoption of the recommendations of its own Task Force on Improving EPA Regulations, which advocates, among other things, that innovative alternatives should be considered as new regulations are developed.
It remains to be seen if implementation of the EPA recommendations will have the desired effect in reforming the regulations development process to better accommodate innovative proposals. Yet, however successful these efforts are in accounting for the impact of new regulations, they still do not focus on the key problem (documented by this report and by those of other organizations) concerning the impact of many existing prescriptive regulations on innovation, nor do other EPA initiatives resolve the problem. As noted in this report, current statutes are generally less prescriptive than the more detailed regulations by which they are implemented. However, the statutes contain no explicit language authorizing the use of innovative environmental approaches in lieu of specific regulatory requirements and, as noted in this report, this absence of a “safe legal harbor” for EPA has been a significant obstacle to states and others in their efforts to test innovative proposals. It has also tended to reinforce the cultural resistance to innovation that EPA is seeking to change. Accordingly, in the absence of legislative changes, the effectiveness of the agency’s innovation efforts will warrant monitoring by EPA and other stakeholders in the innovations process, and will also warrant continued congressional attention. We provided a draft of this report for review and comment to EPA and to ECOS’ headquarters office in Washington, D.C. EPA did not submit a formal letter but provided individual comments from several headquarters and regional offices that have dealt with the issues discussed in the report. From headquarters, we received comments from the Office of Air and Radiation, Office of Enforcement and Compliance Assurance, the Office of Solid Waste and Emergency Response, and the Office of Policy, Economics, and Innovation. 
The Office of Air and Radiation indicated general agreement with the report’s findings as did the Office of Enforcement and Compliance Assurance, which said that the report “reflects a balanced approach to analyzing such a broad topic and recognition of EPA’s recent efforts to facilitate innovative approaches to environmental protection.” The Office of Solid Waste and Emergency Response provided minor technical comments. Comments from all three offices were incorporated as appropriate. The Office of Policy, Economics, and Innovation (OPEI) commented on our conclusion that its initiatives to alleviate the impacts of EPA regulations focused on new regulations rather than existing regulations. The Office said that the report should recognize that a major thrust of its Draft Strategy on Innovating for Environmental Results involves the “scaling up” or “diffusion” of successful innovations to broader applications through the revision of regulations, policies, or program practices. We added language to reflect this as a key component of the EPA strategy. However, as OPEI staff acknowledged in a subsequent discussion about this point, the agency has yet to pursue this strategy in the type of systematic or large-scale manner that would be needed to deal materially with the large number of EPA regulations at issue, and has not evaluated the extent to which scaling up has been practiced or has succeeded. OPEI also observed that there may be some confusion in that the report identified two different ways in which statutes could inhibit state innovation: (1) by prescribing in detail how a program activity must be carried out (or by precluding alternatives) and (2) by omitting explicit language authorizing regulatory flexibility to proponents of innovation and regulators in a manner that would provide the “safe legal harbor” needed to assure the legality of their innovative proposals. 
The draft report discussed each of these potential impacts individually, but we added clarifying language in response to the OPEI comment. In addition to these two issues, OPEI offered a number of more detailed comments and suggestions, which we incorporated as appropriate. We also received comments from EPA’s Chicago, Dallas, New York, and Seattle offices. In addition to their technical comments and corrections, the Chicago, Dallas, and Seattle offices expressed general agreement with the material presented. The Dallas Office noted, for example, that “most of the views have been expressed by state contacts or facility representatives, but also have been shared by individual EPA employees that have worked on one or more innovations programs.” The New York Office provided no overall opinion, but offered a number of technical comments and corrections. These comments and corrections, and those of the other three regional offices, were incorporated as appropriate. ECOS’s Executive Director and his staff said that the draft report was fair and well documented. They noted in particular their agreement with the report’s findings that EPA regulations tend to be more of an obstacle to innovation than their underlying environmental statutes, and that a continued need exists for cultural change at both the state and EPA level. They also proposed a number of technical revisions and clarifications, which we incorporated in finalizing the report. To identify the major avenues through which states can achieve concurrence with EPA on innovative approaches to environmental protection, we interviewed officials with EPA’s headquarters and regional offices, officials from the Environmental Council of the States, and officials from other interest groups and research organizations. We also reviewed recent studies and other literature pertaining to states’ experience with innovative environmental regulatory strategies.
To obtain information on the obstacles that states face when adopting innovative approaches to environmental protection, we interviewed cognizant officials from 15 states—Georgia, Massachusetts, Michigan, Minnesota, Nebraska, New Hampshire, New Jersey, New York, Oregon, Pennsylvania, Tennessee, Texas, Virginia, Washington, and Wisconsin. We intentionally selected a sample of states that was diverse in size, was representative of different EPA regions, and had varying degrees of experience with environmental regulatory innovation. To obtain further diversity in the initiatives we examined, we asked the state officials to identify two of their major innovative proposals—one that they pursued and EPA accepted and one that was proposed and not accepted. For each, we first sought written information in advance of our interviews with cognizant state officials. Then, through our interviews with these officials, we sought to obtain a fuller understanding of the circumstances surrounding each initiative, and to identify the obstacles that may have inhibited or prevented progress. For the states in which officials elected not to identify initiatives pursued with EPA, we sought to identify the reasons behind their decisions not to do so. In addition to these state interviews, we conducted a series of interviews with the corresponding EPA regional offices to obtain their views about the obstacles to state environmental innovation in general and to gather information about their experiences with the specific initiatives identified by states in their jurisdiction. We also interviewed officials with EPA headquarters offices including the Office of Policy, Economics, and Innovation; the Office of Enforcement and Compliance Assurance; and key program offices that have had experiences with innovative state regulatory proposals. We conducted our work from March through December 2001 in accordance with generally accepted government auditing standards.
As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. If you or your staff have any questions about this report, please call me or Steve Elstein at (202) 512-3841. Key contributors to this report are listed in appendix II. Description of innovations cited by state officials Cathode ray tubes (CRTs) in computer equipment are a growing waste problem because of the high turnover rates for computer equipment. The Massachusetts Department of Environmental Protection wanted to create a system for reusing and recycling these parts, but ran into difficulties because the parts are classified as hazardous waste under the Resource Conservation and Recovery Act (RCRA) due to their high lead content. The state undertook a number of actions, including exempting intact CRTs from classification as hazardous waste, to increase reuse and recycling efforts in the state. The Environmental Results Program is a regulatory system established under Project XL designed to streamline permitting and reporting requirements and improve performance in the printing, photo processing, and dry cleaning sectors. The state seeks to accomplish this through the use of industry-wide performance standards and self-certification of compliance. In the future, Massachusetts would like to expand this program to other industrial sectors. Under this ECOS agreement project, the Michigan Department of Environmental Quality (MDEQ) adopted a new watershed approach to meet total maximum daily load (TMDL) requirements for phosphorus in the Lake Allegan Watershed. This new approach uses a cooperative agreement between point source dischargers, nonpoint source dischargers, and the MDEQ to establish the necessary reduction allocations among the various sources.
The resulting allocation for the point source dischargers will then be written into the next round of National Pollutant Discharge Elimination System permits. The Clean Air Act requires a case-by-case Best Available Control Technology (BACT) analysis for auto assembly plant painting and coating operations. Whenever a facility makes any changes to its technology, it must go through this time-consuming process, even though the BACT is typically the same in each case. With this ECOS agreement, the Michigan Department of Environmental Quality will test an innovative permitting approach under which a 3-year BACT analysis will be developed for specific automotive painting and coating sources. For a 3-year period, an auto assembly facility will be able to use this 3-year BACT in lieu of performing a completely new analysis. This new approach will save resources, which can then be used for other activities with greater environmental benefits. Under this Project XL agreement, the Andersen Window Corporation is testing a new approach to reducing air emissions through the use of a performance ratio. This ratio will measure the amount of volatile organic compound (VOC) emissions per unit of production. The facility can make changes to its processes as long as it stays below the performance ratio and the facility-wide VOC cap. This performance-based system will give the facility flexibility and provide an incentive for improved environmental performance. The 3M Hutchinson plant was one of the original participants in Project XL. The company’s proposal sought to develop a multimedia permit that would cover the facility’s air emissions, storm water management, liquid storage facility requirements, and hazardous waste generator requirements. In exchange, 3M would commit to a number of requirements intended to enhance the facility’s environmental performance. Eventually, this proposal was withdrawn from Project XL.
In April 2002, Groveton Paper Board, Inc. would have been required to install a $1 million system to capture and incinerate emissions of airborne methanol. The company found an alternative pollution control technology that has the potential to cut methanol emissions by four times what is required by law, while saving the company $825,000. In addition, the new technology will reduce emissions of other hazardous air pollutants by 20 tons per year. Over 250 sites in Nashua and Hudson, New Hampshire, were contaminated with asbestos when a local asbestos manufacturing plant delivered asbestos to landowners to use as fill. EPA determined that these sites qualified as “inactive disposal sites” and “stationary sources” under the National Emission Standards for Hazardous Air Pollutants (NESHAPS). As a result, the sites were subject to a number of requirements, many of which were unreasonable for homeowners. The New Hampshire Department of Environmental Services worked with EPA to find a reasonable solution. Eventually, they used a mechanism in 40 CFR 63.93 that allows a state rule to be substituted for the federal regulation. In September, they provided a draft proposal to EPA, and currently they are working with EPA for a resolution. The Gold Track Program is a Project XL initiative. It is part of a tiered system that is designed to reward companies that commit to higher levels of environmental performance. The Gold Track is the highest tier in the system, and it provides recognition and regulatory flexibility for facilities that commit to the highest standards of environmental performance. The IBM Fishkill facility is a manufacturer of semiconductor and electronic computing equipment. The facility’s wastewater sludge is classified as hazardous waste under the Resource Conservation and Recovery Act. The facility would like to test an alternative approach that involves recycling this waste for reuse in cement.
Under Project XL, EPA has decided to grant regulatory flexibility to the facility to recycle the sludge. Under RCRA, generators of hazardous waste must transport their waste to permitted treatment, storage, and disposal facilities. Under this agreement, public utilities in New York State will be able to consolidate the waste from remote locations at a central collection facility and store it there for up to 90 days before transporting it to one of these facilities. This project is intended to increase public safety by facilitating removal of hazardous waste and decreasing the risk of accidental release; to increase efficiency of transportation of hazardous wastes for public utilities; and to save time and resources for public utilities and the New York Department of Environmental Conservation. Established by state legislation, the Green Permits Program is designed to encourage facilities to achieve greater environmental performance than required by law, and to adopt environmental management systems in exchange for incentives such as regulatory flexibility, public recognition, and a single point of contact with the agency. EPA’s involvement is spelled out in a memorandum of agreement (MOA) between the Oregon Department of Environmental Quality, the Lane Regional Air Pollution Authority, and EPA. The MOA is based on the principles of the Joint State/EPA Agreement to Pursue Regulatory Innovation. Currently, seven facilities are participating in the program. LSI Logic is a semiconductor facility in Gresham, Oregon, that participates in the Green Permits Program. Among other things, the facility’s Green Permits Application requests equivalency under Subpart BB of the Resource Conservation and Recovery Act, which is related to monitoring, detection, and repair of leaks from equipment that handles hazardous waste.
LSI Logic contends that its equipment, while not meeting the exact requirements of the regulations, performs in a manner that is equal or superior to the technology that is required. EPA and the state have preliminarily determined that the firm’s approach is acceptable, and the parties are now in the process of identifying a legally enforceable alternative for the facility, such as a site-specific rule. This Project XL program is designed to encourage coal miners to remine and reclaim abandoned coal mine sites. Under current regulations, operators must meet numeric limits under the National Pollutant Discharge Elimination System (NPDES) at individual discharge points. Operators may be reluctant to engage in remining activities because they may exceed these limits due to pre-existing discharges from closed mines. Under this project, operators can use Best Management Practices and monitor the concentration of pollutants in-stream, which is expected to reduce risk and expense to coal mine operators, improve water quality, and increase the number of operators participating in remining and reclamation activities. The Lucent Technologies Microelectronics Group entered into a Project XL Agreement with EPA that is designed to test whether an environmental management system (EMS) could be used to develop a single document to cover all environmental aspects of a regulated entity that has demonstrated superior environmental performance. It will also explore, among other things, whether it is appropriate to use an EMS as a basis for granting regulatory flexibility and if there are regulatory approaches that are cheaper, cleaner, and smarter ways of protecting the environment. Established under the Joint EPA/State Agreement to Pursue Regulatory Innovation, this initiative seeks to allow “barge scale” (iron oxide) material produced during the barge-cleaning process to be treated as a marketable product.
Currently classified as either industrial or hazardous waste, the material is transported and treated at an off-site RCRA facility, with any remaining residue placed in an authorized landfill. Under this agreement, the participating facility would use its onsite thermal oxidizer to convert the material for use as a product. This project is expected to result in reduced risk of exposure to hazardous materials for employees, the public, and the environment and in resource savings for the participant. The Merck Stonewall plant is located near the Shenandoah National Park in Virginia—an area of special concern for air quality. Merck was one of the first participants in Project XL, and its proposal was designed to improve air quality in the area. Under the agreement, Merck agreed to convert its coal-burning powerhouse to burn natural gas, resulting in lower levels of emissions. In exchange for this commitment, the facility would be allowed to function under an emissions cap for criteria pollutants, allowing Merck to make process changes without first obtaining EPA approval. This proposal, submitted under the Joint State/EPA Agreement to Pursue Regulatory Innovation, seeks EPA’s approval for a modification of pretreatment requirements for the Hopewell Regional Wastewater Treatment Facility under the Clean Water Act. The facility treats wastewater from a number of industrial facilities, and current regulations require that standards for water quality be met at the industrial users’ end-of-pipe. The standards were designed for facilities that treat domestic wastewater; because the Hopewell facility treats only industrial wastewater, it would like these requirements modified to allow it to meet the standards at its own end-of-pipe, thus eliminating redundant treatment processes and resulting in improved quality in the receiving stream.
The Environmental Cooperation Pilot Program (ECPP) was developed by the Wisconsin Department of Natural Resources (DNR) to allow facilities to test innovative approaches to environmental protection in exchange for superior environmental performance. Through the program, which is authorized by Wisconsin statute, the DNR may enter into agreements with up to 10 different facilities in the state. The Pleasant Prairie Power Plant is one of the participating facilities. Under the agreement, the facility commits to a number of measures, including the use of pollution prevention techniques and the adoption of an environmental management system. In exchange, the facility will enjoy the benefits of alternative monitoring, reduced reporting, permit streamlining, and recovery and combustion of ash stored in the company’s landfills. The Project XL proposal for Wisconsin Electric Power Company was designed to create an integrated, multi-pollutant air quality approach for all six of the company’s coal-burning power plants. Under the agreement, Wisconsin Electric would meet certain limits for sulfur dioxide, nitrogen oxides, and particulate matter that are more stringent than current requirements. In exchange, Wisconsin Electric would be granted flexibility in making certain changes at the facilities. Specifically, it would be exempt from some of the requirements for New Source Review, Prevention of Significant Deterioration, and New Source Performance Standards if the changes meet certain qualifications. This agreement was expected to give Wisconsin Electric an incentive to make improvements to the system and to result in lower emissions, while also producing cost savings from paperwork reduction and efficiency gains for Wisconsin Electric and the DNR. To date, EPA has not approved this proposal.
Georgia, Nebraska, Tennessee, and Washington also participated in interviews, but they did not identify an innovation that they proposed to EPA. In addition to the individual named above, Mike Hartnett and Stephanie Luehr contributed significantly to this report. Kimberly Clark, Karen Keegan, and Jonathan McMurray also made significant contributions.
The Environmental Protection Agency (EPA) issues regulations that states, localities, and private companies must comply with under the existing federal approach to environmental protection. This approach has been widely criticized for being costly, inflexible, and ineffective in addressing some of the nation's most pressing environmental problems. The states have used several methods to obtain EPA approval for innovative approaches to environmental protection. Among the primary approaches cited by the state environmental officials GAO interviewed are EPA's Project XL and the Joint EPA/State Agreement to Pursue Regulatory Innovation. Officials in most states told GAO that they faced significant challenges in submitting proposals to EPA, including resistance from within the state environmental agency and a lack of adequate resources to pursue innovative approaches. EPA recognizes that it needs to do more to encourage innovative environmental approaches by states and other entities. As a result, EPA has (1) issued a broad-based draft strategy entitled "Innovating for Better Environmental Results" and (2) adopted the recommendations of an internal task force, which advocated the consideration of innovative alternatives as new regulations are developed.
Drug court programs are designed to address the underlying causes of an offender’s behavior—alcohol and drug addiction and dependency problems. Drug court programs share several general characteristics but vary in their specific policies and procedures because of, among other things, differences in local jurisdictions and criminal justice system practices. In general, judges preside over drug court proceedings, which are called status hearings; monitor offenders’ progress with mandatory drug testing; and prescribe sanctions and incentives as appropriate in collaboration with prosecutors, defense attorneys, treatment providers, and others. Drug court programs vary in terms of the substance-abuse treatment required. However, most programs offer a range of treatment options and generally require a minimum of 1 year of participation before an offender completes the program. Practices for determining defendants’ eligibility for drug court participation vary across drug court programs, but typically involve screening defendants for their criminal history, current case information, whether they are on probation, and their substance use, which can include the frequency and type of use, prior treatment experiences, and motivation to seek treatment. In 2005, we reported that, based on the literature reviewed, eligible drug-court program participants ranged from nonviolent offenders charged with drug-related offenses who had substance addictions to relatively medium-risk defendants with fairly extensive criminal histories who had failed prior substance-abuse-treatment experiences. Appendix IV presents additional information about the general characteristics of drug court programs. As shown in appendix V, BJA, in collaboration with the National Association of Drug Court Professionals (NADCP), identified The Key Components, which describes the basic elements that define drug courts and offers performance benchmarks to guide implementation.
BJA administers the Adult Drug Court Discretionary Grant Program to provide financial and technical assistance to states, state courts, local courts, units of local government, and Indian tribal governments to develop and implement drug treatment courts. Through the Adult Drug Court Discretionary Grant Program, BJA offers funding in four broad drug-court grant categories. See appendix VI for a more detailed discussion of each of the following grant categories.

Implementation grants: Available to jurisdictions that have completed a substantial amount of planning and are ready to implement an adult drug court.

Enhancement grants: Available to jurisdictions with a fully operational (at least 1-year) adult drug court.

Statewide grants: Available for two purposes: (1) to improve, enhance, or expand drug court services statewide through activities such as training and/or technical assistance programs for drug court teams and (2) to financially support drug courts in local or regional jurisdictions that do not currently operate with BJA Adult Drug Court Discretionary Grant Program funding.

Joint grants: In fiscal year 2010, BJA, in collaboration with the Department of Health and Human Services, Substance Abuse and Mental Health Services Administration (SAMHSA), offered a joint grant program for the enhancement of adult drug court services, coordination, and substance-abuse treatment capacity.

From fiscal years 2006 through 2010, Congress appropriated about $120 million for DOJ’s administration of all drug court programs. Of this amount, $76 million was used for the Adult Drug Court Discretionary Grant Program, which includes funding provided to grantees through the previously mentioned grant categories. The grant award totals for the Adult Drug Court Discretionary Grant Program increased from $2 million in fiscal year 2006 to $29 million in fiscal year 2010.
Correspondingly, the number of Adult Drug Court Discretionary Grant Program awards increased from 16 in fiscal year 2006 to 110 in fiscal year 2010—an increase of 588 percent, as shown in figure 1. With regard to effectiveness, however, drug courts have been difficult to evaluate because they are so varied, and the resources required to conduct a study that would allow conclusions about the effectiveness of drug courts can be substantial. In particular, while drug courts generally adhere to certain key program components, they can differ in factors including admission criteria, type and duration of drug treatment, degree of judicial monitoring and intervention, and application of sanctions for noncompliance. In February 2005, we studied drug courts and reported that in most of the 27 drug-court program evaluations we reviewed, adult drug court programs led to recidivism reductions during periods of time that generally corresponded to the length of the drug court program. Several syntheses of multiple drug court program evaluations, conducted in 2005 and 2006, also concluded that drug courts are associated with reduced recidivism rates compared to traditional correctional options. However, the studies included in these syntheses often had methodological limitations, such as the lack of equivalent comparison groups and the lack of appropriate statistical controls. BJA collects an array of performance data from its adult drug court grantees through its Performance Measurement Tool (PMT) and OJP’s Grants Management System (GMS). Since fiscal year 2008, BJA has required grantees to submit quantitative performance data on a quarterly basis and qualitative performance information on a semi-annual basis. The quantitative information grantees submit to BJA varies depending on the type of grant awarded.
For example, information that BJA can calculate based on what Implementation grantees have been required to submit quarterly includes “the percent of drug court participants who exhibit a reduction in substance use during the reporting period,” “the percent of program participants who re-offended while in the drug court program,” and “the number and percent of drug court graduates.” Information that BJA can calculate based on what Enhancement grantees have been required to submit includes “the increase in units of substance-abuse treatment services” and “the percent increase in services provided to participants.” In addition to the quarterly reporting of quantitative performance data, all adult drug court grantees must submit progress reports semi-annually. As part of these progress reports, grantees provide qualitative or narrative responses to seven questions. Table 1 shows the seven questions to which grantees must submit narrative responses when completing their semi-annual reports. BJA officials told us that grant managers regularly review individual grantees’ quarterly performance data and semi-annual progress reports and use this information to determine whether additional training or technical assistance could improve their performance. However, according to BJA officials, resource constraints in the past had prevented staff from fully analyzing the performance data BJA collects from all adult drug court grantees—specifically the analysis of grantees’ answers to the seven narrative questions—to identify more effective program approaches and processes to share with the drug court community. In early fiscal year 2011, BJA officials initiated a new process called GrantStat to maximize the use of performance information by leveraging the resources of other BJA divisions, BJA’s training and technical assistance partners, its contractor, and other key stakeholders.
GrantStat provides an analytical framework to assess grantee performance data and other relevant information on a semi-annual basis to determine the effectiveness of the grant programs in BJA’s portfolio. In September 2011, BJA officials applied GrantStat to a review of the Adult Drug Court Discretionary Grant Program. As part of the process, they collected, reviewed, and analyzed performance data and other relevant information from a cohort of Implementation grantees to determine the overall effectiveness of the adult drug court program and to identify grantees that might need additional technical assistance to improve their outcomes. BJA officials told us that as part of the GrantStat review, they and their technical-assistance provider’s staff reviewed selected Implementation grantees’ responses to the seven narrative questions and discussed common issues they each identified. For example, BJA identified that a number of grantees had lower-than-expected capacity because drug court stakeholders (e.g., district attorneys) were referring fewer drug-involved defendants to these drug courts. BJA also reported reviewing and discussing other qualitative information, such as the training and technical assistance provider’s site-visit reports, to determine grantees’ fidelity to the 10 key components. BJA officials acknowledged that prior to GrantStat, they had not leveraged the summary data that BJA’s technical assistance providers had previously compiled from grantees’ narrative responses to these seven questions and indicated that future iterations of GrantStat would continue to include both qualitative and quantitative performance data reviews. Our prior work has emphasized the importance of using performance data to inform key decisions and underscored that performance measures can be used to demonstrate the benefits of a program or identify ways to improve it.
In addition, we have reported that effective performance measurement systems include steps to use performance information to make decisions. In doing so, program managers can improve their programs and results. Recognizing that BJA is working through GrantStat to improve its use of performance data in managing the drug court program, we identified six management activities for which performance information can be most useful to decision makers and benchmarked BJA’s practices against them. The six activities are: (1) setting program priorities, (2) allocating resources, (3) adopting new program approaches, (4) identifying and sharing with stakeholders more effective program processes and approaches, (5) setting expectations for grantees, and (6) monitoring grantee performance. See appendix VII for the definitions of the six management activities. As illustrated in table 2, BJA has current and planned efforts underway across all six activities. According to BJA officials, after the GrantStat review, they identified trends and developed several potential findings and action items for program design changes. However, BJA officials added that since the action items originated from GrantStat’s first review, they are not implementing them immediately. Instead, BJA plans to evaluate the action items over the next 6 months to ensure they are feasible and effective alternatives for improving grantee outcomes. We are encouraged by BJA’s recent efforts to regularly analyze grantee performance data to determine whether the program is meeting its goals. We also are encouraged that BJA is using this information to better inform its grant-related management activities, such as setting program priorities, identifying and sharing effective processes and approaches, and setting expectations for grantees. During the course of our review, BJA revised its adult drug court program performance measures to improve their reliability and usefulness.
BJA provided us with the revised measures on October 28, 2011. According to BJA officials, unclear definitions of some of the previous measures confused grantees about what data elements they were expected to collect. For example, officials told us that grantees may have been confused about how to measure "the number of participants admitted" and "the number of drug court participants." Specifically, BJA officials added that their analysis of several years of data shows that some grantees reported the same number for these two measures, some reported a higher number of participants than were admitted, a few reported a smaller number of participants than the number admitted, and some reported these two measures in each of these three ways over multiple reporting periods. According to BJA officials, such a wide degree of variability made these measures unreliable, and BJA was thus hindered from comparing grantee performance data across grantee cohorts. BJA's performance measure revisions resulted in the following:

• All grantees are required to report on "participant level" measures, such as the demographic make-up of their drug court participant populations, the amount of service provided to their participants, and the geographic location of their drug courts.

• Enhancement, Joint, and Statewide grantees are required to report on participant-level outcomes, such as graduation rates, to ensure consistency with measures BJA collects from Implementation grantees.

• Measures previously excluded from the PMT, such as retention rates and outcomes of participants once they complete the drug court program, are now included.

• BJA has established two sets of benchmarks as points of reference against which to gauge grantees' performance. The first set requires a comparison of grantees' performance against averages of drug court performance derived from research; the second set requires a comparison of grantees' performance to historical performance data reported to BJA by adult drug court grantees.

• BJA revised the descriptions and definitions of the measures to help ensure their clarity.

To revise the performance measures, BJA officials consulted with technical assistance providers and a drug court researcher to discuss possible improvements, reviewed drug court literature, and reviewed and analyzed BJA grantees' clarification and information requests to identify the most common problems adult drug court grantees historically experienced in submitting performance information to BJA. In addition, BJA obtained comments on the proposed measures from BJA staff and other DOJ stakeholders, as well as from Enhancement, Implementation, Joint, and Statewide grantees. BJA officials also invited all current grantees to participate in four teleconferences to obtain their feedback on the feasibility of collecting and reporting the new measures and their suggestions for improving the clarity of the measures' definitions and descriptions. BJA officials finalized the new measures in October 2011 and plan to monitor grantees' performance data submissions closely to ensure the reliability and usefulness of the measures, revising them as necessary after the first reporting period. BJA officials stated that they expect to review the measures' overall reliability and validity after the first reporting period—October 1, 2011, through December 30, 2011. BJA officials reported that the revised measures will strengthen the reliability and improve the usefulness of grantee performance data in making grant-related decisions. For example, BJA officials stated that reliable and useful data would help them identify the most effective grantees and the common characteristics these courts share, to inform the types of drug courts the officials choose to fund in future grant solicitations.
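The two benchmark comparisons described above can be illustrated with a brief sketch. The benchmark values, grantee name, and graduation-rate figures below are hypothetical; BJA's actual benchmark figures and measure definitions are not reproduced in this report.

```python
# Illustrative sketch of BJA's two benchmark comparisons: one benchmark
# derived from drug court research and one from historical grantee data.
# All names and numbers here are hypothetical, not BJA's actual values.

RESEARCH_BENCHMARK = 0.50    # hypothetical research-derived average graduation rate
HISTORICAL_BENCHMARK = 0.46  # hypothetical average from prior grantee reporting


def benchmark_grantee(name: str, graduation_rate: float) -> dict:
    """Compare one grantee's graduation rate against both benchmark sets."""
    return {
        "grantee": name,
        "vs_research": graduation_rate - RESEARCH_BENCHMARK,
        "vs_historical": graduation_rate - HISTORICAL_BENCHMARK,
        "meets_both": (graduation_rate >= RESEARCH_BENCHMARK
                       and graduation_rate >= HISTORICAL_BENCHMARK),
    }


result = benchmark_grantee("Example Drug Court", 0.55)
print(result)
```

A review process such as GrantStat could then flag grantees whose measures fall below either reference point for follow-up.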
BJA officials also reported that, as a result of the revision, they expect to be able to conduct the more sophisticated analyses in GrantStat that are needed to inform grant-related decisions. For example, BJA officials told us that implementing benchmarks and participant-level measures will enable the agency to compare similar drug courts (e.g., large urban jurisdictions of similar size, demographic make-up, and geographic context) to one another and across jurisdictions, thereby improving BJA's understanding of grantees' impact on the populations they serve. BJA's process to revise its performance measures generally adhered to some of the key practices that we have identified as important to ensuring that measures are relevant and useful to decision making. These key practices include obtaining stakeholder involvement and ensuring that the measures have certain key attributes, such as clarity. The key practices also describe the value of testing the measures to ensure that they are credible, reliable, and valid, and of documenting key steps throughout the revision process. However, BJA could take actions to improve its efforts in these latter two areas. For instance, BJA officials told us that after the grantees' first reporting period concludes, they plan to assess the data that grantees submitted to ensure that the measures produce reliable and useful data over at least the first quarter of fiscal year 2012; they stated that, if necessary, they will then revise the measures further. Nevertheless, BJA officials have not documented how they will determine whether the measures were successful or whether changes are needed. In addition, BJA officials did not record key methods and assumptions used to guide their revision efforts, such as the feedback stakeholders provided and BJA's disposition of those comments.
For example, BJA officials provided a document generally showing each original performance measure; whether it was removed, revised, or replaced; and BJA's justification for the action, but this document did not demonstrate how BJA had incorporated the stakeholder feedback it considered when making its decisions. The document also did not include a link to a new performance measure in instances where an older one was being replaced. Further, BJA's justification did not include the rationale for the changes it made to 22 of the 51 performance measures. According to BJA officials, they did not document their decisions in this way because of the rapid nature of the revision process and limited staff resources. They also told us that maintaining such documentation and providing it to stakeholders held little value. Our previous work has shown the importance of documentation to the successful development of effective performance measures. In the past, we have reported that revising performance measures involves a number of aspects that need to be carefully planned and carried out and that, by documenting the steps undertaken in developing and implementing the revised measures, agencies can be better assured that their revisions result in effective performance measures. In addition, academic literature on best practices for developing effective performance measures states that agencies should develop products to document and guide their revision efforts. These products can include, among other things, plans for ensuring the quality and integrity of the data for full-scale implementation of the measures. Further, Standards for Internal Control in the Federal Government call for clear documentation of significant events, which can include the assumptions and methods surrounding key decisions, and this documentation should be readily available for examination.
As BJA moves forward in assessing the revised measures and implementing additional changes it deems necessary, documenting the key methods used to guide its revision efforts and its assessment of the measures could better ensure that those efforts are transparent and result in successful and reliable metrics. This would also help bolster the integrity of its decisions. In the evaluations we reviewed, adult drug court program participation was generally associated with lower recidivism. Our analysis of evaluations reporting recidivism data for 32 programs showed that drug court program participants were generally less likely to be rearrested than comparison group members drawn from the criminal court system, with the differences in likelihood reported as statistically significant for 18 of the programs. Across studies showing rearrest differences, the percentages of drug court program participants rearrested were lower than those for comparison group members by 6 to 26 percentage points. One program did not show a lower rearrest rate for all drug court program participants relative to the comparison group within 3 years of entry into the program, although that study did show a lower rearrest rate for participants who had completed the program than for members of the comparison group. In general, the evaluations we reviewed found larger differences in rearrest rates between drug court program completers and comparison group members than between all drug court program participants and comparison group members. The rearrest rates for program completers ranged from 12 to 58 percentage points below those of the comparison group. Completion rates reported in the evaluations we reviewed ranged from 15 percent to 89 percent. Included among the evaluations we reviewed was the MADCE, a 5-year longitudinal process, impact, and cost evaluation of adult drug courts.
The MADCE reported a rearrest rate for drug court participants that was 10 percentage points below that of the comparison group; specifically, 52 percent of drug court participants were rearrested after initiation of the drug court program, while 62 percent of comparison group members were rearrested. However, this 10 percentage point difference was not statistically significant. The MADCE study also reported that drug court participants were significantly less likely than the comparison group to self-report having committed crimes when interviewed 18 months after the baseline (40 percent vs. 53 percent), and that participants who did report committing crimes committed fewer crimes than comparison group members. We assigned a numerical rating to each evaluation to reflect the quality of its design and the rigor of the analyses conducted. Our methodology for rating the evaluation studies is detailed in appendix III. After assigning the ratings, we grouped the studies into two tiers. Tier 1 studies were the most carefully designed and incorporated substantial statistical rigor in their analyses. Tier 2 studies, while still meeting our basic criteria for methodological soundness, were relatively less rigorous in their design and analyses. Both tier 1 and tier 2 studies reported differences between drug court participants and comparison group members, and both sets of studies found that some but not all differences were statistically significant. Table 3 shows whether a difference in recidivism rates was reported for each program—expressed as the difference in the rate of rearrest between all drug court program participants and the comparison group. In some cases, the difference in recidivism was reported as something other than a difference in the rearrest rate, such as a difference in the number of arrests or the relative odds of an arrest.
In those cases, table 3 notes that a difference was reported but does not include the difference in rearrest rates. For example, the evaluation of the Queens Misdemeanor Treatment Court reported that the rearrest rate for program participants was 14 percentage points lower than the rearrest rate of comparison group members up to 2 years after participants entered the program, and 10 percentage points lower at 3 or more years after entry. Similarly, the evaluation of the Hillsborough County Adult Drug Court reported a statistically significant difference in the relative odds of an arrest after drug court program enrollment but did not report the difference in rearrest rates; therefore, table 3 indicates a statistically significant reduction in rearrest rates but does not show the difference in rates. The evaluations we reviewed showed that adult drug court program participation was also associated with reduced drug use. Our analysis of evaluations reporting relapse data for eight programs showed that drug court program participants were less likely than comparison group members to use drugs, based on drug tests or self-reported drug use, although the difference was not always statistically significant. This was true for both within-program and post-program measures, and whether drug use was reported as the difference in the frequency of drug use or the proportion of the treatment and comparison groups who used drugs. The MADCE concluded that drug courts produce significant reductions in drug relapse. Specifically, MADCE reported that "drug court participants were significantly less likely than the comparison group to report using all drugs (56 vs. 76 percent) and also less likely to report using 'serious' drugs (41 vs. 58 percent), which omit marijuana and 'light' alcohol use (fewer than four drinks per day for women or less than five drinks per day for men).
On the 18-month oral fluids drug test, significantly fewer drug court participants tested positive for illegal drugs (29 vs. 46 percent). Further, among those who tested positive or self-reported using drugs, drug court participants used drugs less frequently than the comparison group." Regarding post-program relapse, the MADCE concluded that participation in drug court, less frequent drug use among offenders prior to arrest, and the absence of mental health problems were the strongest predictors of success against relapse. Table 4 summarizes the results on drug-use relapse reported in the evaluations we reviewed. Of the studies we reviewed, 11 included sufficient information to report a net benefit figure; across these studies, the net benefit ranged from positive $47,852 to negative $7,108 per participant. The net benefit is the monetary benefit of reduced recidivism accrued to society from the drug court program through reduced future victimization and justice system expenditures, less the net cost of the drug court program—that is, the cost of the program less the cost of processing a case in criminal court. A negative net benefit value indicates that the costs of the drug court program outweigh its estimated benefits and that the program was not found to be cost beneficial. Eight of the 11 studies reported positive net benefits—the benefits estimated to accrue from the drug court program exceeded the program's net costs. Three of the 11 studies reported negative net benefits. We did not attempt to determine whether the differences in the reported values were because of differences in study methodology or the attributes of the drug courts themselves. The environment in which the drug court operates may also be important.
For example, the largest net benefit reported was for Kings County, where members of the comparison group were incarcerated, in contrast to other programs in which members of the comparison group were given probation, which is less costly. The more costly the alternative, such as incarceration, the more likely a drug court is to have positive net benefits. In this case, the study reported that society would accrue $47,852 in benefits per participant relative to conventional court processing. Table 5 below shows whether, based on the available information, each study was shown to be cost beneficial. It also shows the net benefit per participant reported in each study. For example, the MADCE found that drug court participation led to a net benefit of $6,208 per participant—within the range of the other studies. The MADCE analysis of costs and benefits is discussed further in appendix II. During the course of our review, BJA made strides in managing its adult drug court program, including implementation of the GrantStat process and recent revisions to the grantee performance measures. Given that BJA has committed to testing its new measures during grantees' first reporting period, enhancements could be made to facilitate this assessment. By documenting how it plans to assess the measures and determine any changes that may be needed, and by providing the rationale for future revisions, BJA could bolster the transparency and integrity of its decisions. Doing so could also improve the reliability of the data it collects, the data's usefulness to managers in guiding the program, and the success of its measures.
Recognizing that BJA has recently revised the adult drug court performance measures and has plans to assess their utility, we recommend that BJA's Director take the following action to ensure that the revision process is transparent and results in high-quality, successful metrics to inform management's key decisions on program operations:

• Document key methods used to guide future revisions of its adult drug court program performance measures. This documentation should include both a plan for how BJA will assess the measures after the conclusion of the grantees' first reporting period and a rationale for why each measure was refined, including a discussion of the scope and nature of any relevant stakeholder comments.

We provided a draft of this report to DOJ for review and comment. On December 1, 2011, we received written comments on the draft report from DOJ, which are reproduced in full in appendix VIII. DOJ concurred with our recommendation and described actions under way or planned to address it. DOJ also provided technical comments, which we incorporated as appropriate. DOJ stated that BJA will continue to document grantee feedback and will ensure that revisions to the measures are documented in accordance with GAO's best practices standards. In particular, DOJ stated that BJA will document (1) whether the name and definition of each measure is consistent with the methodology used to calculate it; (2) whether the measure is reasonably free from bias; (3) whether the measure meets the expectations of the program; and (4) its rationale for why each performance measure was refined, including the scope and nature of any relevant stakeholder comments. We believe that such actions would improve the reliability of the information collected, its usefulness to managers in making key decisions on program operations, and the success of BJA's measures. We are sending copies of this report to the Attorney General and interested congressional committees.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9627 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX. The following provides the current status of the seven recommendations we made in 2002—which have since been closed—on DOJ's collection of performance data. Specifically, DOJ has fully implemented six of them and partially implemented one. DOJ has plans to fully address the remaining recommendation, which relates to analyzing performance and outcome data collected from grantees and reporting annually on the results. Table 6 reflects this status. NIJ's MADCE was conducted by the Urban Institute, the Center for Court Innovation, and the Research Triangle Institute. Data were collected from 1,156 drug court participants in 23 different drug courts in seven geographic clusters and from a comparison group of 625 drug-involved offenders in six different sites in four geographic clusters. Data collected included three waves of interviews; drug tests; administrative records on treatment, arrests, and incarceration; court observation and interviews with staff and other stakeholders; and budget and other cost information. The evaluation was designed to address the following four questions: (1) Do drug courts reduce drug use, criminal behavior, and other associated offender problems? (2) Do drug courts generate cost savings for the criminal justice system and other public institutions? (3) Are drug courts especially effective or less effective for certain categories of offenders or program characteristics? (4) Which drug court policies and offender perceptions explain their overall impact?
The MADCE's major findings can be summarized as follows:

• Drug courts produce statistically significant reductions in self-reported crime. While both drug court participants and comparison group members reported large numbers of crimes in the year preceding the 18-month follow-up, drug court participants reported statistically significantly fewer crimes than comparison group members. Drug court participants were less likely than members of the comparison group to report committing any crimes (40 percent vs. 53 percent), and they reported committing fewer crimes in the preceding 12 months (43 criminal acts vs. 88 criminal acts). The difference between the two groups in the probability of an official rearrest over 24 months was not statistically significant, though the percentage of individuals rearrested was lower for the drug court group than for the comparison group (52 percent vs. 62 percent), as was the average number of rearrests (1.24 vs. 1.64).

• Drug courts produce statistically significant reductions in drug use. Drug court participants were less likely than members of the comparison group to report using any drugs (56 percent vs. 76 percent) or any serious drugs (41 percent vs. 58 percent), and they were less likely to test positive for drugs at the 18-month follow-up (29 percent vs. 46 percent). Furthermore, the large difference in self-reported relapse rates is evident at 6 months (40 percent vs. 59 percent), so the impact of drug courts on alcohol and other drug use is sustained. The interview data also indicate that, among the drug court participants and comparison group members who were using drugs, the drug court participants, on average, were using them less frequently.

• Drug court participants reported some benefits, relative to comparison group members, in other areas of their lives. At 18 months, drug court participants were statistically significantly less likely than comparison group members to report a need for employment, educational, and financial services, and they reported statistically significantly less family conflict. However, there were modest, nonsignificant differences in employment rates, income, and family emotional support, and no differences were found in experiencing homelessness or depression.

• Regardless of background, most offenders who participated in drug courts had better outcomes than offenders in the comparison programs. However, the impact of drug courts was greater for participants with more serious prior drug use and criminal histories, and smaller for participants who were younger, male, African-American, or who had mental health problems.

• While treatment and service costs were higher for drug court participants than for the alternative "business-as-usual" comparison programs, drug courts save money through improved outcomes, according to the researchers, primarily through savings to victims resulting from fewer crimes and savings resulting from fewer rearrests and incarcerations.

The authors of the study assert that their findings have strong internal validity—that is, that the findings were actually produced by the drug court programs—and external validity—that is, that the findings can be generalized to the population of all drug court participants and potential comparison group members. The claim to strong internal validity is not without merit, given the high response rates, low attrition, propensity score adjustments, and conservative estimates produced by the hierarchical models used. The claim of high internal validity is also supported by the sensitivity analyses undertaken for several outcomes using other models and methods of adjustment that produced little or no change in conclusions.
The claim to strong external validity, which relates to the generalizability of the results beyond the sample of courts and comparison sites and the specific offenders considered, may be somewhat overstated. The authors note that the 23 drug courts included in the study represent "a broad mix of urban, suburban, and rural courts from 7 geographic clusters nationwide," but that does not ensure that, collectively, the drug courts that were included resemble the hundreds of drug courts that were not, especially since they were not chosen at random. It also seems unlikely that the six comparison sites from four states are representative of all potential controls, or of all alternative programs in all states, and it is potentially problematic that all of the selected sites, including drug court and comparison sites, were alike in their willingness and interest in participating. Those concerns notwithstanding, this is the broadest and most ambitious study of drug courts to date; it is well done analytically; and the results, as they relate to the impact of drug courts, are transparent and well described. The MADCE cost-benefit analysis approach differed from that of most of the other studies we reviewed. In most of the other studies, the average cost and benefit of a drug court participant was compared to the average cost and benefit of normal court processing. In contrast, the MADCE obtained a separate net benefit figure for each individual. The net benefit was obtained by tracking each individual's use of resources, such as hearings or meetings with case managers, and program outcomes, such as use of public assistance. The MADCE also tracked each individual's rates of rearrest, number of crimes, and time of incarceration. The number of crimes is multiplied by the estimated cost to victims per crime to obtain the cost to society.
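The individual-level net-benefit accounting described above (pricing each person's avoided crimes at an estimated cost to victims, adding avoided justice-system expenditures, and netting out the program's cost relative to conventional court processing) can be sketched as follows. All unit prices and the sample participant are hypothetical; the MADCE's actual weighted-average prices are not reproduced here.

```python
# Minimal sketch of an individual-level net-benefit calculation of the
# kind described above. All unit prices and figures are hypothetical.

VICTIM_COST_PER_CRIME = 2500.0   # hypothetical average cost to victims per crime
COST_PER_ARREST = 1200.0         # hypothetical justice-system cost per rearrest
COST_PER_JAIL_DAY = 85.0         # hypothetical cost per day of incarceration


def net_benefit(program_cost, baseline_cost, crimes_avoided,
                arrests_avoided, jail_days_avoided):
    """Benefits of reduced offending less the program's net cost
    (program cost minus conventional court-processing cost)."""
    benefits = (crimes_avoided * VICTIM_COST_PER_CRIME
                + arrests_avoided * COST_PER_ARREST
                + jail_days_avoided * COST_PER_JAIL_DAY)
    net_program_cost = program_cost - baseline_cost
    return benefits - net_program_cost


# One hypothetical participant with fewer crimes, arrests, and jail days
# than a comparable conventionally processed offender.
value = net_benefit(program_cost=9000.0, baseline_cost=4000.0,
                    crimes_avoided=3, arrests_avoided=1,
                    jail_days_avoided=30)
print(value)  # prints 6250.0; positive means cost beneficial for this case
```

A negative result would indicate that, for that individual, the program's net cost exceeded the benefits, mirroring the negative net-benefit studies discussed earlier.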
The difference between the net benefits of the drug court participants and the comparison group was obtained using a hierarchical model similar to the one used for program outcomes. After applying this method, the MADCE found that drug court participation led to a net benefit to society of $6,208 per participant, as compared to the comparison group. However, because of variability in the estimate, the study did not find this net benefit to be statistically significant. The lack of a statistically significant difference may be because of greater variability in the MADCE approach than in the approach used in other studies. Specifically, the MADCE did not assume identical costs for each participant. As a result, costs may be higher for individuals who have lower rates of rearrest, perhaps because those individuals received more treatment. According to the study's authors, by assuming identical costs for each participant, the standard approach understates the variance in the computed net benefit figure by not including the variability in cost. However, the MADCE authors assumed that the prices of services were consistent across sites by using a weighted average. In contrast, some studies generate site-specific cost figures. In this way, the MADCE approach did exclude one source of variation that is present in some other studies. In addition to tracking costs and benefits at the individual level, the MADCE also included some effects of drug court participation that some other studies omit. This is consistent with OMB guidance stating that studies should be comprehensive in the benefits and costs to society considered. One of the benefits considered by the MADCE, sometimes omitted elsewhere, is the estimated earnings of the drug court participant. However, it is unclear that the full value of earnings should have been considered a net benefit to society. For example, to be comprehensive, a study should also consider the cost to society of providing that benefit.
The net benefit would instead account for the value of production from this employment less the wages paid. In this case, however, it is unlikely that this would affect the result of the analysis, as the earnings are similar for drug court participants and the comparison group. To determine what data DOJ collects on the performance of federally funded adult drug courts and to what extent DOJ has used these data in making grant-related decisions, we analyzed the reporting guidance and requirements that BJA provided in fiscal years 2007 through 2011 to grantees applying for Adult Drug Court Discretionary Grant Program funds; BJA-generated grantee performance data reports from October to December 2010; and BJA's guides for managing grants and enforcing grantee compliance that were issued in fiscal year 2011. We selected 2007 as the starting point for our review because BJA implemented its Performance Measurement Tool (PMT)—an online reporting tool that supports BJA grantees' ability to collect, identify, and report performance-measurement data for activities funded by the grantees' awards—in fiscal year 2007. We also reviewed our prior reports and internal control standards, as well as other academic literature, regarding effective performance-management practices. We then used this information and BJA officials' statements to identify and define six management activities for which performance information can be most useful in making grant-related decisions. Further, we interviewed cognizant BJA officials about the extent to which they use grantees' performance data when engaging in these management activities, any challenges faced in ensuring grantee compliance, ongoing efforts to revise program performance metrics, and the extent to which BJA's revisions incorporate best practices we previously identified.
To determine what is known about the effectiveness of adult drug courts in reducing recidivism and substance-abuse relapse rates, and what the costs and benefits of adult drug courts are, we conducted a systematic review of evaluations of drug court program effectiveness issued from February 2004 through March 2011 to identify what is known about the effect of drug court programs on the recidivism and relapse of drug-involved individuals, as well as the costs and benefits of drug courts. We also reviewed DOJ's NIJ-funded MADCE, a 5-year longitudinal process, impact, and cost evaluation of adult drug courts that was issued in June 2011. We identified the universe of evaluations to include in our review using a three-stage process. First, we (1) conducted key-word searches of criminal justice and social science research databases; (2) searched drug court program-related Web sites, such as those of BJA and NADCP; (3) reviewed bibliographies, meta-analyses of drug court evaluations, and our prior reports on drug court programs; and (4) asked drug court researchers and DOJ officials to identify evaluations. Our literature search identified 260 documents, which consisted of published and unpublished outcome evaluations, process evaluations, commentary about drug court programs, and summaries of multiple program evaluations. Second, we reviewed the 260 documents our search yielded and identified 44 evaluations that reported recidivism or substance-use relapse rates using either an experimental or quasi-experimental design, or that analyzed program costs and benefits. Third, we used generally accepted social science and cost-benefit criteria to review the 44 evaluations. To assess the methodological quality of evaluations that reported on recidivism or relapse rates, we placed each evaluation into one of five categories, with category 1 evaluations being the most rigorous and category 5 the least, as outlined in table 7.
We excluded studies that were placed in category 5 or in which the comparison group was not drawn from a criminal court. This left us with 33 studies, plus the MADCE, that reported on the effectiveness of 32 drug court programs or sets of programs. As noted in our report, we then grouped the 34 studies, including the MADCE, into two tiers according to their quality category: tier 1 studies were those that fell into categories 1 or 2, and tier 2 studies were those that fell into categories 3 or 4. Observed differences in recidivism could arise from measured and unmeasured sources of variation between drug court participants and comparison group members. If comparison group members differed systematically from drug court participants on variables that are also associated with recidivism, such as the degree of their substance-abuse problem, and these variables were not accounted for by the design or analysis used in the evaluation, then the study could suffer from selection bias, wherein observed differences in recidivism could be because of these sources of variation rather than participation in the drug court program. As indicated in table 7, our evaluation of the methods used to deal with selection bias was reflected in the quality categorization of each study. To assess the methodological quality of evaluations that reported on drug court program costs and benefits, we assessed them according to the five criteria we developed, outlined in table 8 below. We determined that an essential criterion for reporting a net benefit of drug courts was that the costs of the drug court be assessed against a baseline (i.e., "business-as-usual" or traditional court processing). Eleven studies met this essential standard and were used to report on program costs and benefits. We excluded other studies not meeting this standard even though they may have met other criteria.
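The screening and tiering rules described above amount to a simple mapping: category 5 studies are excluded, categories 1 and 2 form tier 1, and categories 3 and 4 form tier 2. A minimal sketch follows; the study names are hypothetical.

```python
# Sketch of the study screening and tiering rules described above.
# Category 1 is the most rigorous; category 5 studies are excluded.
# The study names below are hypothetical placeholders.

def assign_tier(category: int):
    """Map a methodological quality category to a review tier."""
    if category in (1, 2):
        return 1       # tier 1: most carefully designed, most rigorous
    if category in (3, 4):
        return 2       # tier 2: sound but relatively less rigorous
    return None        # category 5: excluded from the review


studies = {"Study A": 1, "Study B": 3, "Study C": 5, "Study D": 2}
tiers = {name: assign_tier(cat) for name, cat in studies.items()}
included = [name for name, tier in tiers.items() if tier is not None]
print(tiers)
```

Applying the same screen across all candidate evaluations yields the set of included studies, analogous to the 33 studies plus the MADCE retained in this review.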
To obtain information on our outcomes of interest—that is, recidivism, substance use relapse, and costs and benefits—we used data collection instruments to systematically collect information about the methodological characteristics of each evaluation, the drug court participants and comparison group members studied, and the reported outcomes of participants and comparison group members. Each evaluation was read and coded by a senior social scientist, statistician, or economist with training and experience in evaluation research methods. A second senior social scientist, statistician, or economist then reviewed each completed data collection instrument to verify the accuracy of the information included. Part of our assessment also focused on the quality of the data used in the evaluations, as reported by the researchers and as reflected in our observations of any problems with missing data, any limitations of data sources for the purposes for which they were used, and any inconsistencies in reporting data. We incorporated any data problems that we noted into our quality assessments. We selected the evaluations in our review based on their methodological strength; therefore, our results cannot be generalized to all drug court programs or their evaluations. Although the findings of the evaluations we reviewed are not representative of the findings of all evaluations of drug court programs, they consist of the evaluations we could identify that used the strongest designs to assess drug-court program effectiveness. To identify the extent to which DOJ has addressed the recommendations that we made in 2002 regarding drug court programs, we interviewed cognizant DOJ officials and obtained and reviewed documentation (e.g., drug-court program grant solicitations and grantee-performance reporting guidance) on the actions taken to address and implement each of our prior recommendations.
We conducted this performance audit from November 2010 through December 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. This appendix provides a general description of drug court program components (see table 9). Drug court programs rely on a combination of judicial supervision and substance-abuse treatment to motivate defendants’ recovery. Judges preside over drug court proceedings, which are called status hearings; monitor defendants’ progress with mandatory drug testing; and prescribe sanctions and incentives, as appropriate, in collaboration with prosecutors, defense attorneys, treatment providers, and others. Drug court programs can vary in terms of the substance-abuse treatment required. However, most programs offer a range of treatment options and generally require a minimum of about 1 year of participation before a defendant completes the program.

Appendix V: Ten Key Components of a Drug Court—Developed by BJA in Collaboration with the National Association of Drug Court Professionals

1. Integration of substance-abuse treatment with justice system case processing.
2. Use of a non-adversarial approach, in which prosecution and defense promote public safety while protecting the right of the participant to due process.
3. Early identification and prompt placement of eligible participants.
4. Access to a continuum of treatment, rehabilitation, and related services.
5. Frequent testing for alcohol and illicit drugs.
6. A coordinated strategy governs drug court responses to participants’ compliance.
7. Ongoing judicial interaction with each participant.
8. Monitoring and evaluation to measure achievement of program goals and gauge effectiveness.
9. Continuing interdisciplinary education to promote effective planning, implementation, and operation.
10. Forging partnerships among drug courts, public agencies, and community-based organizations generates local support and enhances drug court program effectiveness.

As mentioned, the Adult Drug Court Discretionary Grant Program provides financial and technical assistance to states, state courts, local courts, units of local government, and Indian tribal governments to develop and implement drug treatment courts. There are four different types of awards that BJA makes to adult drug-court grantees through the program. Table 11 provides a description of the grant types.

How performance information may be used to support the activity:

Performance information is used to set priorities in budgeting and to target resources. Agencies can also use this information to identify priorities on which to focus their efforts; for example, targeting grants to address “underserved” client groups.

Performance information is used to compare results of agencies’ programs with goals and to identify where program resources should be targeted to improve performance and achieve goals. When faced with reduced resources, such analyses can assist agencies’ efforts to minimize the impact on program results.

Performance information is used to assess the way a program is conducted and the extent to which a program’s practices and policies have or have not led to improvements in outcomes. Such information is used to identify problems and consider alternative approaches and processes in areas where goals are not being met and to enhance the use of program approaches and processes that are working well.

Performance information is used to identify and increase the use of program approaches that are working well and share these effective processes and approaches with stakeholders.
Performance information is used to establish the targets and goals that grantees are expected to achieve. These targets and goals can be used as the basis for corrective action (e.g., technical assistance, freezing of funds) or to reward high-performing grantees.

Performance information is used to compare grantees’ performance results with established targets and goals to determine the extent to which grantees have met them and, if necessary, target program resources (e.g., technical assistance) to improve grantees’ performance.

In addition to the contact named above, Joy Booth, Assistant Director, and Frederick Lyles, Jr., Analyst-in-Charge, managed this assignment. Christoph Hoashi-Erhardt, Michael Lenington, and Jerry Seigler, Jr., made significant contributions to the work. David Alexander, Benjamin Bolitzer, Michele Fejfar, and Doug Sloane assisted with design and methodology. Pedro Almoguera, Carl Barden, Harold Brumm, Jr., Jean McSween, Cynthia Saunders, Jeff Tessin, Susan B. Wallace, and Monique Williams assisted with evaluation review. Janet Temko provided legal support, and Katherine Davis provided assistance in report preparation.

Bouffard, Jeffrey A., and Katie A. Richardson. “The Effectiveness of Drug Court Programming for Specific Kinds of Offenders: Methamphetamine and DWI Offenders Versus Other Drug-Involved Offenders.” Criminal Justice Policy Review, 18(3) (September 2007): 274-293.
Carey, Shannon M., and Michael W. Finigan. Indiana Drug Courts: St. Joseph County Drug Court Program Process, Outcome and Cost Evaluation-Final Report. Portland, OR: NPC Research, 2007.
Carey, Shannon M., and Michael W. Finigan. Indiana Drug Courts: Vanderburgh County Day Reporting Drug Court Process, Outcome and Cost Evaluation-Final Report. Portland, OR: NPC Research, 2007.
Carey, Shannon M., and Michael W. Finigan.
“A Detailed Cost Analysis in a Mature Drug Court Setting: A Cost-Benefit Evaluation of the Multnomah County Drug Court.” Journal of Contemporary Criminal Justice, 20(3) (August 2004): 315-338.
Carey, Shannon M., Michael W. Finigan, et al. Indiana Drug Courts: Monroe County Drug Treatment Court Process, Outcome and Cost Evaluation-Final Report. Portland, OR: NPC Research, 2007.
Carey, Shannon M., Michael Finigan, Dave Crumpton, and Mark S. Waller. “California Drug Courts: Outcomes, Costs and Promising Practices: An Overview of Phase II in a Statewide Study.” Journal of Psychoactive Drugs (November 2006).
Carey, Shannon M., Lisa M. Lucas, Mark S. Waller, Callie H. Lambarth, Robert Linhares, Judy M. Weller, and Michael W. Finigan. Vermont Drug Courts: Rutland County Adult Drug Court Process, Outcome, and Cost Evaluation-Final Report. Portland, OR: NPC Research, 2009.
Carey, Shannon, and Gwen Marchand. Marion County Adult Drug Court Outcome Evaluation-Final Report. Portland, OR: NPC Research, 2005.
Carey, Shannon M., and Mark S. Waller. Oregon Drug Court Cost Study: Statewide Costs and Promising Practices-Final Report. Portland, OR: NPC Research, 2010.
Carey, Shannon M., and Mark Waller. California Drug Courts: Costs and Benefits-Phase III. Portland, OR: NPC Research, 2008.
Carey, Shannon M., and Mark S. Waller. Guam Adult Drug Court Outcome Evaluation-Final Report. Portland, OR: NPC Research, 2007.
Dandan, Doria Nour. Sex, Drug Courts, and Recidivism. University of Nevada, Las Vegas: 2010.
Ferguson, Andrew, Birch McCole, and Jody Raio. A Process and Site-Specific Outcome Evaluation of Maine’s Adult Drug Treatment Court Programs. Augusta, ME: University of Southern Maine, 2006.
Finigan, Michael W., Shannon M. Carey, and Anton Cox. Impact of a Mature Drug Court Over 10 Years of Operation: Recidivism and Costs (Final Report). Portland, OR: NPC Research, 2007.
Gottfredson, Denise C., Brook W. Kearley, Stacy S. Najaka, and Carlos M. Rocha.
“How Drug Treatment Courts Work: An Analysis of Mediators.” Journal of Research in Crime and Delinquency, 44(1) (February 2007): 3-35.
Gottfredson, Denise C., Brook W. Kearley, Stacy S. Najaka, and Carlos M. Rocha. “Long-term effects of participation in the Baltimore City drug treatment court: Results from an experimental study.” Journal of Experimental Criminology, 2(1) (January 2006): 67-98.
Gottfredson, Denise C., Brook W. Kearley, Stacy S. Najaka, and Carlos M. Rocha. “The Baltimore City Drug Treatment Court: 3-Year Self-Report Outcome Study.” Evaluation Review, 29(1) (February 2005): 42-64.
Krebs, C.P., C.H. Lindquist, W. Koetse, and P.K. Lattimore. “Assessing the long-term impact of drug court participation on recidivism with generalized estimating equations.” Drug and Alcohol Dependence, 91(1) (November 2007): 57-68.
Labriola, Melissa M. The Drug Court Model and Chronic Misdemeanants: Impact Evaluation of the Queens Misdemeanor Treatment Court. New York, NY: Center for Court Innovation, 2009.
Latimer, Jeff, Kelly Morton-Bourgon, and Jo-Anne Chrétien. A Meta-Analytic Examination of Drug Treatment Courts: Do They Reduce Recidivism? Ottawa, Ontario: Department of Justice Canada, 2006.
Listwan, Shelley Johnson, James Borowiak, and Edward J. Latessa. An Examination of Idaho’s Felony Drug Courts: Findings and Recommendations-Final Report. Kent State University and University of Cincinnati: 2008.
Logan, T. K., William H. Hoyt, Kathryn E. McCollister, Michael T. French, Carl Leukefeld, and Lisa Minton. “Economic evaluation of drug court: methodology, results, and policy implications.” Evaluation and Program Planning, 27 (2004): 381-396.
Loman, Anthony L. A Cost-Benefit Analysis of the St. Louis City Adult Felony Drug Court. Institute of Applied Research. St. Louis, MO: 2004.
Lowenkamp, Christopher T., Alexander M. Holsinger, and Edward J. Latessa. “Are Drug Courts Effective: A Meta-Analytic Review.” Journal of Community Corrections (Fall 2005): 5-28.
Mackin, Juliette R., Shannon M. Carey, and Michael W. Finigan. Harford County District Court Adult Drug Court: Outcome and Cost Evaluation. Portland, OR: NPC Research, 2008.
Mackin, Juliette R., Shannon M. Carey, and Michael W. Finigan. Prince George’s County Circuit Court Adult Drug Court: Outcome and Cost Evaluation. Portland, OR: NPC Research, 2008.
Mackin, Juliette R., Lisa M. Lucas, Callie H. Lambarth, Mark S. Waller, Shannon M. Carey, and Michael W. Finigan. Baltimore City Circuit Court Adult Drug Treatment Court and Felony Diversion Initiative: Outcome and Cost Evaluation-Final Report. Portland, OR: NPC Research, 2009.
Mackin, Juliette R., Lisa M. Lucas, Callie H. Lambarth, Mark S. Waller, Theresa Allen Herrera, Shannon M. Carey, and Michael W. Finigan. Howard County District Court Drug Treatment Court Program Outcome and Cost Evaluation. Portland, OR: NPC Research, 2010.
Mackin, Juliette R., Lisa M. Lucas, Callie H. Lambarth, Mark S. Waller, Theresa Allen Herrera, Shannon M. Carey, and Michael W. Finigan. Montgomery County Adult Drug Court Program Outcome and Cost Evaluation. Portland, OR: NPC Research, 2010.
Mackin, Juliette R., Lisa M. Lucas, Callie H. Lambarth, Mark S. Waller, Theresa Allen Herrera, Shannon M. Carey, and Michael W. Finigan. Wicomico County Circuit Court Adult Drug Treatment Court Program Outcome and Cost Evaluation. Portland, OR: NPC Research, 2009.
Mackin, Juliette R., Lisa M. Lucas, Callie H. Lambarth, Mark S. Waller, Judy M. Weller, Jennifer A. Aborn, Robert Linhares, Theresa L. Allen, Shannon M. Carey, and Michael W. Finigan. Baltimore City District Court Adult Drug Treatment Court: 10-Year Outcome and Cost Evaluation. Portland, OR: NPC Research, 2009.
Marchand, Gwen, Mark Waller, and Shannon M. Carey. Barry County Adult Drug Court Outcome and Cost Evaluation-Final Report. Portland, OR: NPC Research, 2006.
Marchand, Gwen, Mark Waller, and Shannon M. Carey.
Kalamazoo County Adult Drug Treatment Court Outcome and Cost Evaluation-Final Report. Portland, OR: NPC Research, 2006.
Marinelli-Casey, Patricia, Rachel Gonzales, Maureen Hillhouse, Alfonso Ang, Joan Zweben, Judith Cohen, Peggy Fulton Hora, and Richard A. Rawson. “Drug court treatment for methamphetamine dependence: Treatment response and posttreatment outcomes.” Journal of Substance Abuse Treatment, 34(2) (March 2008): 242-248.
Mitchell, Ojmarrh, and Adele Harrell. “Evaluation of the Breaking the Cycle Demonstration Project: Jacksonville, FL and Tacoma, WA.” Journal of Drug Issues, 36(1) (Winter 2006): 97-118.
Piper, R. K., and Cassia Spohn. Cost/Benefit Analysis of the Douglas County Drug Court. Omaha, NE: University of Nebraska at Omaha, 2004.
Rhodes, William, Ryan Kling, and Michael Shively. Suffolk County Drug Court Evaluation. Abt Associates, Inc., 2006.
Rhyne, Charlene. Clean Court Outcome Study. Portland, OR: Multnomah County Department of Community Justice, 2004.
Rossman, S., M. Rempel, J. Roman, et al. The Multi-Site Adult Drug Court Evaluation: The Impact of Drug Courts. Washington, D.C.: Urban Institute, 2011.
Shaffer, Deborah K., Kristin Bechtel, and Edward J. Latessa. Evaluation of Ohio’s Drug Courts: A Cost Benefit Analysis. Cincinnati, OH: Center for Criminal Justice Research, University of Cincinnati, 2005.
Wilson, David B., Ojmarrh Mitchell, and Doris L. MacKenzie. “A systematic review of drug court effects on recidivism.” Journal of Experimental Criminology, 2(4) (2006): 459-487.
Zarkin, Gary A., Lara J. Dunlap, Steven Belenko, and Paul A. Dynia. “A Benefit-Cost Analysis of the Kings County District Attorney’s Office Drug Treatment Alternative to Prison (DTAP) Program.” Justice Research and Policy, 7(1) (2005).
A drug court is a specialized court that targets criminal offenders who have drug addiction and dependency problems. These programs provide offenders with intensive court supervision, mandatory drug testing, substance-abuse treatment, and other social services as an alternative to adjudication or incarceration. As of June 2010, there were over 2,500 drug courts operating nationwide, of which about 1,400 target adult offenders. The Department of Justice's (DOJ) Bureau of Justice Assistance (BJA) administers the Adult Drug Court Discretionary Grant Program, which provides financial and technical assistance to develop and implement adult drug-court programs. DOJ requires grantees that receive funding to provide data that measure their performance. In response to the Fair Sentencing Act of 2010, this report assesses (1) the data DOJ collected on the performance of federally funded adult drug courts and the extent to which DOJ used these data in making grant-related decisions, and (2) what is known about the effectiveness of drug courts. GAO assessed performance data DOJ collected in fiscal year 2010 and reviewed evaluations of 32 drug-court programs and 11 cost-benefit studies issued from February 2004 through March 2011. BJA collects an array of data on adult drug-court grantees, such as drug-court completion rates, and during the course of GAO's review, began expanding its use of these performance data to inform grant-related decisions, such as allocating resources and setting program priorities. For example, during September 2011, BJA assessed a sample of adult drug-court grantees' performance across a range of variables, using a new process it calls GrantStat. BJA developed recommendations following this assessment and is determining their feasibility. In addition, in October 2011, BJA finalized revisions to the performance measures on which grantees report.
BJA's process of revising its performance measures generally adhered to key practices, such as obtaining stakeholder involvement; however, BJA could improve upon two practices as it continues to assess and revise measures in the future. First, while BJA plans to assess the reliability of the new measures after the first quarter of grantees' reporting, it has not documented, as best practices suggest, how it will determine whether the measures were successful or whether changes would be needed. Second, should future changes to the measures be warranted, BJA could improve the way it documents its decisions and incorporates feedback from stakeholders, including grantees, by recording key methods and assumptions used to guide its revision efforts. By better adhering to best practices identified by GAO and academic literature, BJA could better ensure that its future revision efforts result in successful and reliable metrics, and that the revision steps it has taken are transparent. In the evaluations that GAO reviewed, drug-court program participation was generally associated with lower recidivism. GAO's analysis of evaluations reporting recidivism data for 32 programs showed that drug-court program participants were generally less likely to be re-arrested than comparison group members drawn from criminal court, with differences in likelihood reported to be statistically significant for 18 of the programs. Cost-benefit analyses showed mixed results. Specific findings include the following: (1) Across studies showing re-arrest differences, the percentages of drug-court program participants re-arrested were lower than for comparison group members by 6 to 26 percentage points. Drug court participants who completed their program had re-arrest rates 12 to 58 percentage points below those of the comparison group.
(2) GAO's analysis of evaluations reporting relapse data for eight programs showed that drug-court program participants were less likely than comparison group members to use drugs, based on drug tests or self- reported drug use, although the difference was not always significant. (3) Of the studies assessing drug-court costs and benefits, the net benefit ranged from positive $47,852 to negative $7,108 per participant. GAO recommends that BJA document key methods used to guide future revisions of its performance measures for the adult drug-court program. DOJ concurred with GAO's recommendation.
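As the methodology noted, GAO treated comparison against a "business-as-usual" baseline as the essential criterion for reporting a net benefit. A minimal sketch of that calculation, using invented dollar figures rather than values from any study reviewed:

```python
def net_benefit_per_participant(drug_court_cost, baseline_cost, avoided_costs):
    """Net benefit per participant = benefits (e.g., avoided recidivism and
    victimization costs) minus the incremental cost of the drug court over
    traditional court processing (the business-as-usual baseline)."""
    incremental_cost = drug_court_cost - baseline_cost
    return avoided_costs - incremental_cost

# Invented illustration: a program costing $4,000 more per participant than
# traditional processing, with $10,500 in avoided downstream costs.
print(net_benefit_per_participant(9_000, 5_000, 10_500))  # prints: 6500
```

A net benefit can be negative when avoided costs fall short of the incremental program cost, which is how the range of results reported above can span positive and negative values.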
Under authority of the Inspector General Act of 1978, the Defense Criminal Investigative Service (DCIS) and the military criminal investigative organization within each of the services investigate alleged procurement fraud. The Naval Criminal Investigative Service (NCIS) has primary responsibility for investigating alleged procurement fraud affecting the Navy. Within the Department of Justice, the Federal Bureau of Investigation (FBI) investigates fraud. Each of these investigating agencies provides evidence to support the prosecuting authorities. Between January 1989 and July 1996, NCIS agents participated in over 114,000 criminal investigations. In March 1997, 113 NCIS fraud agents were involved in the investigation of 811 cases for crimes such as antitrust violations, cost mischarging, product substitution, and computer intrusion. Although NCIS agents generally investigate procurement fraud cases independently, investigative jurisdiction in 320 of the 811 cases, or about 39 percent, was shared with DCIS, the FBI, and other military or civilian criminal investigative organizations. Agents interview individuals to obtain evidence in criminal investigations. An interview is the formal questioning of an individual who has or is believed to have information relevant to an investigation. Interviews are normally conducted with willing witnesses and informants. An interrogation is a special type of interview that has the added purpose of securing an admission or confession of guilt regarding the commission of, or participation in, the crime or obtaining pertinent knowledge regarding the crime. Interrogations are normally conducted with suspects or unwilling witnesses. According to NCIS officials, most testimonial evidence in fraud cases is acquired through interviews; however, policies covering areas such as agent demeanor and the display of weapons are the same whether the format of questioning is an interview or an interrogation.
Over the years, allegations have been made regarding the use of inappropriate interview techniques by NCIS agents when questioning suspects and witnesses. In January 1995, a Department of Defense (DOD) advisory board, commissioned by the Secretary of Defense to review criminal investigations within the agency, reported that it had heard complaints of abusive interview techniques by NCIS agents. In its report, the advisory board noted that several defense attorneys suggested that subjects should be provided with additional protection against potential abuses by requiring that all interviews be videotaped. NCIS interview policies are consistent with those of both DCIS and FBI. Generally, policies of all three agencies seek to ensure that interviews of witnesses and suspects are done in a professional manner without the use of duress, force, and physical or mental abuse. More specifically, these policies prohibit agents from making promises or threats to gain cooperation; using deceit, which courts could view as overcoming an interviewee’s free will; or indiscriminately displaying weapons. A detailed comparison of the policies is in appendix I. To ensure that constitutional rights are not violated, NCIS, DCIS, and FBI policies elaborate on the rights of individuals as witnesses and suspects and provide guidance and direction to agents. For example, NCIS policies emphasize that both military and civilian suspects must be informed that they have a right to remain silent and to consult with an attorney and that any statement made may be used against them. In addition, NCIS policies address an individual’s right to have counsel present and to terminate the interview at any time. Under 10 U.S.C. 1585 and DOD Directive 5210.56, civilian officers and DOD employees may carry firearms while on assigned investigative duties. NCIS and DCIS policies authorize agents, unless otherwise prohibited, to carry firearms when conducting criminal investigations. 
FBI policies also require agents to be armed when on official duty. NCIS, DCIS, and FBI policies do not specifically prohibit carrying firearms during interviews. NCIS agents told us that they usually carry weapons during interviews because of the organization’s policy requiring that firearms be carried when conducting criminal investigations. However, NCIS policy states that agents should avoid any unnecessary reference to the fact that they are carrying a firearm. In March 1996 correspondence to all NCIS agents, NCIS Headquarters noted that references to the carrying of a firearm include not only verbal, but also physical references, including display of the firearm. DCIS and FBI policies also prohibit the careless display of firearms in public. NCIS policy states that, unless unusual conditions prevail, an agent should not be armed during an interrogation and that the presence of two agents is preferable. NCIS fraud agents told us that, unlike witness interviews, which are typically held at a home or place of employment, formal interrogations of suspects in general crime cases are usually held in a controlled environment in an NCIS field office or a custodial environment, such as a jail. Procurement fraud investigations are usually very long, the target of the investigation is known early in the investigation and has normally obtained legal counsel, and an Assistant U.S. Attorney communicates directly with the suspect’s counsel. Interrogations in procurement fraud cases are rare due to the nature of the investigation. NCIS, DCIS, and FBI policies also address agent ethics, conduct, and demeanor during interviews. For example, NCIS policy states that interviews should be conducted in a business-like manner. DCIS policy likewise notes that, when conducting an interview, the agent should maintain a professional demeanor at all times and protect the rights of persons involved in a case, as well as protect himself or herself from allegations of misconduct. 
The FBI has similar policies regarding agent conduct and demeanor during interviews. NCIS requires an investigation of allegations of agent misconduct. Between January 1989 and July 1996, the NCIS Office of Inspections investigated 304 allegations against agents. However, only 10 cases involved agent conduct during the interview process, and none involved cases of procurement fraud. Corrective actions, ranging from required counseling to job termination, were taken against NCIS agents in the six cases that were substantiated. DOD and NCIS have also established controls to protect individual rights and act as deterrents to inappropriate agent conduct during interviews. These controls include basic and continued agent training; a field office inspection program; and DOD Inspector General oversight of NCIS investigations, including alleged misconduct by agents. The judicial review inherent in the legal process also acts as a deterrent to inappropriate agent behavior. NCIS agents receive considerable training on interview techniques and appropriate interview behavior. At the basic agent course given at the Federal Law Enforcement Training Center, NCIS agents receive 18 hours of instruction concerning interviewing techniques. During their first 24 months with the agency, agents are exposed to a wide range of general crime investigations as they work with and are evaluated by more experienced agents. After the first 24-month period, selected agents are given the opportunity to specialize in procurement fraud investigations. Additional procurement fraud-specific training, both internal and external, and additional interview training is given throughout an agent’s career. The internal and external training is supplemented by correspondence issued periodically to agents on various subjects, including interviewing techniques, updates on policy or procedural changes as a result of court cases, or lessons learned from completed investigations. 
The 23 dedicated fraud agents we interviewed at NCIS field offices in Los Angeles and Washington, D.C., had been with NCIS for an average of 12 years and had worked in the fraud area for an average of 6-1/2 years. NCIS conducts regular operational inspections of headquarters and field locations. Two objectives of the inspections are to assess compliance with established policies and procedures and to evaluate anomalies that prevent or inhibit compliance. NCIS guidelines require that these inspections include interviews with all agents and supervisors and a review of all ongoing case files and correspondence. In addition, inspections may include interviews with selected Assistant U.S. Attorneys, military prosecutors, and managers and agents of other federal criminal investigative agencies with whom NCIS agents work. Within 45 days of receipt of the inspection report, the special agent-in-charge of the field location is to report on actions taken, in progress, or proposed to address all recommendations made during the inspection. Between January 1992 and December 1996, NCIS conducted 45 of these inspections. Our review of inspection reports for all 11 inspections conducted during the 3-year period ending December 1996 found no indications of problems with agent conduct regarding interviews. The Inspector General Act of 1978 gives the DOD Inspector General responsibility for oversight of investigations performed by the military criminal investigative organizations, including NCIS. During the last 4 years, the DOD Inspector General completed oversight reviews of 29 NCIS cases involving allegations of misconduct against 11 NCIS agents. The Inspector General determined that none of these allegations was substantiated. In April 1996, the Secretary of Defense requested that the DOD Inspector General look into allegations of NCIS agent misconduct during a 4-year procurement fraud investigation that ended in acquittal of the two defendants in early 1995.
At the time of our review, the inquiry into these allegations had not been completed. U.S. Attorneys and other prosecuting authorities rely on the results of NCIS investigations to hold up in the courts. Under the Fifth Amendment to the U.S. Constitution and Article 31 of the Uniform Code of Military Justice, evidence acquired in violation of the rights of the accused can be inadmissible. Defendants and their attorneys have the right to petition the courts to suppress or exclude any evidence not legally obtained. In addition, civilian witnesses and suspects can bring civil suits against agents if they believe their rights have been violated or laws have been broken. According to the Navy’s General Counsel, once a case is accepted for prosecution in federal court, the Assistant U.S. Attorney assumes responsibility for the investigation and determines the need for further investigation, the witnesses who will be interviewed, and the timetable for referring the case to the grand jury for indictment. Thus, the Assistant U.S. Attorney closely monitors the information obtained for its admissibility. We interviewed nine Assistant U.S. Attorneys, all of whom had many years of experience in working with NCIS agents. They characterized the NCIS agents as professional and could not recall any instances in which evidence was suppressed or cases were negatively affected as a result of misconduct by NCIS agents during interviews. Some of the attorneys said they had attended interviews with NCIS fraud agents and observed nothing that was out of line. NCIS, DCIS, and FBI policies permit audio or video recordings of witness or suspect interviews in significant or controversial cases. However, little support exists for routine taping of interviews, except in particular kinds of cases. In fiscal year 1996, NCIS agents videotaped 56 interviews and 23 interrogations, 51 (or 65 percent) of which involved child abuse cases.
Most of the remaining recordings involved cases of assault, homicide, and rape. NCIS fraud agents said that they audiotape very few interviews. Neither DOD nor the Department of Justice favors routinely audio- or videotaping interviews. Both departments believe that such a practice would not improve the quality of investigations or court proceedings and that the resources necessary to institute such a practice could be better used elsewhere. In its 1995 report, DOD’s advisory board recognized that routine videotaping of interviews is a topic of debate within the law enforcement community. However, the board concluded that videotaping was unnecessary in all cases, since its study found no widespread abuse of subjects’ rights, although it might be advisable under some circumstances. The Navy’s General Counsel, NCIS agents, and the Assistant U.S. Attorneys we spoke with expressed concern regarding the routine recording of interviews. They consider routine recording to be unnecessary because the courts do not require it; the practice would take time better used for more productive activities; and, given the large volume of cases, such recordings would be cost-prohibitive and add little value to the process. The Assistant U.S. Attorneys stressed that grand jury hearings and court proceedings are the most appropriate places to obtain testimonial evidence, since witnesses are under oath. NCIS agents and the Assistant U.S. Attorneys we spoke with favored the current NCIS policy of taping interviews only when a specific reason exists for doing so. The attorneys favored recording interviews of small children in child abuse cases to preclude multiple interviews and possibly the need for the children to appear in court. The agents and attorneys also favored recording witnesses who were likely to be unavailable during court proceedings and those who might be expected to change their story.
Officials told us that an NCIS pilot test of videotaping all interviews in the early 1970s did not support routine use because (1) the agents found that they were devoting disproportionate time and energy to the care of equipment rather than gathering facts; (2) the number and breadth of interviews declined, as did the overall quality of investigations; and (3) investigators’ productivity decreased due to their inability to conduct a sufficient number of in-depth interviews. NCIS had not computed the additional cost of taping all interviews. However, the Navy’s General Counsel noted that the expense of equipment, tapes, transcription, and duplication would be extremely high and could only be justified if no safeguards were already built into the legal system. As an example of the potential transcription cost that could be incurred, we were told that, in one case that was recorded, the interview lasted about 3 hours, filled 4 microcassettes, and ended up being 127 single-spaced typed pages. Information provided by the NCIS Los Angeles field office, one of the larger offices for procurement fraud cases, showed that about 7,600 interviews had been completed for the 117 cases assigned as of January 1997, which translates to an average of about 65 interviews per case. According to officials of the NCIS Washington, D.C., field office, 16 major procurement fraud cases that were essentially completed and awaiting some type of disposition had required 628 interviews—an average of about 39 interviews per case. NCIS closed 533 procurement fraud cases in fiscal year 1995 and 534 in fiscal year 1996. A 1990 study commissioned by the Department of Justice sought to determine the use of audio- and videotaping of interrogations by police and sheriff departments nationwide. 
The study concluded that videotaping was a useful tool and that one-third of the departments serving populations of 50,000 or more videotaped suspect interrogations and confessions in cases involving violent crime. The benefits claimed by the departments that taped interrogations and confessions included (1) better interrogations because agents prepared more extensively beforehand, (2) easier establishment of guilt or innocence by prosecutors, and (3) increased protection of subjects’ rights against police misconduct. Local prosecutors tended to favor videotaping, but defense attorneys had mixed feelings. NCIS has no written policy that specifically addresses whether recordings or written transcriptions of interviews should be made available on demand to the subject or witness. However, NCIS, DCIS, and FBI policies regarding witness statements and confessions do not prohibit copies from being given to the individual making the statement. Also, a 1993 NCIS memorandum said that all witness statements must be provided to the defense counsel and that quotes from a witness are to be considered witness statements. The Assistant U.S. Attorneys we spoke with and NCIS officials believe that written transcripts of audio or video recordings, especially those taken during the early stages of an investigation, would not necessarily reflect all the known facts and might be misleading and subject to inappropriate use. Currently, interview writeups are not provided to witnesses or suspects for their review, since they are considered a summary of the interview results from the agent’s perspective. According to the Navy’s General Counsel, much of the information in interview writeups is likely to be irrelevant to the case after the issues are narrowed. 
This official also said that the potential increase in the accuracy of individual interviews would not contribute as much to the total accuracy of an investigation as verifying or disproving the information provided in initial interviews. DOD and the Department of Justice reviewed a draft of this report. The Department of Justice provided informal comments, which we incorporated as appropriate. DOD concurred with our findings. We interviewed officials responsible for fraud investigations at NCIS, DCIS, and FBI headquarters to identify policies and procedures relating to interviewing suspects and witnesses. We focused on the policies and procedures concerning agent conduct and demeanor, the carrying and display of weapons during interviews, and use of audio- and videotaping. To document actual NCIS interview practices, we interviewed fraud case supervisors and agents at the two NCIS field offices responsible for the highest number of closed procurement fraud investigations in fiscal years 1995 and 1996—Los Angeles and Washington, D.C. To determine whether NCIS policies are in line with generally accepted federal law enforcement standards, we compared NCIS interview policies, especially with regard to agent conduct and demeanor and the carrying and display of weapons, with those of DCIS and FBI—two of the larger federal law enforcement agencies involved in procurement fraud investigations. We also reviewed the Federal Law Enforcement Training Center’s and NCIS internal training curriculum on interviews. In addition, we reviewed agent training records and discussed interview training with instructors at the Federal Law Enforcement Training Center and NCIS fraud supervisors and agents. To address agent adherence to guidance and identify controls in place to deter inappropriate agent conduct and demeanor during interviews, we interviewed NCIS headquarters officials and the Navy’s General Counsel. 
Through discussions and document reviews, we compared these controls with those of DCIS and FBI. We reviewed cases of alleged agent misconduct investigated internally by NCIS' Office of Inspections and externally by the DOD Inspector General. We also reviewed and documented the results of the 11 operational inspections of NCIS field offices conducted since January 1994. In addition, we reviewed summaries of all NCIS procurement fraud cases closed during fiscal years 1995 and 1996. Regarding oversight of NCIS, we interviewed DOD Inspector General officials responsible for the oversight of NCIS investigative activities and examined cases of alleged NCIS agent misconduct that received oversight by the DOD Inspector General. We also reviewed documents regarding Navy policies and interviewed the Navy's General Counsel and the Navy's Principal Deputy General Counsel. The Assistant U.S. Attorneys we spoke with provided us with insight regarding the adequacy of policies and laws dealing with subject and witness interviews and the performance of NCIS agent interviewing practices, especially with regard to impact on the prosecution of procurement fraud cases. We discussed with NCIS and DCIS managers, NCIS agents, and Assistant U.S. Attorneys the use of audio and video equipment to tape interviews and the desirability and feasibility of providing the transcripts to witnesses and subjects. We obtained the official positions of the Department of Justice and NCIS regarding these issues. We identified two studies that addressed using audio- and videotaping for recording interviews and discussed these issues with the studies' authors. We also discussed these issues with homicide detectives from one city police department that uses video equipment in interrogations. In addition, we discussed with appropriate DOD and Department of Justice officials any legal and practical ramifications of interviews being taped and transcriptions being provided to witnesses and suspects. 
We performed our work from July 1996 to March 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees; the Secretaries of Defense and the Navy; the General Counsel of the Navy; the Director of the Naval Criminal Investigative Service; and the Attorney General. Copies will also be made available to others on request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are William E. Beusse, Hugh E. Brady, Kenneth Feng, Mark Speight, and Harry Taylor.

The following summarizes NCIS, DCIS, and FBI policies on interviewing suspects and witnesses.

Carrying of firearms:
- NCIS: Agents are required to carry firearms while on assigned investigative duties.
- DCIS: Agents must carry firearms when conducting criminal investigations, except where prohibited or when carrying a firearm is inappropriate.
- FBI: Agents must be armed when on official duty, unless good judgment dictates otherwise. They are authorized to be armed anytime.

Firearms during interviews and interrogations:
- NCIS: Any unnecessary reference to the fact that an agent has a firearm on his or her person should be avoided.
- DCIS: An agent should not be armed during an interrogation unless unusual conditions prevail. It is better to have two agents present than to be armed.
- FBI: Normally, agents may be armed during interviews because the policy requiring them to be armed while on investigative duties prevails.

Display of firearms:
- NCIS: Area is not specifically addressed, but unnecessary display of firearms, which may heighten the sensitivity of non-law enforcement personnel, is prohibited. In addition, careless display of firearms in public is prohibited.
- DCIS: Area is not specifically addressed, but unnecessary display of weapons in public is prohibited.
- FBI: Good judgment must be exercised in all situations.

Rights warnings:
- NCIS: Military suspects must not be interrogated without having first been given the prescribed warning. For civilian suspects, Miranda warnings are applicable in custodial situations, and informing individuals of their right to terminate the interview at any time is required.
- DCIS: In addition to the obligation to give the suspect the required warnings, agents are required to be familiar with civil and criminal laws and the Uniform Code of Military Justice so they can recognize an incriminating statement.
- FBI: In addition to the obligation to give the suspect the required warnings, the policies state that the suspect must be advised of the names and official identities of the interviewing agents and the nature of the inquiry. It is desirable that the suspect's acknowledgment of the warnings be obtained in writing.

Promises, threats, and inducements:
- NCIS: Agents do not have the authority to make any promises or suggestions of leniency or more severe action to induce a suspect to make a statement.
- DCIS: Agents must refrain from making or implying promises of benefits or rewards or threats of punishments to unlawfully influence the suspect.
- FBI: No attempt is to be made to obtain a statement by force, threats, or promises. Whether a suspect will cooperate is left entirely to the individual. The policies take into account that the court will decide whether the interrogation practices overpowered the accused's ability of self-determination.

Trickery and deception:
- NCIS: Although tricks or other tactics may not be used to prevent a suspect from exercising constitutional rights, once a suspect makes a valid waiver of rights, deceptions are allowable as long as they are not used to obtain an untrue confession.
- DCIS: Playing one suspect against another is an allowable interrogation technique. However, agents must ensure that information developed conforms to rules regarding admissibility of evidence and that the rights of persons involved in a case are protected.
- FBI: The presence of trickery, ruse, or deception will not necessarily make a statement involuntary. The courts consider a number of factors in making this determination, including whether the statement resulted from a free and unconstrained choice or from interrogation practices that overpowered the individual's ability of self-determination.

Conduct and demeanor:
- NCIS: Interrogations should be conducted in a business-like and humane manner. Legal restrictions are based on the premise that a person will make false statements to stop any physical or mental discomfort.
- DCIS: Agents should be friendly and business-like and maintain a professional demeanor at all times. Agents should also be receptive and sympathetic.
- FBI: Policies prohibit any tactics that may be considered coercive by courts, stressing that tactics that overpower a suspect's ability of self-determination should not be used.

Audio- and videotaping of interviews:
- NCIS: Recommended for interviews considered to be potentially significant or controversial but only with the knowledge and concurrence of the interviewee.
- DCIS: Recommended for compelling situations with approval from the interviewee, the head of the DCIS field office, and the prosecutor.
- FBI: Authorized on a limited, selective basis with approval of the special agent-in-charge and consent of the interviewee. In addition, recording equipment must be in plain view of the interviewee, tapes must not be edited or altered, and the chain of custody must be ensured.

Providing copies of statements or confessions:
- NCIS: No policy.
- DCIS: When the individual making a statement asks for a copy, one will be provided. However, prior approval for doing so must be obtained from the cognizant U.S. Attorney or military Staff Judge Advocate, as appropriate.
- FBI: Agents should not volunteer to furnish a copy of a confession or signed or unsigned statement to the subjects or their attorneys. However, if the confession or statement is requested and certain conditions are met, it should be provided.

Providing copies of recordings or transcriptions:
- NCIS, DCIS, and FBI: No policy. A determination is made on a case-by-case basis by the U.S. Attorney.
Pursuant to a legislative requirement, GAO reviewed the Naval Criminal Investigative Service's (NCIS) policies and practices regarding agent interviews of suspects and witnesses during procurement fraud investigations, focusing on: (1) NCIS's policies on interviewing, including agent conduct and demeanor and the carrying and display of weapons; (2) controls to deter inappropriate conduct by agents; and (3) the desirability and feasibility of audio- or videotaping interviews and making the recording or transcription available to the person interviewed. GAO noted that: (1) according to federal law enforcement experts, NCIS interview policies are in accordance with generally accepted federal law enforcement standards and applicable laws; (2) specifically, NCIS interview policies prohibit the indiscriminate display of weapons or the use of threats, promises, inducements, or physical or mental abuse by agents attempting to influence an individual during interviews; (3) NCIS has established controls to deter, detect, and deal with agent misconduct; (4) NCIS agents are trained in interview policies at their initial training at the Federal Law Enforcement Training Center and through in-house and contractor training; (5) other controls include periodic inspections of NCIS field offices, internal investigations of alleged agent misconduct, oversight of cases and allegations of agent misconduct by the Department of Defense (DOD) Inspector General, and the involvement of the U.S. 
Attorney's offices in grand jury investigations and prosecutions; (6) furthermore, judicial review of evidence presented also acts as a deterrent to inappropriate agent conduct since inappropriate or illegal behavior may result in the evidence obtained not being admissible in court; (7) the DOD Inspector General and NCIS could identify only six cases since January 1989 in which misconduct was substantiated, and none of those cases involved procurement fraud investigations; (8) NCIS policies do not prohibit audio- or videotaping of interviews or distributing the written or taped results to the interviewee; (9) NCIS does not routinely tape interviews; (10) officials from NCIS, the Defense Criminal Investigative Service, the Federal Bureau of Investigation, and selected Assistant U.S. Attorneys did not support the idea of routinely taping interviews; (11) NCIS considers routine taping of interviews to be unjustified, given the equipment and transcription costs and the large volume of interviews associated with procurement fraud investigations; (12) DOD and Department of Justice officials noted that routine audio- or videotaping would not improve the quality of the investigation or court proceedings; and (13) the DOD advisory board agreed that the routine taping of interviews was unnecessary, given the lack of evidence of widespread abuse of subjects' rights by agents from military criminal investigative organizations.
Congress passed the Defense Acquisition Workforce Improvement Act (DAWIA) in 1990 to address issues related to workforce quality, to formally establish the acquisition workforce, and to increase its professionalism by directing DOD to create certification requirements for the acquisition workforce. In response, DOD defined its acquisition workforce, which evolved into the 16 career fields and paths that currently exist. According to DAU officials, this definition is still evolving. For each of the career fields and paths that DOD established, there are minimum requirements for education, experience, and training under DAWIA. The DAWIA workforce numbered 133,103 at the end of fiscal year 2009 and 150,566 at the end of March 2011. In 2010, DOD developed a Defense Acquisition Workforce Improvement Strategy to establish a comprehensive acquisition workforce analysis and decision-making capability; this effort is still ongoing. The workforce analysis is focused on the DAWIA workforce and does not cover non-DAWIA personnel with acquisition-related responsibilities, despite recognition of the important roles they play in acquiring services in the federal government. The number of personnel and roles on services acquisitions can vary greatly. With the exception of DAWIA-certified contracting officers, who administer services acquisitions and are involved throughout the life cycle of a contract, the other professionals involved may fall outside of DAWIA. A model of the services acquisition process is shown in figure 1 below, along with the roles of personnel who may be involved in the various stages throughout the life cycle of services acquisitions. [Figure 1: The services acquisition life cycle, including the stage to manage and assess contractor performance, and personnel roles such as requirements official(s), source selection board member, contracting officer's representative(s), and technical assistant(s).] GAO, Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes, GAO-07-20 (Washington, D.C.: Nov. 9, 2006). 
In that 2006 report, we found that the service acquisitions we reviewed included complex investments that did not have well-defined requirements, a complete set of measurable performance standards, or both. The Office of Management and Budget's Office of Federal Procurement Policy (OFPP) issued guidance in 2005 that built on previous efforts to improve the development of the acquisition workforce by defining the acquisition workforce more broadly than DOD's definition under DAWIA. The OFPP policy applies to all executive agencies, except those subject to DAWIA. OFPP's definition includes individuals who perform various acquisition functions to support accomplishing an agency's mission. At a minimum, the acquisition workforce of a civilian agency includes contracting specialists, contracting officers regardless of general schedule series, contracting officers' representatives or equivalent positions, program and project managers, positions in the purchasing series, and any significant acquisition positions identified by the agency. Members of the civilian acquisition workforce may also include:
- individuals substantially involved in defining, determining, and managing requirements;
- individuals involved in acquisition planning and strategy;
- individuals who participate in the contracting process (including soliciting, evaluating, and awarding of contracts);
- individuals who manage the process after the contract is awarded (including testing and evaluating; managing, monitoring, and evaluating performance on the contract; auditing; and administering the contract);
- individuals involved in property management;
- individuals who support the business processes of the above listed activities (e.g., General Counsel, finance, or other subject matter experts); and
- individuals who directly manage those involved in any of the above activities. 
Non-DAWIA personnel are assigned responsibilities in critical phases of the acquisition process, but no DOD organization has systematically identified these personnel and the acquisition-related competencies they require, nor has any organization been assigned responsibility for overseeing this group, as has been done for the personnel who are members of the DAWIA workforce. In our sample of 29 service contracts, we determined that the number of non-DAWIA personnel with acquisition-related responsibilities was substantial. Identifying non-DAWIA personnel with acquisition-related responsibilities is challenging, but DOD is working to identify a portion of this population (requirements personnel for major weapon systems) and provide specific training. DOD identified 218 of the 430 personnel (51 percent) reported to us as involved in the 29 contracts in our sample as outside the DAWIA workforce. While the absolute number is large, their acquisition-related responsibilities are generally part-time, according to DOD officials. Nonetheless, their roles and responsibilities touched all three phases of the services acquisition life cycle and included personnel with such titles as program managers, CORs, requirements officials, auditors, and legal advisors. DAU has acknowledged that non-DAWIA personnel with acquisition-related responsibilities may also include technical experts, financial managers, and others whose duties may affect or be affected by the acquisition process. According to senior DOD officials, DOD policy does not require tracking or training for these non-DAWIA personnel with acquisition-related responsibilities, but they are assigned responsibilities in critical phases of the acquisition process: acquisition planning, contract solicitation and award, and contract administration. 
Decisions about the number and type of personnel involved in each individual contract are made at the discretion of the organization responsible for the contract and may vary widely from contract to contract depending on the type of acquisition and the service or command. For 23 of the 29 contracts we reviewed, DOD officials identified non-DAWIA personnel with acquisition-related responsibilities working on the contract. The number of non-DAWIA personnel with acquisition-related responsibilities reported in our sample ranged from 61 on one Navy contract to none on two different DLA contracts. For two similar Air Force contracts involving aircraft maintenance, one reported 21 non-DAWIA personnel with acquisition-related responsibilities involved in the contract, and the other reported 3. According to an Air Force contracting officer, the number of CORs associated with a contract can vary depending on the experience and skills needed to monitor the work being performed by the contractor. Additionally, variation among personnel identified on the contracts is also a result of personnel turnover, which may impact the overall number of non-DAWIA personnel with acquisition-related responsibilities identified on a particular contract. Based on our sample of 29 contracts, we identified 12 categories of personnel that have acquisition-related roles and responsibilities but are not part of the DAWIA workforce. Figure 2 shows the number of non-DAWIA personnel with acquisition-related responsibilities in each of the 12 categories we identified based on DOD data, titles, and policy. See appendix II for a description of the 12 categories and acquisition-related responsibilities associated with them. In some cases, DOD reported personnel as serving in more than one role on the contract. For example, a COR was also reported as serving as a program manager (the principal technical expert usually most familiar with the requirements). 
In another example, a multifunctional team member (who plans and manages services acquisitions throughout the life of the requirement) was reported as also being the functional commander, the senior official of a requirements organization. Figure 2 below counts each individual in only one role: where a person served in multiple roles, we counted the specific role identified in DOD's guidebook, such as COR, rather than a role within a group, such as member of a multifunctional team. In addition, DOD identified personnel with acquisition-related responsibilities who had titles such as technical assistants, assistant CORs, and task managers who were not designated as the COR. This group, along with CORs, represented the vast majority of personnel on our 29 contracts. We were able to collect data on the non-DAWIA population from individual commands and contracting organizations on a contract-by-contract basis, but no organization within DOD is responsible for identifying, developing, and managing non-DAWIA personnel with acquisition-related responsibilities, even though these personnel represented over half the people reported as working on the service contracts we reviewed. DOD is not required to identify non-DAWIA personnel with acquisition-related responsibilities, and senior officials stated that DOD has not established criteria or a process to do so across the department or among organizations in DOD that have a role in helping to manage issues focused on services acquisitions. For the DAWIA population, however, organizations within DOD, including DAU, the Directors of Acquisition Career Management (DACMs), and the Functional Integrated Process Teams (FIPTs), have integrated tracking responsibilities that allow DOD to strategically manage this population. DAU officials explained that in keeping with their mission and priority, they focus their resources on DAWIA professionals. 
According to DOD officials, the mission of each military service's DACM is to track personnel covered under DAWIA and identify demand for training. FIPTs were established for 14 different acquisition career fields for the DAWIA workforce. The FIPT lead advises DOD on DAWIA career development policies and procedures, including education, training, and experience requirements for civilian and military personnel in the acquisition workforce. FIPT leads also, in conjunction with the DACMs, identify demand for training. Non-DAWIA personnel often perform acquisition duties part-time; in one example from our sample, an official served as the medical monitor to review the research proposal to help ensure the safety of the study participants, but this official's primary duty in the Air Force was as an active duty flight surgeon. DOD officials stated that acquisition personnel may serve in both DAWIA and non-DAWIA positions at different points in their DOD careers, further complicating attempts to identify or track personnel. In the 29 contracts we reviewed, we found several examples of personnel serving in the same role with the same responsibilities (such as requirements definition, program management, and contractor oversight), some of whom were DAWIA personnel, while others were non-DAWIA personnel with acquisition-related responsibilities. Figure 3 depicts our sample of DOD's acquisition workforce and the roles that overlap between the DAWIA workforce and non-DAWIA personnel with acquisition-related responsibilities. A group of organizations within DOD led by DAU officials has begun identifying non-DAWIA personnel with acquisition-related responsibilities for developing requirements in major defense acquisition programs and is requiring specific training for them to perform their role. DOD's focus is on personnel responsible for requirements for major weapon systems, and DOD has not undertaken a similar effort for all non-DAWIA personnel with roles and responsibilities on services acquisitions. 
As a part of the effort to identify the major weapon system personnel, DAU officials said DOD identified criteria to define the population, including non-DAWIA personnel, who would receive requirements management certification and training. We found that most non-DAWIA personnel with acquisition-related responsibilities on our 29 contracts received some acquisition training, even though DOD does not require or track acquisition training for 11 of the 12 roles of non-DAWIA personnel; the exception is CORs. The required training was limited and varied, and the current training and education programs for acquisitions do not address services acquisitions. This differs from the treatment of DAWIA-certified personnel, who have minimum requirements for education, experience, and training. DAU data suggest that demand for training has increased, but DOD has limited metrics to gauge the current size and future demand for training of the population in the long term or the effectiveness of the training currently available. In the short term, however, DOD has taken interim steps to require training and provide resources for some non-DAWIA personnel with acquisition-related responsibilities. (See the John Warner National Defense Authorization Act for Fiscal Year 2007, Pub. L. No. 109-364, § 801 (2006). We did not assess the quality or effectiveness of any training as a component of our work.) Included in the 218 non-DAWIA personnel with acquisition-related responsibilities were 48 personnel who reported that they did not receive any acquisition training, such as:
- 7 officials who were responsible for developing requirements;
- 3 functional commanders, the senior requirements officials of an organization, such as the commanding officer for a missile range;
- 1 COR; and
- 3 of 10 program managers.
See figure 4 below for the extent to which non-DAWIA personnel with acquisition-related responsibilities on the 29 contracts we reviewed took training. 
(For the majority of contracts in our sample, a functional commander was not included in the list of non-DAWIA personnel with acquisition-related responsibilities reported to us by DOD. In some instances, more than one program manager was reported per contract; in others, no program manager was reported on a contract. There is no requirement for a program manager on services contracts.) Based on the number of courses completed, DAU faces growing demand for training by non-DAWIA personnel with acquisition-related responsibilities, despite few requirements for training. For 11 selected courses, many of which are recommended by DOD to improve requirements development, non-DAWIA training participation increased from fiscal year 2008 to 2010, as shown in table 1 below. According to DAU officials, DAU does not collect information on why personnel are seeking training or what roles and responsibilities they have on contracts to determine whether the individuals are working on major weapon systems, services acquisitions, or other types of contracts. According to DAU records, two of the courses listed above (the Overview of Acquisition Ethics and the COR with a Mission Focus) accounted for over 75 percent of the increase in the number of Web-based acquisition-related training courses taken by non-DAWIA personnel from fiscal years 2008 through 2010. The number of non-DAWIA personnel seeking acquisition training through DAU is expected to increase with the introduction of a contracting officer's representative course in June 2009 and the Web-based equivalent in August 2010, which is listed above in table 1. Beyond the insight DAU course data provide, DOD has limited information on the demand for and the effectiveness of acquisition training for non-DAWIA personnel with acquisition-related responsibilities. 
First, tracking of acquisition training for non-DAWIA personnel with acquisition-related responsibilities, where it is done at all, is typically limited to COR training and to auditors within the organizations to which these personnel are assigned, and it is decentralized across the department. Second, DAU training participants’ course evaluations through mid-July 2011 rated COR courses positively on job impact and learning effectiveness, but according to a DAU official, these evaluations are completed before participants begin their COR duties, and DAU does not currently request feedback on the value of a course after training participants have begun their acquisition duties as CORs. DAU officials acknowledged that DAU does not have information to assess the effectiveness of COR training. They explained that COR training is intended as an introduction to acquisition-related duties and that, because DAU’s mission is to focus on DAWIA professionals and its resources are limited, it does not collect more extensive feedback on COR courses for personnel who are unlikely to remain in the acquisition community because they are often involved in acquisitions as a secondary rather than a primary duty. DOD has taken short-term actions to help non-DAWIA personnel with acquisition-related responsibilities succeed in the role of COR for services acquisitions. However, DOD has not identified a plan to develop the skills or competencies necessary for non-DAWIA personnel with acquisition-related responsibilities in other roles. In 2006, 2008, and 2010, DOD recognized the importance of some non-DAWIA personnel with acquisition-related responsibilities in several memoranda requiring that CORs be properly trained and appointed before contract performance begins on a services acquisition, to address weaknesses in this key function that the DODIG and we identified.
In 2010, DOD developed a COR certification standard that defines minimum COR competencies, experience, and training, based on the complexity of the requirement and contract performance risk. A DOD Instruction, currently in draft form and undergoing review, will give more specificity to the COR certification policy but has not been formally issued and published. Once this training is implemented across DOD, the standard may require training for only approximately one-fourth of the personnel identified as CORs for the contracts we examined. DOD and DAU officials stated that the training currently available through DAU is geared toward weapon systems acquisitions and that they do not have a curriculum developed for services acquisitions or for non-DAWIA personnel with acquisition-related responsibilities outside of CORs. Recently, DOD and DAU have undertaken initiatives to address training for requirements officials. For example, in 2009 DAU developed optional Service Acquisition Workshops to assist acquisition teams and guide them through the requirements writing process. According to DAU officials, key participants in the workshop should include the program/project manager, the contracting officer, and CORs. Both DAWIA and non-DAWIA personnel with acquisition-related responsibilities from a specific services acquisition team participate in the workshop, writing the requirements together and building consensus on their vision and goals for the acquisition. To bridge the gaps in skills and abilities of non-DAWIA personnel with acquisition-related responsibilities who do not have acquisition experience, several organizations across DOD have created a customer liaison capability to assist the requiring activity on services acquisitions, in the absence of a program office, by facilitating the interaction between the contracting organization and the requiring organization.
For example, a Marine Corps contracting office official said the office created a customer liaison group of four DAWIA personnel to assist non-DAWIA personnel with the acquisition process, including writing requirements. An Army command used the experience and skills of a former federal contracting officer to provide technical assistance to personnel developing requirements for services acquisitions, usually non-DAWIA personnel with acquisition-related responsibilities. Within the Army Corps of Engineers, a project manager may be assigned to a contract to facilitate the relationship between the requirements and contracting organizations. DLA officials said that some organizations within DLA have an acquisition assistance office to help prepare the requirements package. DOD has two other ongoing initiatives to track and train a portion of the non-DAWIA personnel with acquisition-related responsibilities. First, DOD is developing a system to identify and manage CORs that will provide a repository for COR training certificates and monthly contractor surveillance reports. It will also give contracting personnel and requiring activities the means to track and manage COR assignments. The system is anticipated to provide DOD with insight into the size of the active and inactive COR population within DOD and to be fully implemented during fiscal year 2012. Second, within DOD’s non-DAWIA auditing community, DOD officials said that the DODIG has led a working group, including DAU, to find spaces in a specific DAU course through fiscal year 2011 so that non-DAWIA auditors can get the equivalent training they need for certification, based on their current curriculum. In the long term, the working group is also meeting to establish an auditor-specific curriculum at DAU so that non-DAWIA auditors can receive acquisition training that addresses their specific needs.
However, according to the DODIG lead for the working group, long-term plans and funding to support this training initiative for non-DAWIA auditors are uncertain. DOD has made some progress in implementing the outstanding recommendations from the Panel on Contracting Integrity, our previous reports, and other reports that raised issues related to training for non-DAWIA personnel. For the Panel on Contracting Integrity (Panel), the recommendations that were relevant to non-DAWIA personnel with acquisition-related responsibilities focused on managing, training, and certifying CORs. Based on the Panel’s 2007, 2008, 2009, and 2010 reports with recommendations related to CORs and follow-up action by the Panel, we determined that DOD has fully implemented 3, partially implemented 7, and not implemented 1 of the 11 recommendations. Specifically, in response to the Panel’s recommendation that DOD develop a certification standard for CORs, DOD developed a certification program listing available training resources that meet the standard and defining a reasonable time-phased implementation plan for the standard. One Panel recommendation that remains open is to develop an implementation plan for the COR certification process. While DOD has issued a policy memorandum for the COR certification process, it has not yet issued the DOD Instruction that will implement the new certification standard policy. See appendix IV for more detailed information on the Panel recommendations and their implementation status. Our previous work has focused on the roles, responsibilities, and training of the professional DAWIA acquisition workforce and how DOD manages services acquisitions. This is our first report providing insight on non-DAWIA personnel with acquisition-related responsibilities on services acquisitions.
Recommendations from previous reports that are related to our population have focused primarily on the role of CORs, which we demonstrate are only a portion of a larger group of non-DAWIA personnel with acquisition-related responsibilities. DOD has also made progress addressing recommendations we made from 2005 to 2009. DOD concurred with the four relevant recommendations, has fully implemented three, and has taken action on the fourth.
 In December 2005, DOD issued a memorandum to address our recommendation that surveillance personnel—CORs—be properly trained and appointed before contract award.
 In December 2006, DOD issued a policy memorandum requiring DOD components to ensure that the contribution of CORs in assisting in the monitoring or administration of contracts is addressed in their performance reviews, to address our recommendation that DOD develop practices to help ensure accountability for personnel carrying out surveillance responsibilities.
 In October 2006, DOD issued an Acquisition Services Policy to address our recommendation that DOD’s service contract review process and associated data collection provide management more visibility over contract surveillance.
 Our November 2009 recommendation that the military departments review their procedures to ensure that properly trained surveillance personnel have been assigned prior to and throughout a contract’s period of performance has not been implemented. Ongoing efforts to develop a certification system for all DOD CORs should address this recommendation.
See appendix V for a list of our recommendations and additional details on the status of implementation.
Finally, the House Armed Services Committee and the Defense Science Board recently issued reports including recommendations related to training for those who are responsible for requirements development for services acquisitions and for non-DAWIA personnel with acquisition-related responsibilities, but the recommendations were made too recently for us to assess the status of implementation. For example, in March 2010, the House Armed Services Committee Panel on Defense Acquisition Reform reported that DOD was not ensuring that personnel with responsibilities for acquisition outcomes acquire the skills, training, and experience needed to properly write, award, and oversee performance of services acquisitions, which can pose a different set of challenges than those associated with the procurement of goods. The report recommended that the department reform the requirements process and establish a clear career path for civilians in the defense acquisition system. In March 2011, a Defense Science Board Task Force report advised that DOD should systematically improve training for personnel involved in services acquisitions and oversight. Non-DAWIA personnel carry responsibilities that are essential to getting good outcomes from DOD’s services acquisitions. They are involved in defining requirements, shaping the acquisition decision-making process, and overseeing services acquisitions. While identifying these individuals is challenging, without a clear understanding of this population, DOD does not have sufficient oversight or assurance that the right people with the right skills are involved in the critical phases of services acquisitions to ensure successful outcomes. Challenges in identifying non-DAWIA personnel with acquisition-related responsibilities exist, in part, because the personnel are dispersed throughout the department, come from a variety of career fields, and are often involved in acquisitions as a secondary duty.
DOD’s efforts to identify and provide acquisition training to CORs, a portion of non-DAWIA personnel with acquisition-related responsibilities, are a good foundation for building a strategic and sustainable approach to developing the skills and competencies of other non-DAWIA personnel with acquisition-related responsibilities. This diverse population, because of its differences from DAWIA personnel, may require different ways to prepare its members for their unique roles and responsibilities in supporting the services acquisition process. Yet DOD does not have a deliberate approach to identifying non-DAWIA personnel with acquisition-related responsibilities or ensuring they have the skill sets, resources, and tools they need. Apart from the new training for one of the non-DAWIA roles—the CORs—training for non-DAWIA personnel is limited. DOD does not have a way of knowing whether the training these personnel take is targeted to critical skills and competencies related to carrying out their acquisition responsibilities. Without a departmentwide focus, and without an organization within DOD designated responsibility for the population of non-DAWIA personnel with acquisition-related responsibilities (as the professional DAWIA workforce has to provide leadership on training, identification, and development of personnel), it is unclear whether these personnel have the training they need to help ensure that DOD obtains its desired acquisition outcomes. In the area of weapon systems, DOD has taken steps to ensure that non-DAWIA personnel are getting needed acquisition training. Specifically, DOD has identified some requirements positions involved in major weapon systems that should receive additional training and has built a curriculum designed for this group to obtain certification. This is one of perhaps several approaches to managing an amorphous and transient population within DOD.
To help ensure that training and development efforts for non-DAWIA personnel with acquisition-related responsibilities are deliberate and contribute to successful services acquisitions—meaning DOD buys the right thing, the right way, while getting the desired outcomes—we recommend the Secretary of Defense take the following three actions:
 establish criteria and a time frame for identifying non-DAWIA personnel with acquisition-related responsibilities, including requirements officials;
 assess what critical skills non-DAWIA personnel with acquisition-related responsibilities might require to perform their role in the acquisition process and improve acquisition outcomes; and
 designate an organization that has the responsibility to track DOD’s progress in identifying, developing, and overseeing non-DAWIA personnel with acquisition-related responsibilities to help ensure they have the skills necessary to perform their acquisition function.
We provided a draft of this report to DOD for comment. In written comments, DOD agreed with our recommendations. DOD provided technical comments, which we incorporated into the report as appropriate. DOD’s comments are reprinted in appendix VI. We are sending copies of this report to interested congressional committees, the Secretary of Defense, and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are listed in appendix VII.
The National Defense Authorization Act for Fiscal Year 2010 (Pub. L. No. 111-84, § 1108(b)(2) (2009)) included a provision requiring that GAO report on the Department of Defense’s (DOD) training for its acquisition and audit workforce. Our October 2010 report (GAO, Defense Acquisition Workforce: DOD’s Training Program Demonstrates Many Attributes of Effectiveness, but Improvement Is Needed, GAO-11-22 (Washington, D.C.: Oct. 28, 2010)) addressed training provided by the Defense Acquisition University (DAU) to the DAWIA workforce. In addition to that report, we agreed to review training provided to non-DAWIA personnel with acquisition-related responsibilities in a noncontingency environment. To accomplish this, we assessed the extent to which (1) DOD knows the composition of the population of non-DAWIA personnel with acquisition-related responsibilities, (2) non-DAWIA personnel with acquisition-related responsibilities are taking acquisition training, and (3) selected recommendations related to non-DAWIA personnel with acquisition-related responsibilities from previous reviews have been implemented. We initially selected 33 contracts during the design phase of our work to understand the differences between how goods and services were acquired by DOD. Once we narrowed our scope to focus on service contracts in a noncontingency environment, we removed 3 contracts for goods and a fourth contract that was contingency-related. We also interviewed officials and personnel involved in the contracts to verify the specific service being provided and to obtain details not captured in FPDS-NG, such as where work was being conducted. Through these steps, we found FPDS-NG to be reliable for the purposes of this report. For this sample of contracts, we asked DOD contracting and program officials associated with each contract to identify the personnel with roles and responsibilities related to that acquisition, including pre- and postaward responsibilities.
We relied on DOD officials to specify whether the personnel involved in each of the selected contracts were DAWIA-certified, and thus members of the DAWIA workforce, or were non-DAWIA personnel with acquisition-related responsibilities. To gather more specific information from each organization responsible for the contracts in our selected sample, we also interviewed DOD officials, DAWIA contracting personnel, requirements officials, and other personnel who performed specific roles on the contracts from each of the services and DLA. We obtained information about the involvement of non-DAWIA personnel with acquisition-related responsibilities in the selected contracts, the organizations’ training policies when the contracts were awarded, and how the individual organizations each tracked training for CORs and other non-DAWIA personnel with acquisition-related responsibilities. To help determine the roles and responsibilities of acquisition personnel, we reviewed guidance to executive branch agencies that defines the acquisition workforce, including those that may be outside of DOD’s DAWIA definition. However, we did not review executive agencies’ efforts to identify, develop, and train their acquisition workforces. To understand DOD’s ability to define, identify, and track non-DAWIA personnel with acquisition-related responsibilities, we interviewed officials from the Defense Acquisition University (DAU), Defense Procurement Acquisition Policy (DPAP), each of the services’ Director of Acquisition Career Management (DACM) offices, the Air Force Program Executive Office for Combat and Mission Support (AFPEO/CM), the Deputy Assistant Secretary of the Army for Services, the Director for Services Acquisition for the Navy, the Functional Integrated Process Team for Program Management, the Department of Defense Inspector General (DODIG), the Naval Audit Service, the Army Audit Agency, and the Air Force Audit Agency.
To identify the extent to which non-DAWIA personnel with acquisition-related responsibilities are taking acquisition training, we asked each service and DLA to report any acquisition training that non-DAWIA personnel with acquisition-related responsibilities associated with our sample had taken, and we asked each respective audit agency noted above for the specific training its auditors had taken. We also asked each audit agency for aggregate counts of the number of their non-DAWIA auditors who worked on contracting and acquisition and who had received DAWIA-equivalent certification. In order to confirm training taken by the non-DAWIA personnel with acquisition-related responsibilities for the contracts in our sample, we requested DAU training records, training certificates, and locally maintained training records. To identify the demand for DAU acquisition training by non-DAWIA personnel over time, we requested data from DAU on 15 classroom and Web-based courses for fiscal years 2008 through 2010 that were identified in DOD policy documents as training for requirements officials or CORs. We made an effort to include only designated non-DAWIA personnel to establish the amount of training taken, and we additionally calculated the number of unique individuals by removing duplicate names to provide a more accurate measure of the demand for training. However, we were not able to determine whether individuals worked on major weapon systems, services acquisitions, or another type of contract, or did not work in acquisition at all. To identify the individual courses that non-DAWIA personnel with acquisition-related responsibilities took and the sources for training on the 29 contracts, we compiled the training identified by DOD officials and cross-referenced the individuals listed with DAU’s training database.
However, these data sources did not provide enough information for us to completely verify the training taken by individuals identified as non-DAWIA personnel with acquisition-related responsibilities. We did not assess the content or the effectiveness of the required or available training. Despite some of the limitations noted above, we found the data to be reliable for the purposes of this report. To understand DOD’s ability to strategically plan for the training or development of non-DAWIA personnel with acquisition-related responsibilities, we interviewed officials from DAU, DPAP, and the services’ DACMs. We also interviewed contracting and requirements officials with the Air Force, Army, Navy, and DLA to obtain acquisition training information and evidence of completed training. We reviewed relevant legislation, acquisition policy, and service- and agency-specific policies and guidance, such as the National Defense Authorization Act for Fiscal Year 2010, the Federal Acquisition Regulation, and the Defense Federal Acquisition Regulation Supplement, in order to understand any training requirements for non-DAWIA personnel with acquisition-related responsibilities. To identify the extent to which recommendations addressing non-DAWIA personnel with acquisition-related responsibilities from previous reviews have been implemented, such as those of the Panel on Contracting Integrity (Panel), we reviewed the 2007-2010 annual updates prepared by DOD to address the Panel’s recommendations. Specifically, we compared the recommended actions from the previous reports with each subsequent report and conducted a comparative analysis of the Panel’s status for each recommendation against our own assessment. We provided our analysis to DPAP officials to review and provide additional information, which we considered in making our final determination.
We also reviewed past GAO reports that made recommendations on non-DAWIA personnel with acquisition-related responsibilities from 2005-2009 and provided an update on the status of DOD’s implementation or current work to implement past recommendations. We also reviewed more recently issued reports by the House Armed Services Committee Panel and the Defense Science Board that addressed issues impacting non-DAWIA personnel with acquisition-related responsibilities and services acquisitions. We conducted this performance audit from June 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Based on the guidance and policies issued by DOD and the Air Force, and on information collected on personnel reported in our sample, DOD’s non-DAWIA personnel with acquisition-related responsibilities are a substantial group of DOD civilians and military personnel who perform acquisition duties in their current positions or assignments and are not members of the DAWIA workforce. Following is a list of categories of roles, titles, and a description of the acquisition responsibilities for non-DAWIA personnel with acquisition-related responsibilities.
1. Alternate/assistant contracting officer’s representative (ACOR)/technical assistant/task manager: Serves as support to the COR in the administration of the contract but does not have the authority to provide any technical direction or clarification to the contractor.
2. Contracting officer’s representative (COR): Serves as the onsite technical subject matter expert assessing contractor performance against contract performance standards and recording and reporting this information, including inspecting and accepting supplies and services. The COR represents and is nominated by the requiring organization and designated by the contracting officer. The personnel responsible for developing the requirements may or may not be assigned as the COR on services acquisitions. Regardless, DOD guidance states that the COR should be identified early in the acquisition cycle and included in preaward activities, such as requirements definition/acquisition planning and contract formation processes. In our selected contracts, CORs were sometimes DAWIA personnel, but the majority of them were not. In 17 of 29 contracts, there was more than one COR assigned.
3. Requirements official: Represents an organization with a need for a particular product or service. Requirements officials are responsible for technical requirements, for prescribing contract quality requirements, and for defining the requirement. According to agency officials, acquisition planning activities generally begin when the program office, along with requirements officials, identifies a need. The program office is primarily responsible for conducting market research, defining requirements in a document such as a statement of work, developing cost estimates, and developing a written acquisition plan, if required. In the 29 contracts we reviewed, there were non-DAWIA personnel with acquisition-related responsibilities identified as requirements officials.
4. Source selection board member: Evaluates contract proposals against requirements and recommends contractors for award.
5. Program/project manager: Serves as the principal technical expert and is usually the most familiar with the requirement and best able to identify potential technical trade-offs and determine whether the requirement can be met by a commercial solution. In the absence of a program office or program/project manager, requirements officials from the customer organization serve in a similar role as the program/project manager. In contrast to major weapon programs, for services acquisitions, a program office is not usually established, so the contracting organization works directly with the requirements organization—which typically consisted of non-DAWIA personnel with acquisition-related responsibilities in our selected contracts. DOD does not require a program/project manager to be appointed for services acquisitions, and there is no requirement for those serving in this role to be DAWIA personnel. Of the contracts in our sample, 13 reported having program/project managers and 16 did not. Some contracts had more than one person serving in a role similar to that of a program/project manager.
6. Legal advisor: Ensures that terms and conditions contemplated are consistent with the government’s legal rights, duties, and responsibilities. Reviews contracting documents and requests for proposals for legal sufficiency and advises on acquisition strategies and contracts.
7. Multifunctional team member: Plans and manages services acquisitions throughout the life of the requirement. The functional experts on the team maintain knowledge and provide continuity and stability. The duties, expertise, and contributions of each team member are important to the success of a services acquisition. Of the 29 contracts we reviewed, 24 used multifunctional teams or the equivalent, and 5 did not.
8. Functional commander: Directs or commands the requirements organization responsible for the actual performance of a given service. Identifies mission-essential services and develops, implements, and assists in the execution of services acquisitions. Some responsibilities may include developing acquisition strategy, overseeing performance, and monitoring the service throughout the life of the acquisition, including reviewing contractor performance documentation on a regular basis to ensure performance is compatible with the contract and mission objectives. Functional commanders are also responsible for assigning primary and alternate CORs and assigning functional experts to the multifunctional team. DOD officials in this role are generally non-DAWIA personnel with acquisition-related responsibilities.
9. Auditor: Conducts acquisition and contract-related audits at any phase in the services acquisition life cycle. Non-DAWIA auditors include those in the Army Audit Agency, the Naval Audit Service, the Air Force Audit Agency, and the DOD Inspector General. In our selected contracts, non-DAWIA audit personnel do not have and are not required to receive DAWIA certification.
10. Financial/budget officer: Serves as an advisor for fiscal and budgetary issues.
11. Price analyst: Analyzes and evaluates financial and cost-based data for reasonableness, completeness, accuracy, and affordability at the initiation or contract award phases of services acquisitions.
12. Small Business Administration advisor: Serves as the principal advisor and advocate for small business issues.
To determine the extent to which training recommendations from the Panel on Contracting Integrity (Panel) have been implemented, we examined whether DOD had implemented the Panel’s recommendations in 2007, 2008, and 2009 by reviewing the 2007, 2008, 2009, and 2010 reports. To assess the implementation of the 2010 recommendations, DPAP provided information on the status of the recommendations. Specifically, we compared the recommended actions from the 2007 report with the reported action in the 2008 report.
The same comparative analysis was conducted using the recommended actions from the 2008, 2009, and 2010 reports. We differentiated between recommendations that specifically mention training and those that did not, as well as recommendations in which training was involved in the implementation of the recommendation. We analyzed the supporting documents to assess the status and, based on our review, assigned one of the following four status assessments to each of the recommendations:
1. Fully Implemented. The entire wording of the action item has been fulfilled.
2. Partially Implemented. Only a portion of the action has been implemented. When the wording of the action item had multiple parts, if one part or a portion of a part had been implemented (but not all parts), we categorized the action item as “partially implemented.”
3. Not Implemented-Action Taken. No part of the action item has been implemented, but steps have been taken toward the completion of the action item. For example, if legislation had been introduced to address the action but had not been enacted into law, we categorized the action item as “not implemented-action taken.”
4. Not Implemented-No Action. No part of the action item has been completed, and no action has been taken to address the action item. For example, if the action item called for changes in legislation but no legislation has even been proposed, we categorized the action item as “not implemented-no action.”
We identified previous recommendations involving CORs—identified as surveillance personnel in table 5 below—in reports from 2005-2009 as being relevant to the training or management of non-DAWIA personnel with acquisition-related responsibilities. To determine the status of their implementation by DOD, we obtained and analyzed documentation from agency officials and assigned one of the following four status assessments to each of the recommendations:
1. Fully Implemented. The entire wording of the action item has been fulfilled.
2. Partially Implemented. Only a portion of the action has been implemented. When the wording of the action item had multiple parts, if one part or a portion of a part had been implemented (but not all parts), we categorized the action item as “partially implemented.”
3. Not Implemented-Action Taken. No part of the action item has been implemented, but steps have been taken toward the completion of the action item. For example, if legislation had been introduced to address the action but had not been enacted into law, we categorized the action item as “not implemented-action taken.”
4. Not Implemented-No Action. No part of the action item has been completed, and no action has been taken to address the action item. For example, if the action item called for changes in legislation but no legislation has even been proposed, we categorized the action item as “not implemented-no action.”
GAO Draft Report Dated SEPTEMBER, 2011 GAO-11-892 (GAO CODE 120930) “DEFENSE ACQUISITION WORKFORCE: BETTER IDENTIFICATION, DEVELOPMENT AND OVERSIGHT NEEDED FOR PERSONNEL INVOLVED IN ACQUIRING SERVICES”
RECOMMENDATION 1: The GAO recommends that the Secretary of Defense establish criteria and a timeframe for identifying non-DAWIA personnel with acquisition-related responsibilities, including requirements officials. (See page /GAO Draft Report.)
DoD RESPONSE: Concur.
RECOMMENDATION 2: The GAO recommends that the Secretary of Defense assess what critical skills non-DAWIA personnel with acquisition-related responsibilities might require to perform their role in the acquisition process and improve acquisition outcomes. (See page 17/GAO Draft Report.)
DoD RESPONSE: Concur.
RECOMMENDATION 3: The GAO recommends that the Secretary of Defense designate an organization that has the responsibility to track DOD's progress in identifying, developing, and overseeing non-DAWIA personnel with acquisition-related responsibilities to help ensure they have the skills necessary to perform their acquisition function. (See page 17/GAO Draft Report.)
DoD RESPONSE: Concur.
In addition to the contact above, Penny Berrier, Assistant Director; Patrick Breiding; Heather Miller; John K. Needham; Keo Vongvanith; Morgan Delaney Ramaker; Roxanna Sun; Julia Kennon; and John Krump made key contributions to this report.
In fiscal year 2010, more than half of the $367 billion the Department of Defense (DOD) spent on contracts was spent on services. Buying services is fundamentally different from buying weapon systems, yet most acquisition regulations, policies, processes, and training remain structured for acquiring weapon systems. Over the last decade, reports from GAO, DOD, and Congress have raised issues about services acquisitions and have also highlighted the importance of acquisition training. GAO previously reported on the training provided to the acquisition workforce as defined by the Defense Acquisition Workforce Improvement Act (DAWIA). This report addresses personnel working on services acquisitions who were outside the DAWIA acquisition workforce--termed non-DAWIA personnel with acquisition-related responsibilities--and the extent to which (1) DOD knows the composition of this population, (2) this population is taking acquisition training, and (3) DOD has implemented past recommendations related to this population. To complete this work, GAO reviewed a nongeneralizable sample of 29 service contracts, relevant policies, and recommendations from previous reports and met with key DOD officials. Non-DAWIA personnel with acquisition-related responsibilities represented more than half of the 430 personnel involved in the 29 services acquisition contracts in this review. Several organizations have been tracking and managing the DAWIA workforce, but no DOD organization has systematically identified non-DAWIA personnel with acquisition-related responsibilities or the competencies they need to conduct their acquisition duties, and none has been designated responsibility for overseeing this group. DOD is not required to identify these personnel and has not established a process to do so. Identifying this population is challenging, partly because, as DOD officials noted, it is a transient one that is dispersed across many DOD organizations.
Additionally, these people come from a variety of career fields and are often involved in acquisitions as a secondary duty. DOD has taken action to identify one part of this population--requirements personnel for major weapon systems--and provide them training, but has not done so for all non-DAWIA personnel with acquisition-related responsibilities. Most non-DAWIA personnel with acquisition-related responsibilities in GAO's sample received some acquisition training. The required training was varied and limited and applied largely to contracting officer's representatives (CORs), not to other non-DAWIA personnel such as requirements officials, technical assistants, or multifunctional team members. For example, the Air Force required two Air Force-specific phases of training, while Navy and Marine Corps policy did not specify what training was required. Demand for acquisition training courses by non-DAWIA personnel with acquisition-related responsibilities has been increasing in the past few years at the Defense Acquisition University, but DOD has limited information to gauge the long-term demand for training this population or the effectiveness of the training that is currently available. DOD has taken short-term actions to require training and provide resources for some non-DAWIA personnel with acquisition-related responsibilities. For example, DOD recognized the importance of CORs in several memoranda requiring that they be properly trained and appointed before contract performance begins on services acquisitions. DOD has made some progress in implementing the recommendations of reports from the Panel on Contracting Integrity and GAO that related to management and training of CORs--a portion of non-DAWIA personnel with acquisition-related responsibilities.
For example, for the four relevant GAO recommendations--which are related to training, assignment, and oversight of the CORs--DOD fully concurred with all of them, has fully implemented three, and is implementing a COR tracking system to address the remaining recommendation. The House Armed Services Committee and the Defense Science Board issued reports since 2009 that made recommendations that were relevant to this population but were made too recently for GAO to assess their implementation. For example, the House Armed Services Committee Panel on Defense Acquisition Reform report recommended DOD reform the services requirements process in order to address the different set of challenges services acquisitions pose compared to the procurement of goods. Among other things, GAO recommends that DOD establish criteria for identifying non-DAWIA personnel with acquisition-related responsibilities and assess the critical skills needed to perform their role in the acquisition process. DOD concurred with the recommendations.
To date, the commercial space launch industry has primarily focused on putting payloads, such as satellites, into orbit, using launch vehicles that are flown only once. The number of launches for this purpose has, however, dropped off, and the industry appears to be increasing its focus on space tourism. Apart from the five manned flights in 2004, efforts thus far have consisted of tests for research and development purposes, but companies are continuing to develop vehicles for manned flights. Concurrently, companies and states are developing additional spaceports to accommodate anticipated commercial space tourism flights, with states providing economic incentives for development. As part of FAA’s mission to promote the commercial space industry, federal funds have also supported infrastructure development at one spaceport. There are three main types of space launches—national security, civil, and commercial. National security launches are conducted by the Department of Defense for defense purposes, and civil launches by NASA for scientific and exploratory purposes. Commercial launch companies compete domestically and internationally for contracts to carry payloads, such as satellites, into orbit using expendable launch vehicles, which are unmanned, single-use vehicles. Except for the launches of SpaceShipOne in 2004, U.S. commercial space launches have been unmanned. Designed to carry a crew and one passenger, SpaceShipOne performed the first commercial reusable launch vehicle mission licensed by FAA. After reaching a peak of 22 launches in 1998 (see fig. 1), the number of commercial space launches began to fluctuate and generally decline following a downturn in the telecommunications services industry, which was the primary customer of the commercial space launch industry. In the last several years, two trends have emerged. First, there has been a drop-off in U.S. commercial orbital launches. In part, this may be because the U.S.
commercial space launch industry is not price competitive with foreign companies, some of which receive extensive government support, according to Department of Commerce officials. Second, FAA began issuing experimental permits in 2006 to companies seeking to conduct test launches of reusable launch vehicles. According to industry experts that we spoke with, over the past 3 years the commercial space launch industry has experienced a steady buildup of research and development efforts, including ground tests and low-altitude flight tests of reusable rocket-powered vehicles that are capable of takeoffs and landings. Manned commercial space launches took place for the first and only time with the five manned flights of SpaceShipOne in 2004. Although additional manned flights were anticipated, they have not materialized since we issued our report in 2006. A number of companies—including Scaled Composites, which is developing SpaceShipTwo—are continuing to develop vehicles for manned flights, but they are not yet developed to a testing stage, which would require a launch license or experimental permit. Since we reported in 2006, private companies and states have been developing additional spaceports to accommodate anticipated commercial space tourism flights and to expand the nation’s launch capacity. In 2006, there were six FAA-licensed spaceports and eight proposed spaceports. Since then, one of the proposed spaceports (Spaceport America in New Mexico) has begun operating and one (Gulf Coast Regional Spaceport) has terminated its plans. Two new spaceports in Florida have applied for FAA licenses. Figure 2 shows the existing and proposed spaceports and federal launch sites used for commercial launches.
States have provided economic incentives to developers—including passing legislation to decrease liability and lower the tax burden for developers, according to FAA—to build spaceports to attract space tourism and provide economic benefits to localities; FAA has provided funding assistance for infrastructure development. For example, New Mexico provided $100 million to construct Spaceport America. According to an official from the Oklahoma spaceport, Oklahoma provides approximately $500,000 annually to the spaceport for operations, and the state paid for the environmental impact statement and the safety analysis needed to apply for an FAA license. The Florida Space Authority, a state agency, invested over $500 million in new space industry infrastructure development, including upgrades to the launch pad, a new space operations support complex, and a reusable launch vehicle support complex. The Mid-Atlantic Regional Spaceport receives half of its funding from Virginia and Maryland, with the remainder coming from revenue from operations. According to FAA, Florida and Virginia also passed bills that grant an exemption from state income tax for either launch services or gains achieved from providing services to the International Space Station. In addition, the Mojave Spaceport in California received an FAA Airport Improvement Program grant of $7.5 million to expand an existing runway to allow for the reentry of horizontally landing reusable vehicles. FAA faces challenges in ensuring that it has a sufficient number of staff with the necessary expertise to oversee the safety of commercial space launches and spaceport operations. In addition, FAA will need to determine whether its current safety regulations are appropriate for all types of commercial space vehicles, operations, and launch sites. FAA will also need to develop safety indicators and collect data to help it determine when to begin to regulate crew and passenger safety after 2012. 
Continuing to avoid conflicts between its dual roles as a safety regulator and an industry promoter remains another issue to consider as the space tourism industry develops. In 2006, we raised concerns that if the space tourism industry developed as rapidly as some industry representatives suggested, FAA’s responsibility for licensing reusable launch vehicle missions would greatly expand. FAA’s experience in this area is limited because its launch safety oversight has focused primarily on unmanned launches of satellites into orbit using expendable launch vehicles. Many companies are developing space hardware of different designs that are being tested for the first time, requiring that FAA have a sufficient level of expertise to provide oversight. In addition, FAA has to have an adequate number of staff to oversee the anticipated growth in the number of launches at various locations. We recommended that FAA assess the levels of expertise and resources that will be needed to oversee the safety of the space tourism industry and the new spaceports under various scenarios and timetables. In response to our recommendations, FAA’s Office of Commercial Space Transportation hired 12 aerospace engineers, bringing its total staff to 71 full-time employees. In addition, since our report, FAA has established field offices at Edwards Air Force Base and NASA’s Johnson Space Center in anticipation of increased commercial space launches. We believe FAA has taken reasonable steps to ensure that it has adequate resources to fulfill its safety oversight role. However, if the industry begins to expand, as senior FAA officials predict, to 200 to 300 annual launches, a reassessment of FAA’s resources and areas of expertise would be appropriate. Moreover, as NASA-sponsored commercial space launches increase, FAA’s need for regulatory resources and expertise may change, according to industry experts we spoke with. 
FAA faces the challenge of ensuring that its regulations on licensing and safety requirements for launches and launch sites, which are based on safety requirements for expendable launch vehicle operations at federal launch sites, will also be suitable for operations at spaceports. We reported that the safety regulations for expendable launch vehicles may not be suitable for space tourism flights because of differences in vehicle types and launch operations, according to experts we spoke with. Similarly, spaceport operators and experts we spoke with raised concerns about the suitability of FAA safety regulations for spaceports. Experts told us that safety regulations should be customized for each spaceport to address the different safety issues raised by various types of operations, such as different orbital trajectories and differences in the way that vehicles launch and return to earth—whether vertically or horizontally. To address these concerns, we reported that it will be important to measure and track safety information and use it to determine if the regulations should be revised. We did not make recommendations to FAA concerning these issues because the Commercial Space Launch Amendments Act of 2004 required the Department of Transportation (DOT) to commission an independent report to analyze, among other things, whether expendable and reusable vehicles should be regulated differently from each other, and whether either of the vehicles should be regulated differently if carrying passengers. The report, issued in November 2008, concluded that the launch of expendable vehicles, when used to lift reusable rockets carrying crew and passengers, as well as the launch and reentry of reusable launch vehicles with crew and passengers, should be regulated differently from the launch of expendable vehicles without humans aboard. 
Similar to our finding, the report noted that the development of a data system to monitor the development and actual performance of commercial launch systems and to better identify different launch risk factors and criteria would greatly assist the regulatory process. FAA has not developed such a data system because so few commercial launches have occurred. Although FAA is prohibited from regulating crew and passenger safety before 2012 except in response to serious injuries or fatalities or an event that poses a high risk of causing a serious or fatal injury, FAA is responsible for the protection of the uninvolved public, which could be affected by a failed mission. FAA has interpreted this limited authority as allowing it to regulate crew safety in certain circumstances and has been proactive in issuing a regulation concerning emergency training for crews and passengers. However, FAA has not developed indicators that it would use to monitor the safety of the developing space tourism sector and determine when to step in and regulate human space flight. To allow the agency to be proactive about safety, rather than responding only after a fatality or serious incident occurs, we recommended that FAA identify and continually monitor indicators of space tourism industry safety that might trigger the need to regulate crew and passenger safety before 2012. According to agency officials, FAA has not addressed our recommendation because there have been no launches with passengers. When such launches occur, those same officials told us, they intend to collect and analyze data on safety-related anomalies, safety-critical system failures, incidents, and accidents. Those officials also told us that they intend to develop a means to share information with and assess lessons learned from the private spaceflight industry. It is unclear when FAA will or should begin regulating crew and passenger safety, since data for evaluating risk do not exist. 
A senior FAA official told us that the agency does not plan to issue new regulations even after the 2012 prohibition is lifted and that it would like to see how the current procedures, which require passengers to sign an acknowledgment of informed consent, operate before deciding whether to issue new regulations. Nonetheless, FAA is taking steps that will enable it to be prepared to regulate. Space tourism companies that we spoke with stated that they now informally collect lessons learned and share best practices with each other and with FAA, which eventually could lead to industry standards. Senior FAA officials also told us that FAA is reviewing NASA’s human rating of space launch vehicles as well as FAA’s Office of Aviation Safety aircraft certification process as they consider possible future regulations on human spaceflight standards. In addition, FAA’s Office of Commercial Space Transportation expects to work closely with its industry advisory group—the Commercial Space Transportation Advisory Committee—on the issue. We believe FAA is taking reasonable preliminary steps to regulate crew and passenger safety. In 2006, we reported that FAA faced the potential challenge of overseeing the safety of commercial space launches while promoting the industry. While we found no evidence that FAA’s promotional activities—such as sponsoring an annual industry conference and publishing industry studies—conflicted with its safety regulatory role, we noted that potential conflicts may arise as the space tourism sector develops. We reported that as the commercial space launch industry evolves, it may be necessary to separate FAA’s regulatory and promotional activities. Recognizing the potential conflict, Congress required the 2008 DOT-commissioned report to discuss whether the federal government should separate the promotion of human space flight from the regulation of such activity.
We suggested as a matter for congressional consideration that, if the report did not fully address the potential for a conflict of interest, Congress should revisit the granting of FAA’s dual mandate for safety and promotion of human space flight and decide whether the elimination of FAA’s promotional role is necessary to alleviate the potential conflict. The 2008 commissioned report concluded there was no compelling reason to remove promotional responsibilities from FAA in the near term (through 2012). Moreover, the report noted that the Office of Commercial Space Transportation’s estimated resource allocation for promotional activities was approximately 16 percent of the office’s budget in fiscal year 2008, which was significantly less than what the office allocated for activities directly related to safety. However, the report noted that the commercial space launch industry will experience significant changes in its environment in the coming decades; therefore, periodic review of this issue is warranted. We concur with the commissioned report’s assessment and see no need for Congress to step in at this time to require a separation of regulatory and promotional activities. However, FAA and Congress must remain vigilant so that an inappropriate relationship between FAA and industry—such as the one alleged in 2008 between FAA and the airline industry—does not develop with the commercial space launch industry. The expected expansion of the U.S. commercial space launch industry, driven by anticipated developments such as space tourism, the retirement of NASA’s space shuttle, and the agency’s shift to using the commercial sector to provide space transportation, will affect the federal role in various ways, such as by increasing FAA’s licensing and regulatory workload.
To assist in the expansion of the industry, other issues will emerge for federal agencies and Congress to consider, such as whether to assist the industry in lowering costs by extending existing liability indemnification and how to enhance the global competitiveness of the U.S. industry. Another issue that will emerge as the industry grows is how FAA will integrate space flights with aircraft traffic as part of efforts to develop the next generation air transportation system (NextGen). A national space launch strategy, which is currently lacking, could provide a cohesive framework for addressing such issues and establishing national priorities. Industry experts that we spoke with and senior officials at FAA expect that the number of commercial space launches will increase over the next several years because of the continued development of vehicles for human space flight and in response to prize competitions. Starting in the next 3 to 5 years, senior FAA officials expect several companies to begin offering paying customers the opportunity to fly onboard suborbital space flights, with numerous launches taking place each year. Virgin Galactic is among the companies that are undertaking research and development for launch vehicles designed to serve the anticipated space tourism market. FAA reported in 2008 that the company had sold 250 seats for its flights. Scaled Composites and Virgin Galactic formed a joint venture to develop SpaceShipTwo for Virgin Galactic. Other companies, such as XCOR Aerospace and Armadillo Aerospace, have announced plans to develop vehicles to serve the personal spaceflight market. In addition, prize competitions are expected to spur the growth of the space launch industry. For example, the Northrop Grumman Lunar Lander Challenge featured $1.65 million in prizes for vehicles that can simulate the liftoff and landing of a lunar spacecraft; prizes were awarded to Masten Space Systems and Armadillo Aerospace in November 2009. 
Both companies told us that they intend to apply for FAA experimental permits soon. In addition, the $30 million Google Lunar X PRIZE is offered to those who can safely land a robot on the surface of the moon, travel 500 meters, and send video images and data to earth by December 2014. Such competitions spur research and development and require FAA licensing or permitting to ensure the safety of the uninvolved public. Senior FAA officials also expect the agency’s licensing and oversight responsibilities to increase as NASA begins to rely on foreign partners and private industry to deliver cargo, and eventually crewmembers, to the International Space Station after it retires the space shuttle in 2010 or shortly thereafter. Two companies—SpaceX and Orbital Sciences—have received NASA contracts to develop new launch vehicles that will service the International Space Station. According to FAA officials and industry experts, test flights for the new vehicles are expected to begin next year with SpaceX at the beginning of the year and Orbital Sciences near the end of the year. FAA is working with SpaceX on its launch license application and Orbital Sciences is in the pre-application phase. FAA has established a field office at the Johnson Space Center in response to the anticipated increase in launches. We reported in 2006 that as the commercial space launch industry expands, it will face key competitive issues concerning high launch costs and export controls that affect its ability to sell its services abroad. Foreign competitors have historically offered lower launch prices than U.S. launch providers, and the U.S. industry has responded by merging launch companies, forming international partnerships, and developing lower-cost launch vehicles. For example, Boeing and Lockheed Martin merged their launch operations to form United Launch Alliance, and SpaceX developed a lower-cost launch vehicle. The U.S. 
government has responded to the foreign competition by providing the commercial space launch industry support, including research and development funds, government launch contracts, use of its launch facilities, and third-party liability insurance through which it indemnifies launch operators. The continuation of such federal involvement will assist industry growth, according to industry experts that we spoke with. For example, industry players have called for the continuation of indemnification to support U.S. competitiveness. Indemnification secures another party against risk or damage. The U.S. government indemnifies launch operators by providing catastrophic loss protection covering third-party liability claims in excess of required launch insurance in the event of a commercial launch incident. Currently, launch operators are required to buy third-party liability insurance for up to $500 million in addition to insurance for their vehicle and its operations, and the U.S. government provides up to $1.5 billion in indemnification. The law that allows for indemnification expires in December 2009. Some industry experts have said that it is important that the law be extended because the cost of providing insurance for launches could be unaffordable without indemnification. According to a space insurance expert, as there has not been an incident requiring the U.S. government to pay out third-party claims, the cost to the government of providing indemnification has been only for administrative purposes. Nonetheless, according to a senior Commerce official, there is always a possibility of a launch mishap that could invoke indemnification. FAA has asked for the law’s extension as a means to promote the growth of the industry, and the Department of Commerce supports this position. A senior Commerce official told us that without federal indemnification, smaller launch companies may go out of business. 
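The layered liability structure described above can be sketched as a simple allocation using the dollar figures cited in this testimony: the operator's required third-party insurance covers a claim first (up to $500 million), federal indemnification covers the next tranche (up to $1.5 billion), and anything beyond both layers falls outside this framework. This is an illustrative simplification of the statutory scheme, not the actual legal formula; the function and its names are hypothetical.

```python
# Illustrative sketch of layered third-party liability coverage for a
# licensed commercial launch, using the caps cited in the testimony.
# This simplifies the statutory mechanics; it is not the legal formula.
OPERATOR_INSURANCE_CAP = 500_000_000      # required third-party insurance
GOVERNMENT_INDEMNITY_CAP = 1_500_000_000  # federal indemnification layer

def allocate_claim(claim: int) -> dict:
    """Split a third-party claim across the two coverage layers."""
    operator = min(claim, OPERATOR_INSURANCE_CAP)
    remaining = claim - operator
    government = min(remaining, GOVERNMENT_INDEMNITY_CAP)
    return {
        "operator_insurance": operator,
        "government_indemnification": government,
        "above_all_layers": remaining - government,
    }
```

For example, under this sketch a hypothetical $2.3 billion claim would exhaust both the $500 million insurance layer and the $1.5 billion indemnification layer, leaving $300 million uncovered, which helps illustrate why industry players view extension of the indemnification law as important.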
In addition, industry representatives that we interviewed told us that export licensing requirements affect the ability of the U.S. commercial space launch industry to sell its services abroad. These regulations, which are designed to establish controls ensuring that arms exports are consistent with national security and foreign policy interests, cover launch vehicles because they can deliver chemical, biological, and nuclear weapons. A senior Department of Commerce official told us that the U.S. industry has asked Congress to consider changing the statute that restricts the export of space manufacturing items. A change in statute would allow the Departments of State and Defense to review individual items, as they do for other industries. As the space tourism industry develops, another issue that will arise is establishing a foundation for a common global approach to launch safety. According to senior FAA officials, space tourism operations are planned to be international, with takeoffs and landings from U.S. spaceports to United Arab Emirates and Singapore spaceports, among others. Thus, the development, interoperability, and harmonization of safety standards and regulations, particularly concerning space tourism flights, will be important for the safety of U.S. and international space operations. In the future, if suborbital point-to-point space travel becomes a reality, entirely new issues will have to be addressed, including bilateral and international interoperability, air and space traffic integration, existing treaty and law implications, national security issues (such as friend or foe identification), customs, international technical standards, and other transportation issues. In response, FAA has established an international outreach program to promote FAA commercial space transportation regulations as a model for other countries to adopt.
The outreach program includes establishing initial contacts with interested countries and introductory briefings about FAA regulations. NextGen—FAA’s efforts to transform the current radar-based air traffic management system into a more automated, aircraft-centered, satellite-based system—will need to accommodate spacecraft that are traveling to and from space through the national airspace system. As the commercial space launch industry grows and space flight technology advances, FAA expects that commercial spacecraft will frequently make that transition and the agency will need tools to manage a mix of diverse aircraft and space vehicles in the national airspace system. In addition, the agency will need to develop new policies, procedures, and standards for integrating space flight operations into NextGen. For example, it will have to define new upper limits to the national airspace system to include corridors for flights transitioning to space; establish new air traffic procedures for flights of various types of space vehicles, such as aircraft-ferried spacecraft and gliders; develop air traffic standards for separating aircraft and spacecraft in shared airspace; and determine controller workload and crew rest requirements for space operations. FAA has begun to consider such issues and has developed a concept of operations document. Finally, an overarching issue that has implications for the U.S. commercial space launch industry is the lack of a comprehensive national space launch strategy, according to federal officials and industry experts. Numerous federal agencies have responsibility for space activities, including FAA’s oversight of commercial space launches, NASA’s scientific space activities, the Department of Defense’s national security space launches, the State Department’s involvement in international trade issues, and the Department of Commerce’s advocacy and promotion of the industry.
According to the National Academy of Sciences, aligning the strategies of the various civil and national security space agencies will address many current issues arising from or exacerbated by the current uncoordinated, overlapping, and unilateral strategies. A process of alignment offers the opportunity to leverage resources from various agencies to address such shared challenges as the diminished space industrial base, the dwindling technical workforce, and reduced funding levels, according to the Academy report. A national space launch strategy could identify and fill gaps in federal policy concerning the commercial space launch industry, according to senior FAA and Commerce officials. Our research has identified several gaps in federal policy for commercial space launches. For example, while FAA has safety oversight responsibility for the launch and re-entry of commercial space vehicles, agency officials told us that no federal entity has oversight of orbital operations, including the collision hazard while in orbit posed by satellites and debris (such as spent rocket stages, defunct satellites, and paint flakes from orbiting objects). Another issue that has not been resolved is the role of the National Transportation Safety Board (NTSB) in investigating any accidents that occur. NTSB does not have space transportation explicitly included in its statutory jurisdiction, although it does have agreements with FAA and the Air Force under which it will lead investigations of commercial space launch accidents. The 2008 commissioned report on human space flight suggested that Congress may want to consider explicitly designating a lead agency for accident investigations involving space vehicles to avoid potential overlapping jurisdictions. According to senior officials we spoke with at FAA and Commerce, the need for an overall U.S. 
space launch policy that includes commercial space launches is being discussed within DOT and across departments as part of the administration’s review of national space activities, but the development of a national policy has not yet begun.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions from you or other Members of the Subcommittee.

For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Teresa Spisak, Maureen Luna-Long, Rosa Leung, Erica Miles, David Hooper, and Elizabeth Eisenstadt.

In response to GAO’s prior recommendation that it assess its future resource needs, FAA has assessed resources and hired 12 additional aerospace engineers.

GAO also recommended that FAA’s Office of Commercial Space Transportation develop a formal process for consulting with the Office of Aviation Safety about licensing reusable launch vehicles. FAA has not developed a formal process, but the two offices signed a formal agreement for the licensing of SpaceShipTwo that delineates the responsibilities of each office. Agency officials expect that a similar process will be used as future applications are received.

GAO further recommended that FAA identify and continually monitor space tourism safety indicators that might trigger the need to regulate crew and flight participant safety before 2012. No action has been taken on monitoring safety indicators because commercial human space flights have not occurred since the SpaceShipOne launches in 2004. When commercial human space flights occur, FAA plans to monitor key safety indicators, including safety-related anomalies, safety-critical system failures, incidents, and accidents. FAA officials plan to track these indicators for precursors, trends, or lessons learned that would warrant additional FAA regulation.

GAO also recommended that FAA develop and issue guidance on the circumstances under which it would regulate crew and flight participant safety before 2012.
No action has been taken to issue guidance. However, senior FAA officials say that the agency has held internal discussions on the circumstances under which it would regulate crew and space flight participant safety before 2012 in the event of a casualty or close call. The officials noted that launch vehicle operators are required to report mishaps and safety-related anomalies and failures to FAA and to take appropriate corrective actions before the next launch.

Finally, GAO recommended that, as long as it has a promotional role, FAA work with the Department of Commerce to develop a memorandum of understanding that clearly delineates the two agencies’ respective promotional roles in line with their statutory obligations and larger agency missions. FAA’s Office of Commercial Space Transportation and Commerce’s Office of Space Commercialization signed a memorandum of understanding in September 2007. FAA has no agreement, however, with Commerce’s International Trade Administration, which also has responsibilities for promoting the commercial space industry and its competitiveness.
Since the Government Accountability Office (GAO) reported on the commercial space launch industry in 2006, the industry has evolved and moved further toward space tourism. Commercial space tourism promises to make human space travel available to the public for the first time. The Federal Aviation Administration (FAA) oversees the safety of commercial space launches, licensing and monitoring the safety of such launches and of spaceports (sites for launching spacecraft); it also promotes the industry. FAA is responsible for overseeing the safety of space tourism as well, but it may not regulate crew and passenger safety before 2012 except in response to high-risk incidents, serious injuries, or fatalities. This testimony addresses (1) recent trends in the commercial space launch industry, (2) challenges that FAA faces in overseeing the industry, and (3) emerging issues that will affect the federal role. This statement is based on GAO's October 2006 report on commercial space launches, updated with information GAO gathered from FAA, the Department of Commerce, and industry experts in November 2009 on industry trends and recent FAA actions. In past work, GAO recommended that FAA take several actions to improve its oversight of commercial space launches, including assessing its future resource needs. FAA has taken some steps to address the recommendations.

Recent Trends: Historically, the commercial space launch industry focused primarily on putting payloads, such as satellites, into orbit, using launch vehicles that did not return to earth. Such launches have, however, dropped off, and the industry is increasing its focus on space tourism. Since five manned commercial flights demonstrated the potential for commercial space tourism in 2004, companies have pursued research and development and are further developing reusable vehicles for manned flights.
Concurrently, companies and states are developing additional spaceports to accommodate anticipated increases in commercial space launches. States have provided economic incentives, and FAA has provided some funding for development. Oversight Challenges: In overseeing the commercial space launch industry, including the safety of space tourism, FAA faces several challenges. These include maintaining a sufficient number of staff with the necessary expertise to oversee the safety of launches and spaceport operations; determining whether FAA's current safety regulations are appropriate for all types of commercial space vehicles, operations, and launch sites; developing information to help FAA decide when to regulate crew and passenger safety after 2012; and continuing to avoid conflicts between FAA's regulatory and promotional roles. Emerging Issues: The U.S. commercial space launch industry is expected to expand as space tourism develops and the National Aeronautics and Space Administration starts to rely on the commercial sector for space transportation. This expansion will affect the federal role. For example, FAA will face increases in its licensing and regulatory workload, and federal agencies and Congress will face decisions about whether to support the U.S. industry by continuing to provide liability indemnification to lower its costs. Additionally, FAA will face policy and procedural issues when it integrates the operations of spacecraft into its next generation air transportation system. Finally, coordinating the federal response to the commercial space industry's expansion is an issue for the federal government in the absence of a national space launch strategy for setting priorities and establishing federal agency roles.
According to ISS, over 28,000 publicly traded corporations globally send out proxy statements each year that contain important facts about more than 250,000 separate issues on which shareholders are asked to vote. Votes are solicited on a variety of key issues that could potentially affect the corporations’ value, such as the election of directors, executive compensation packages, and proposed mergers and acquisitions, as well as other, more routine, issues that may not affect value, such as approving an auditor and changing a corporate name. The proxy statement typically includes a proxy ballot (also called a proxy card) that allows shareholders to appoint a third party (proxy) to vote on the shareholder’s behalf if the shareholder decides not to attend the meeting. The shareholder may instruct the proxy how to vote the shares or may opt to grant the proxy discretion to make the voting decision. The proxy card may be submitted to the company via the mail or online.

The proxy advisory industry has grown over the past 20 years as a result of various regulatory and market developments. The management of a mutual fund’s or pension plan’s assets, including the voting of proxies, is often delegated to a person who is an investment adviser subject to the Investment Advisers Act of 1940. In a 1988 letter, known as the “Avon Letter,” the Department of Labor took the position that the fiduciary act of managing employee benefit plan assets includes the voting of proxies associated with shares of stock owned by the plan. According to industry experts, managers of employee retirement plan assets began to seek help in executing their fiduciary responsibility to vote proxies in their clients’ best interests. Consequently, the proxy advisory industry—particularly ISS, which had been established in 1985—started to grow.
According to industry experts, ISS’s reputation and dominance in the proxy advisory industry continued to grow in the 1990s and early 2000s, fueled by the growing fiduciary requirements of institutional investors and increased shareholder activism. This increased shareholder activism has been attributed in part to reaction by investors to the massive financial frauds perpetrated by management of public companies, including the actions that led to the bankruptcies of Enron and WorldCom. Many institutional investors sought the services of proxy advisory firms to assist in their assessments of the corporate governance practices of publicly traded companies and to carry out the mechanics of proxy voting. Finally, in 2003, SEC adopted a rule and amendments under the Investment Advisers Act of 1940 that require registered investment advisers to adopt policies and procedures reasonably designed to ensure that proxies are voted in the best interests of clients, which industry experts also cited as a reason for the continued growth of the proxy advisory industry.

Today, the proxy advisory industry comprises five major firms, with ISS as the dominant player with over 1,700 clients. The other four firms—Marco Consulting Group (MCG), Glass Lewis & Co. (Glass Lewis), Proxy Governance, Inc. (PGI), and Egan-Jones Proxy Services (Egan-Jones)—have much smaller client bases and are relatively new to the industry: Glass Lewis, PGI, and Egan-Jones were all created within the past 6 years. Founded in 1985, ISS serves clients with its core business, which includes analyzing proxy issues and offering research and vote recommendations. ISS also provides Web-based tools and advisory services to corporate issuers through ISS Corporate Services, Inc., a separate division established in 1997 that was spun out into a wholly owned subsidiary in 2006. RiskMetrics Group, a financial risk management firm, acquired ISS in January 2007.
RiskMetrics Group provides risk management tools and analytics to assist investors in assessing risk in their portfolios. MCG was established in 1988 to provide investment analysis and advice to Taft-Hartley funds and has since expanded its client base to public employee benefit plans. Glass Lewis, established in 2003, provides proxy research and voting recommendations and was acquired by Xinhua Finance Limited, a Chinese financial information and media company, in 2007. Established in 2004, PGI offers proxy advice and voting recommendations and is a wholly owned subsidiary of FOLIOfn, Inc., a financial services company that also provides brokerage services and portfolio management technology for individual investors and investment advisers. Egan-Jones was established in 2002 as a division of Egan-Jones Ratings Company, which was incorporated in 1992. Egan-Jones provides proxy advisory services to institutional clients to facilitate making voting decisions.

Of the five major proxy advisory firms, three—ISS, MCG, and PGI—are registered with SEC as investment advisers and are subject to agency oversight, while, according to corporate officials, the other two firms are not. In their SEC registration filings, the three registered firms have identified themselves as pension consultants as the basis for registering as investment advisers under the Investment Advisers Act. Although Glass Lewis initially identified itself as a pension consultant and registered with SEC as an investment adviser, it withdrew its registration in 2005. According to SEC officials, an investment adviser is not required to disclose a reason for its decision to withdraw its registration in the notice of withdrawal filed with SEC. Officials from Glass Lewis and Egan-Jones did not elaborate on their decisions not to register with SEC as investment advisers, other than to note that the decisions were made with advice from their respective counsel.
In the proxy advisory industry, various conflicts of interest can arise that have the potential to influence the research conducted and voting recommendations made by proxy advisory firms. The most commonly cited potential for conflict involves ISS, which provides services to both institutional investor clients and corporate clients. Several other circumstances may lead to potential conflicts on the part of proxy advisory firms, including situations in which owners or executives of proxy advisory firms have an ownership interest in or serve on the board of directors of corporations that have proposals on which the firms are offering vote recommendations. Although the potential for these types of conflicts exists, in its examinations of proxy advisory firms that are registered as investment advisers, SEC has not identified any major violations, such as a failure to disclose a conflict, or taken any enforcement actions to date. Industry professionals and institutional investors we interviewed cited ISS’s business model as presenting the greatest potential conflict of interest associated with proxy advisory firms because ISS offers proxy advisory services to institutional investors as well as advisory services to corporate clients. Specifically, ISS provides institutional investor clients with recommendations for proxy voting and ratings of companies’ corporate governance. In addition, ISS helps corporate clients develop proposals to be voted on and offers corporate governance consulting services to help clients understand and improve their corporate governance ratings. Because ISS provides services to both institutional investors and corporate clients, there are various situations that can potentially lead to conflicts. For example, some industry professionals stated that ISS could help a corporate client design an executive compensation proposal to be voted on by shareholders and subsequently make a recommendation to investor clients to vote for this proposal. 
Some industry professionals also contend that corporations could feel obligated to subscribe to ISS’s consulting services in order to obtain favorable proxy vote recommendations on their proposals and favorable corporate governance ratings. One industry professional further believes that, even if corporations do not feel obligated to subscribe to ISS’s consulting services, they could still feel pressured to adopt a particular governance practice simply to meet ISS’s standards, even though the corporations may not see the value of doing so.

ISS has disclosed and taken steps to help mitigate situations that can potentially lead to conflicts. For example, on its Web site, ISS explains that it is “aware of the potential conflicts of interest that may exist between proxy advisory service … and the business of ISS Corporate Services, Inc.” The Web site also notes that “ISS policy requires every ISS proxy analysis to carry a disclosure statement advising the client of the work of ICS and advising ISS’s institutional clients that they can get information about an issuer’s use of ICS’s products and services.” In addition, some institutional investors we spoke with noted that ISS has on occasion disclosed to them, on a case-by-case basis, the existence of a specific conflict related to a particular corporation. In addition to disclosure, ISS has implemented policies and procedures to help mitigate potential conflicts. For example, according to ISS, it has established a firewall that includes maintaining separate staff for its proxy advisory and corporate businesses, which operate in separate buildings and use segregated office equipment and information databases in order to help avoid discovery of corporate clients by the proxy advisory staff. ISS also notes on its Web site that it is a registered investment adviser and is subject to the regulatory oversight of SEC.
In addition, according to ISS’s Web site, corporations purchasing advisory services sign an agreement acknowledging that use of such services does not guarantee preferential treatment from ISS’s division that provides proxy advisory services. All of the institutional investors—both large and small—we spoke with that subscribe to ISS’s services said that they are satisfied with the steps that ISS has taken to mitigate its potential conflicts. Most institutional investors also reported conducting due diligence to obtain reasonable assurance that ISS or any other proxy advisory firm is independent and free from conflicts of interest. As part of this process, many of these institutional investors said they review ISS’s conflict policies and periodically meet with ISS representatives to discuss these policies and any changes to ISS’s business that could create additional conflicts. Finally, as discussed in more detail later in this report, institutional investors told us that ISS’s recommendations are generally not the sole basis for their voting decisions, which further reduces the chances that these potential conflicts would unduly influence how they vote.

Although institutional investors said they generally are not concerned about the potential for conflicts from ISS’s businesses and are satisfied with the steps ISS has taken to mitigate such potential conflicts, some industry analysts we contacted said there remains reason to question the effectiveness of those steps. For example, one academic said that while ISS is probably doing a fair job managing its conflicts, it is difficult to confirm the effectiveness of the firm’s mitigation procedures because ISS is a privately held company, which restricts access to information about it. Moreover, according to another industry analyst, because ISS’s recommendations are often reported in the media, the corporate consulting and proxy advisory services units could become aware of each other’s clients.
In addition to the potential conflict of interest discussed above, several other situations in the proxy advisory industry could give rise to potential conflicts. Specifically:

Owners or executives of proxy advisory firms may have a significant ownership interest in or serve on the board of directors of corporations that have proposals on which the firms are offering vote recommendations. A few institutional investors told us that such situations have been reported to them by ISS and Glass Lewis, both of which, in order to avoid the appearance of a conflict, did not make voting recommendations.

Institutional investors may submit shareholder proposals to be voted on at corporate shareholder meetings. This raises the concern that proxy advisory firms will make favorable recommendations to other institutional investor clients on such proposals in order to maintain the business of the investor clients that submitted them.

Several proxy advisory firms are owned by companies that offer other financial services to various types of clients, as is common in the financial services industry. This is the case at ISS, Glass Lewis, and PGI and may present situations in which the interests of different sets of clients diverge.

SEC reviews registered investment advisers’ disclosure and management of potential conflicts, as well as proxy voting situations where a potential conflict may arise. Specifically, SEC’s Office of Compliance Inspections and Examinations monitors the operations and conducts examinations of registered investment advisers, including proxy advisory firms. An SEC official stated that, as part of these examinations, SEC may review the adequacy of disclosure of a firm’s owners and potential conflicts; particular products and services that may present a conflict; the independence of a firm’s proxy voting services; and the controls that are in place to mitigate potential conflicts.
As discussed previously, three of the five proxy advisory firms (ISS, MCG, and PGI) are registered as investment advisers, while Glass Lewis and Egan-Jones are not. According to SEC, to date, the agency has not identified any major violations of applicable federal securities laws in its examinations of proxy advisory firms that are registered as investment advisers and has not initiated any enforcement action against these firms.

As the dominant proxy advisory firm, ISS has gained a reputation with institutional investors for providing reliable, comprehensive proxy research and recommendations, making it difficult for competitors to attract clients and compete in the market. As shown below in table 1, ISS’s client base currently includes an estimated 1,700 institutional investors, more than the other four major firms combined. Several of the institutional investors we spoke with that subscribe to ISS’s services explained that they do so because they have relied on ISS for many years and trust it to provide reliable, efficient services. They said that they have little reason to switch to another service provider because they are satisfied with the services they have received from ISS over the years. Because of this level of client satisfaction, other providers of proxy advisory services may have difficulty attracting their own clients. In addition, because of ISS’s dominance and perceived market influence, corporations may feel obligated to be more responsive to requests from ISS for information about proposals than they might be to other, less-established proxy advisory firms, resulting in a greater level of access by ISS to corporate information that might not be available to other firms. Industry analysts explained that, in addition to overcoming ISS’s reputation and dominance in the proxy advisory industry, proxy advisory firms must offer comprehensive coverage of corporate proxies and implement sophisticated technology to attract clients and compete.
For instance, institutional investors often hold shares in thousands of different corporations and may not be interested in subscribing to proxy advisory firms that provide research and voting recommendations on only a limited portion of these holdings. As a result, proxy advisory firms need to provide thorough coverage of institutional holdings, and unless they offer comprehensive services from the beginning of their operations, they may have difficulty attracting clients. In addition, academics and industry experts we spoke with said that new firms need to implement a sophisticated level of technology to provide the research and proxy vote execution services that clients demand. The initial investment required to develop and implement such technology can be a significant expense for firms.

Although newer proxy advisory firms may face challenges attracting clients and establishing themselves in the industry, several of the professionals we spoke with believed that these challenges could be overcome. For example, although firms may need to offer comprehensive coverage of corporate proxies in order to attract clients, and although ISS might have access to corporate information that other firms do not, much of the information needed to conduct research and offer voting recommendations is easily accessible. Specifically, corporations’ annual statements and proxy statements are filed with SEC, are publicly available, and contain most of the information that is needed to conduct research on corporations and make proxy voting recommendations. Also, although developing and implementing the technology required to provide research and voting services can be challenging, various industry professionals told us that once a firm has done so, the marginal cost of providing services to additional clients and of updating and maintaining such technology is relatively low.
Some of the competitors seeking to enter the proxy advisory industry in recent years that we spoke with have offered their services as alternatives to ISS. Specifically, they have attempted to differentiate themselves from ISS by providing only proxy advisory services to institutional investor clients. ISS’s competitors have chosen not to provide corporate consulting services in part to avoid the potential conflicts that exist at ISS. Proxy advisory firms have also attempted to differentiate themselves from the competition on the basis of the types of services provided. For example, some firms have started to focus their research and recommendation services on particular types of proxy issues or on issues specific to individual corporations. The institutional investors we spoke with had a variety of opinions about the level of competition in the industry. Some questioned whether the existing number of firms is sufficient, while others questioned whether the market could sustain the current number of firms. However, many of the institutional investors believe that increased competition could help reduce the cost and increase the range of available proxy advisory services. For example, some institutional investors said that they have been able to negotiate better prices with ISS because other firms have recently entered the market. While some of these newer proxy advisory firms have attracted clients, it is too soon to tell what the firms’ ultimate effect on competition will be. We conducted structured interviews with 31 randomly selected institutional investors to gain an understanding of the ways in which they use proxy advisory firms and the influence that such firms have on proxy voting. 
Of the 20 large institutional investors we interviewed, 19 reported that they use proxy advisory services in one or more ways that may serve to limit the influence that proxy advisory firms have on proxy voting results (see table 2), while only 1 reported relying heavily on a proxy advisory firm’s research and recommendations. The following summarizes several of the reasons that large institutional investors’ reliance on proxy advisory firms’ research and recommendations is limited:

Most of the large institutional investors we spoke with (15 out of 20) reported that they generally rely more on their own in-house research and analyses to make voting decisions than on the research and recommendations provided by their proxy advisory services providers. These institutional investors tend to have their own in-house research staffs, and their in-house research reportedly drives their proxy voting decisions. They explained that they use the research and recommendations provided by proxy advisory firms to supplement their own analysis and as one of many factors they consider when deciding how to vote.

In addition, many (14) of the large institutional investors we contacted reported that they subscribe to a customized voting policy that a proxy advisory firm executes on the institutions’ behalf. These institutional investors develop their own voting policies and guidelines that instruct the advisory firm how to vote on any given proxy issue. In such instances, the proxy advisory firms simply apply their clients’ voting policies, which then drive the voting decisions.

Further, 8 of the large institutional investors we contacted explained that they subscribe to more than one proxy advisory firm to help determine how to vote. These institutional investors said that they consider multiple sets of proxy advisory firm research and recommendations to gain a broader range of information on proxy issues and to help make well-informed voting decisions.
We also interviewed representatives from 11 smaller institutional investors, and the results of these interviews suggest that proxy advisory firm recommendations are of greater importance to these institutions than they are to the large institutional investors we spoke with. In particular, representatives from smaller institutional investors were more likely to say that they rely heavily on their proxy advisory firm and vote proxies based strictly on the research and recommendations of their firm, given these institutions’ limited resources. Consequently, the level of influence held by proxy advisory firms appears greater with these smaller institutional investors. However, whether large or small, all of the institutional investors we spoke with explained that they retain the fiduciary obligation to vote proxies in the best interest of their clients irrespective of their reliance on proxy advisory firms. Institutional investors emphasized that they do not delegate this responsibility to proxy advisory firms and retain the right to override any proxy advisory firm recommendations, limiting the amount of influence proxy advisory firms hold. In addition, large and small institutional investors reported that they tend to provide greater in-house scrutiny to, and rely even less on, proxy advisory firm recommendations about certain high-profile or controversial proxy issues, such as mergers and acquisitions or executive compensation. Institutional investors’ perspectives on the limited influence of proxy advisory firms reflected what we heard from professionals that we spoke with who have knowledge of the industry. Many of these industry analysts and academics agreed that large institutional investors would be less likely than small institutional investors to rely on proxy advisory firms, because large institutions have the resources available to conduct research and subscribe to more than one proxy advisory service provider. 
These professionals also thought that large institutional investors would be likely to use proxy advisory firms as one of several factors they consider in the research and analysis they perform to help them decide how to vote proxies. Further, several believed that small institutional investors would be more likely to vote based strictly on proxy advisory firms’ recommendations, because they do not have the resources to conduct their own research. The results of our work suggest that the overall influence of advisory firms on proxy vote outcomes may be limited. In particular, large institutional investors, which cast the great majority of proxy votes made by all institutional investors with over $1 billion in assets, reportedly place relatively less emphasis on the firms’ research and recommendations than smaller institutional investors. However, we could not reach a definitive conclusion about the firms’ influence because the institutional investors we contacted were not necessarily representative of all such investors. Further, we could not identify any studies that comprehensively isolated advisory firm research and recommendations from other factors that may influence institutional investors’ proxy voting. We provided a draft of this report to SEC for its review and comment. SEC provided technical comments, which we incorporated into the final report, as appropriate. We also provided relevant sections of the draft to the proxy advisory firms for a technical review of the accuracy of the wording and made changes, as appropriate, based on the firms’ comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. 
At that time we will provide copies of this report to the Chairman and Ranking Member, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman, House Committee on Financial Services; the Chairman, House Subcommittee on Capital Markets, Insurance, and Government Sponsored Enterprises, Committee on Financial Services; other interested committees; and the Chairman of the Securities and Exchange Commission (SEC). We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our objectives were to (1) identify potential conflicts of interest that exist with proxy advisory firms and the steps that the Securities and Exchange Commission (SEC) has taken to oversee these firms; (2) review the factors that might impede or promote competition in this industry; and (3) analyze institutional investors’ use of proxy advisory services to help vote proxies and the influence proxy advisory firms may have on proxy voting. To determine the types of potential conflicts of interest that could arise in the proxy advisory industry, we conducted a literature review and examined studies relating to potential conflicts that may arise in this industry. Further, we interviewed various professionals with knowledge of the proxy advisory industry, including industry experts, academics, industry association representatives, and proxy advisory firm representatives, as well as institutional investors and officials at SEC. 
We selected these professionals based, in part, on literature searches we conducted on topics relating to proxy advisory and corporate governance services, as well as referrals by several of the professionals we met with. The professionals we spoke with represent a wide range of perspectives, and include experts from academia, business, government, and professional organizations. We did not attempt to assess any of the proxy advisory firms’ conflict mitigation policies or procedures and, therefore, did not come to any conclusions about the adequacy of these policies or procedures. To gain an understanding of SEC’s oversight of proxy advisory firms, we reviewed relevant investment adviser regulations and examinations conducted by SEC since 2000 and interviewed agency officials. We did not attempt to assess the adequacy of SEC’s oversight. To identify the factors that might impede or promote competition in this industry, we reviewed the relevant literature and examined studies relating to the level of competition in the industry, and we spoke with various industry professionals. We did not attempt to evaluate the level of competition in this industry and, therefore, did not come to any conclusions about the extent to which competition exists. Finally, to explore institutional investors’ use of proxy advisory services to help vote proxies and the influence proxy advisory firms may have on proxy voting, we conducted structured interviews with 31 institutional investors selected randomly by type, including mutual funds, corporate pension funds, government pension funds, and union pension funds, as well as asset management institutions. Our sample included several of the largest institutional investors and was derived from Standard & Poor’s Money Market Directories (January 2006). The sample consisted of a population of mutual funds and pension funds with over $1 billion in assets, and included large and small institutional investors from each investor type. 
We defined “large” and “small” institutional investors as the top and bottom 15 percent of each institutional investor type. In total, these large and small institutional investors accounted for over 72 percent of assets under management held by mutual funds and pension funds with over $1 billion under management. Although we randomly selected these institutional investors, the size of the sample was small and may not necessarily be representative of the universe of institutional investors. As a result, we could not generalize the results of our analysis to the entire population of institutional investors. We conducted structured interviews with 20 large and 11 small institutional investors. Initially, we had contacted a total of 126 mutual funds and pension funds that were randomly selected from our sample of institutional investors and 20 (13 large and 7 small institutions) reported using proxy advisory firm services and agreed to participate in our structured interviews. The other 106 institutional investors we had initially contacted declined to participate in the structured interviews for several reasons. In particular, many of these institutions said that they do not vote proxies themselves, but rather hire asset management institutions to both manage their investment portfolios and vote proxies on their behalf. We conducted interviews with 11 (7 large and 4 small institutions) of these asset management institutions, which were referred to us by several of the pension funds we had initially contacted. The results of these asset manager interviews are included among the total of 20 large and 11 small institutional investors that we interviewed. In addition, some of the 106 institutional investors declined to participate because they vote proxies themselves or do not vote proxies at all, while others refused to participate or could not be reached. 
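The size-stratification rule described above, defining "large" and "small" as the top and bottom 15 percent of each investor type by assets, can be sketched in code. This is an illustrative reconstruction only; the fund names, asset figures, and sample sizes below are hypothetical and do not reflect GAO's actual sampling frame or procedure.

```python
import random

def classify_by_size(institutions, fraction=0.15):
    """Split one investor type into 'large' (top 15 percent by assets)
    and 'small' (bottom 15 percent), mirroring the report's definition.
    Each institution is a (name, assets_in_billions) pair."""
    ranked = sorted(institutions, key=lambda inst: inst[1], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k], ranked[-k:]

def draw_sample(frame, n_per_group, seed=0):
    """Randomly pick institutions from the large and small groups
    of every investor type in the sampling frame."""
    rng = random.Random(seed)
    sample = {}
    for investor_type, institutions in frame.items():
        large, small = classify_by_size(institutions)
        sample[investor_type] = {
            "large": rng.sample(large, min(n_per_group, len(large))),
            "small": rng.sample(small, min(n_per_group, len(small))),
        }
    return sample

# Hypothetical frame: 20 pension funds with assets of $1-20 billion.
frame = {"pension fund": [(f"Fund{i:02d}", float(i)) for i in range(1, 21)]}
picks = draw_sample(frame, n_per_group=2)
```

With 20 funds and a 15 percent cutoff, each group holds the three largest or three smallest funds, and the seeded random draw then selects which institutions in each group to contact.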
In our structured interviews with the 31 institutional investors, we spoke with officials from the organizations who are responsible for proxy voting activities. We asked these officials a variety of questions relating to their institutions’ policies on proxy voting and use of proxy advisory firms. Further, we asked the officials to comment on potential conflicts of interest associated with proxy advisory firms, steps taken to mitigate such potential conflicts, and the level of competition in the proxy advisory industry. Finally, we spoke with various industry professionals discussed earlier to gain their perspectives on the influence of proxy advisory firms. We could not identify any studies that comprehensively measured the influence that these firms have on proxy voting. We conducted our work in Washington, D.C., between September 2006 and June 2007 in accordance with generally accepted government auditing standards. In addition to the above contact, Wes Phillips, Assistant Director; Emily Chalmers; Rudy Chatlos; Eric Diamant; Fred Jimenez; Yola Lewis; and Omyra Ramsingh made key contributions to this report.
At annual meetings, shareholders of public corporations can vote on various issues (e.g., mergers and acquisitions) through a process called proxy voting. Institutional investors (e.g., mutual funds and pension funds) cast the majority of proxy votes due to their large stock holdings. In recent years, concerns have been raised about a group of about five firms that provide research and recommendations on proxy votes to their institutional investor clients. GAO was asked to report on (1) potential conflicts of interest that may exist with proxy advisory firms and the steps that the Securities and Exchange Commission (SEC) has taken to oversee these firms; (2) the factors that may impede or promote competition within the proxy advisory industry; and (3) institutional investors' use of the firms' services and the firms' potential influence on proxy vote outcomes. GAO reviewed SEC examinations of proxy advisory firms, spoke with industry professionals, and conducted structured interviews with 31 randomly selected institutional investors. GAO is not making any recommendations. Various potential conflicts of interest can arise at proxy advisory firms that could affect vote recommendations, but SEC has not identified any major violations in its examinations of such firms. In particular, the business model of the dominant proxy advisory firm--Institutional Shareholder Services (ISS)--has been the most commonly cited potential conflict. Specifically, ISS advises institutional investors how to vote proxies and provides consulting services to corporations seeking to improve their corporate governance. Critics contend that corporations could feel obligated to retain ISS's consulting services in order to obtain favorable vote recommendations. However, ISS officials said they have disclosed and taken steps to mitigate this potential conflict. 
For example, ISS discloses the potential conflict on its Web site and the firm's policy is to advise clients of relevant business practices in all proxy vote analyses. ISS also maintains separate staff who are located in separate buildings for the two businesses. While all institutional investors GAO spoke with that use ISS's services said they are satisfied with its mitigation procedures, some industry analysts continue to question their effectiveness. SEC conducts examinations of advisory firms that are registered as investment advisers and has not identified any major violations. Although new firms have entered the market, ISS's long-standing position has been cited by industry analysts as a barrier to competition. ISS has gained a reputation for providing comprehensive services, and as a result, other firms may have difficulty attracting clients. Proxy advisory firms must offer comprehensive coverage to compete and need sophisticated systems to provide the services clients demand. But firms interested in entering the market do have access to much of the information needed to make recommendations, such as publicly available documents filed with SEC. Competitors have attempted to differentiate themselves from ISS by, for example, providing only proxy advisory services and not corporate consulting services. While these firms have attracted clients, it is too soon to tell what their ultimate effect on enhancing competition will be. Among the 31 institutional investors GAO spoke with, large institutions reportedly rely less than small institutions on the research and recommendations offered by proxy advisory firms. Large institutional investors said that their reliance on proxy advisory firms is limited because, for example, they have in-house staff to assess proxy vote issues and only use the research and recommendations offered by proxy advisory firms to supplement such research. 
In contrast, small institutional investors have limited resources to conduct their own research and tend to rely more heavily on the research and recommendations offered by proxy advisory firms. The fact that large institutional investors cast the great majority of proxy votes made by institutional investors and reportedly place relatively less emphasis on advisory firm research and recommendations could serve to limit the firms' overall influence on proxy voting results.
The ManTech Program is designed to enable DOD to develop advanced technologies to use in manufacturing weapon systems. Such technologies, in turn, should reduce weapon system costs and improve quality. ManTech projects address the development of technology in areas such as metals, composite materials, electronics, and munitions, as well as technology to sustain weapons systems. The users of the ManTech Program are service and DLA managers responsible for the development of new weapons systems and for the repair, maintenance, and overhaul of fielded systems. However, the projects are executed through agreements or contracts with several types of organizations, including defense contractors, government facilities, suppliers, consortia, centers of excellence, academia, and research institutes. The military services and DLA execute the ManTech Program under the general direction of the Director, Defense Research and Engineering, Office of the Deputy Under Secretary of Defense (Science & Technology), Office of Technology Transition. Each component has established a ManTech office within its organization to set policies and procedures for operating its ManTech program and for determining which projects to fund. DOD established the Joint Defense Manufacturing Technology Panel, staffed by service and DLA ManTech office personnel, to set program objectives, promote effective integration and program management, conduct joint planning, and oversee program execution. The panel reports to and receives taskings from the Director of Defense Research & Engineering on manufacturing technology issues of multiservice concern and application. The panel organized the program into subpanels to serve as focal points for specific technology areas. ManTech Program appropriations have fluctuated significantly over the past several years, and annually since fiscal year 1991, the Congress has appropriated more funds to the program than the services requested in the Presidents' budgets. 
The funding trends for the program since fiscal year 1991 are shown in figure 1. In addition, funding by DOD component has also fluctuated. Figure 2 shows the funding for the services from fiscal years 1997 to 2001. Users in the military services and DLA look to the ManTech Program to help meet certain needs related to weapons systems they are responsible for, such as developing technologies, products and processes that will reduce the total cost and improve the manufacturing quality of their systems. Users reported to us that the ManTech projects we selected in our analysis were generally addressing their needs. In addition, the military services and DLA have processes in place that include users in the project identification and selection process. Such processes increase the likelihood that projects will meet user needs. However, the extent to which some needs are being met is limited by factors related to each program, such as the amount of funding available. During fiscal years 1999 and 2000, DOD had a total of 234 active ManTech projects valued at about $372 million. From that list, we selected 52 projects in the DOD components valued at $206 million and discussed with users whether those projects were responding to their needs. These users told us that the ManTech Program is generally meeting their needs. The projects we selected resulted in improvements ranging from a project that developed new technology to reduce the time and cost required to produce submarine and surface ship propellers; to a project that increased the reliability of electrical circuits used in missile systems by protecting them against dirt and moisture; to a project that enabled the Air Force to replace 83 parts in its F-119 engine with one part and reduce the weight of the engine by 54 pounds. By implementing such projects, officials from the military services and DLA told us that they were able to save tens of millions of dollars. 
Table 1 provides detailed examples of projects that users reported to us met their needs. Congress has consistently provided more funding for DOD's ManTech Program than requested in the President's budget. For example, in fiscal years 2000 and 2001, the Army received an additional $66.5 million in ManTech funds, of which $45.5 million, or nearly 70 percent, was designated for the Army's ManTech munitions efforts. These efforts included such projects as developing a more cost-effective and safer manufacturing process for an advanced explosive compound. The Congress believed such efforts had not received sufficient funds in the past. The extent to which the ManTech Program meets users' needs is due partly to the process by which projects are identified and selected for funding. Furthermore, the statute requires the participation of the prospective technology users in establishing requirements for advanced manufacturing technology. The services and DLA have different planning cycles and criteria for project selection. However, they all have processes that include users in the identification and selection of projects. The processes generally include steps to determine and consolidate users' needs, select the projects to be funded, and perform the work. The following figure depicts the generic ManTech project identification and selection process. We found that the number of projects selected for inclusion in the ManTech Program differs from the number proposed because of funding limitations. Most of the funding each year is allocated to projects already underway that require multiyear funding. Only a few proposed projects are selected as new starts. Table 2 shows the number of projects proposed and selected for fiscal year 2001. Even though the services and DLA employ different types of selection mechanisms and criteria, they all include users in this process. 
For example, the Army and the Navy annually solicit ideas for projects from the major subordinate commands where weapons systems are managed. The Air Force encourages users to submit ideas for projects on a continuing basis. All three services require that prospective users of the technology endorse a project before it can be considered for funding. DLA relies on regular dialogue with its supply service centers to raise issues related to manufacturing technology for the programs for which it is responsible. Table 3 further details how the services and DLA identify, select, and fund their projects. Some factors limit the extent to which the services and DLA can respond to certain needs. Those limitations include canceling some projects that have not yet been started, terminating projects already underway, and postponing projects already approved for funding because of insufficient funding. For example, the Navy conducts its program through a network of Centers of Excellence and allocates program funding based on what each center received in the past. This strategy helps all of the centers remain viable through the life of their contracts, but demands for projects at a particular center in any given year may be greater than funding at that center. As a result, some projects may not be funded, and some users' most urgent ManTech needs may not be met. For example, for fiscal year 2001, two lower-priority Naval Sea Systems Command projects were selected for funding because the command's higher-priority projects were for Centers of Excellence with insufficient funds to meet all demands. Also, several Army ManTech officials and one ManTech official in the Office of the Under Secretary of Defense expressed concern to us about the Army's requirement for a program manager cost share on certain projects and a validated cost analysis on all projects. 
Two of the officials believed that there were projects that would benefit Army weapons systems but would not be selected for funding because (1) it was not possible to obtain a program manager cost share, or (2) a validated cost analysis could not be done for projects with environmental, health, or safety benefits. According to the officials, these projects would help meet user needs by reducing the total cost of ownership or improving the quality of weapons systems. However, our review of a number of Army projects did not reveal any that fell into these categories. Another Army ManTech official and an official from the Office of the Under Secretary of Defense believed that validated cost analyses served a useful purpose in weeding out projects without measurable financial benefits. One official expressed concern about the extent to which the Army relies on validated cost analyses to select projects for funding. The other official did not think the cost analysis was the best or only way to screen projects. However, neither official had alternative suggestions. Additionally, Air Force ManTech officials expressed concern that users' future needs may not be met to the same extent as they have been in recent years. This is because the Air Force Materiel Command may have to absorb a budget shortfall of $100 million in science and technology funding, which includes the ManTech Program. As a result, the Materiel Command proposes reducing the Air Force ManTech Program by more than a quarter, or $77.6 million in total, over the 5 years from fiscal year 2003 through 2007. According to ManTech managers, the Air Force may have to terminate some ongoing projects, cancel planned projects, or both, to address the funding shortfall. For the most part, the services and DLA awarded work performed under the ManTech Program using competitive procedures. Of the 36 contracting actions we reviewed, 10 were awarded without competition. 
In each case, there was a documented justification to award the work on a sole source basis. Table 4 further illustrates the extent to which the services and DLA award their projects competitively and details the rationale for specific sole source awards. DOD is not managing the ManTech Program as efficiently and effectively as possible. Specifically, it is not conducting as many joint projects as it could and therefore is missing opportunities to leverage the limited funding available for ManTech projects. Additionally, DOD does not effectively measure the program’s success. Joint projects are those that are jointly funded; have planned implementation benefiting more than one component; or are managed with joint decision-making. These projects allow the services and DLA to leverage their programs by sharing the financial and managerial burdens for projects that can benefit more than one defense component. This is especially important given the limited ManTech budget and the small number of new projects each year that are approved for funding. For example, one currently funded joint project is expected to achieve affordability goals for forged components used on fighter aircraft. The project is expected to benefit the Joint Strike Fighter, the Navy’s F/A-18, and the Air Force’s F-22. The Navy’s National Center for Excellence in Metalworking Technology is managing this project and both the Navy and the Air Force are providing ManTech funds. Another project is expected to achieve significant cost reductions by further developing composite friendly aircraft designs, simulation tools, and material and manufacturing processes. The Air Force, the Navy, and the Army are contributing funds for this project. In fiscal year 2001, joint projects represented 16 of 124 projects, or only 13 percent of all projects reviewed last year. Another 84 projects, or 68 percent, had potential to benefit more than one DOD component, but were not otherwise joint projects. 
For example, one project would improve, demonstrate, and implement a process for coating electrical circuits to seal them against dirt and moisture, which would increase the reliability of the circuits. This Army project would benefit a number of Army missile systems, such as the Javelin and the Patriot Advanced Capability-3, and the Program Executive Office for Army Tactical Missiles will contribute $750,000 over a 4-year period. In addition, the project could benefit various Air Force and Navy missile systems. Also, according to the Navy ManTech Director, more DOD-wide benefits could accrue through more joint participation in the Best Manufacturing Practices Center of Excellence. The objective of the center is to improve the quality, reliability, and performance of the U.S. defense industrial base. The center identifies and disseminates best practices used by industry to foster technology transfer and improve the competitiveness of the industrial base, thereby improving cost, schedule, and product performance. The Associate Director, Manufacturing Technology & Affordability, in the Office of the Deputy Under Secretary of Defense (Science & Technology), Office of Technology Transition, agreed that more joint programs would help the services and DLA leverage their funding and would facilitate the transfer of technology resulting from ManTech efforts. The Joint Defense Manufacturing Technology Panel, the organization DOD has charged with the joint oversight of the ManTech Program, recognizes the importance of jointly funded and managed programs. Annual reviews of ongoing projects conducted by various subpanels include, among other things, identification of the degree to which all projects are joint. Current guidance does not require that projects already funded and in process be reviewed for joint participation, but the panel is revising the guidance to include a review of projects that are being considered or have been selected for funding but have not yet started. 
However, the draft guidance states that these types of projects would not be rated for their degree of jointness. Proposed topics for review would include a discussion of competing technologies or approaches and related work underway or completed, but the review would stop short of identifying potential projects for joint funding or management. DOD does not know the full extent of the success of the ManTech Program because it does not track the outcomes past the initial implementation. The statute requires that DOD prepare an annual report for the Congress that includes, among other things, an assessment of the effectiveness of the ManTech Program, including a description of all completed projects and the plans for, and status of, implementing the technologies and processes being developed under the program. For each completed project, the report provides the objective, the completion date, the amount of ManTech funding for the year, the potential beneficiaries, the implementation site, and the expected return on the investment in terms of future cost avoidance. Nevertheless, while the report responds to a congressional requirement, it falls short of validating the long-term benefits predicted for the ManTech Program, and DOD currently lacks a methodology and process for doing so. The ManTech Program could be assessed by providing contractors with a financial incentive to track and report project results or by evaluating project proposals based on a contractor's plans to track and report on implementation. In addition, DOD could periodically commission an independent survey or study. An external review of the ManTech Program in 1998 stated that while the data on the return on investment for selected projects were impressive, DOD should seek review by an independent third party of projects at the service and agency level. 
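The return-on-investment figure the annual report cites, expected future cost avoidance relative to the ManTech investment, could in principle be validated against tracked outcomes. A minimal sketch of that comparison, using hypothetical dollar figures since the report does not publish per-project data here:

```python
def roi(cost_avoidance, investment):
    """Return on investment expressed as a ratio:
    (cost avoidance - investment) / investment."""
    if investment <= 0:
        raise ValueError("investment must be positive")
    return (cost_avoidance - investment) / investment

# Hypothetical project: a $2 million ManTech investment projected to
# avoid $10 million in production costs; later tracking of the fielded
# process shows $6.5 million actually avoided.
projected = roi(cost_avoidance=10.0, investment=2.0)  # 4.0, i.e., 4:1
realized = roi(cost_avoidance=6.5, investment=2.0)    # 2.25
```

Comparing the projected and realized ratios is the validation step the report says DOD lacks; without tracked cost-avoidance data, only the projected figure exists.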
By tracking and validating the long-term benefits of the program, DOD would be able to measure the actual return on investment of a particular project. The department would also know what technologies had been successfully transferred and the extent to which the ManTech Program improved the quality of weapons systems. Without soliciting an independent review or developing a standard for quantifying benefits, DOD cannot be sure that the ManTech Program is providing the financial benefits that have been estimated or that users' long-term needs are being met. Further, it will not have a reliable basis for making decisions on its budgetary priorities and tradeoffs. The Navy, Army, Air Force, and DLA all have processes that include users in establishing requirements for ManTech programs. Each service and DLA, however, separately selects, funds, and implements its ManTech program. While users report that the program has been meeting their technology needs, some ManTech officials expressed concern that funding was insufficient. At the same time, however, DOD has not been taking full advantage of opportunities to leverage funding by conducting joint projects. The Joint Defense Manufacturing Technology Panel's effort to revise its guidance on reviewing planned ManTech projects should provide an opportunity to identify candidates for joint funding and implementation. Finally, DOD does not currently have an effective means to measure the results of completed projects. Without a means for determining project benefits, DOD will not know whether the ManTech Program is meeting the long-term needs of users. DOD and the services need to build on existing efforts to identify and conduct joint ManTech projects. The Joint Defense Manufacturing Technology Panel's proposal to get involved earlier and review the services' planned projects is a constructive step toward facilitating more joint projects. 
We recommend that DOD develop additional measures to coordinate the services' planning cycles, budgets, and project selection criteria to better position them to identify and conduct joint projects. We also recommend that DOD develop a more systematic means for determining the results of ManTech projects. This may be done, for example, by (1) using award or incentive fees to motivate contractors to track ManTech benefits over time, (2) including a requirement to track and report implementation as an evaluation criterion for awarding ManTech work, or (3) conducting or contracting for periodic surveys or studies of the industrial base to quantify the impact of ManTech projects. In written comments on a draft of this report, DOD partially concurred with our first recommendation on the need to build on existing efforts to conduct joint ManTech projects and concurred with our second recommendation on the need to develop a more systematic means to determine the results of ManTech projects. With respect to the first recommendation, DOD emphasized that the Joint Defense Manufacturing Technology Panel already provides an effective model for how to plan, coordinate, execute, fund, and implement joint ManTech activities and that this warrants positive recognition. DOD further stated that, in comparison to other DOD programs that are overseen at the Office of the Under Secretary of Defense level but funded by the military services and defense agencies, the implementation of "only" 16 joint projects should be viewed in a more positive context. However, DOD acknowledged that more could be done to improve the process for developing joint projects. Toward that end, the panel is modifying its process and will review projects that have not yet started or that have recently begun and will rate these projects on the degree to which they are joint. 
In addition, DOD stated that the panel will review the services' and DLA's planning cycles to identify opportunities for more effective coordination of planned projects. We agree that the Joint Defense Manufacturing Technology Panel has helped to improve the coordination of the services' and DLA's programs and facilitate the implementation of certain joint projects. For example, the 16 jointly funded active projects are evidence that DOD does jointly plan and conduct ManTech projects. However, we continue to believe that additional opportunities exist for pursuing joint projects. This is reflected in the fact that the panel identified another 84 active projects that could benefit more than one DOD component but were not jointly funded, planned, or managed. The panel's new review process is a step in the right direction to facilitate more joint projects. However, as with the old process, projects will be reviewed for jointness only after the services and DLA have already selected them for funding. This could limit the extent to which a project can be jointly planned, funded, or managed, since it is likely that the requirements have already been determined. The action initiated by the Joint Defense Manufacturing Technology Panel to review the components' planning cycles is also a positive measure, provided that the results are used to facilitate more joint planning earlier in the process. DOD also provided technical comments that we incorporated into the report as appropriate. DOD's comments appear in appendix II. We will send copies of the report to the Chairmen and Ranking Minority Members of other appropriate congressional committees; the Secretary of Defense; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. Please contact me at (202) 512-4841 or John Oppenheim at (202) 512-3111 if you or your staff have any questions concerning this report. 
Other major contributors to this report were Myra Watts Butler, Cristina Chaplain, Dayna Foster, Gaines Hensley, and Stephanie May. To determine if projects funded by the program are responsive to the needs of the military services and the Defense Logistics Agency, we reviewed the processes, policy memoranda, and guidance for identifying manufacturing needs, prioritizing those needs, and presenting them for consideration for funding at both the systems command ManTech program director level and the weapon system program office level. We discussed various manufacturing technology-related issues, including oversight responsibilities, with officials from the Office of the Deputy Under Secretary of Defense (Science & Technology), Office of Technology Transition; the Office of the Under Secretary of Defense for Acquisition and Technology, Deputy Under Secretary of Defense for Science and Technology; the Office of the Deputy Assistant Secretary of the Army for Research and Technology; the Office of Naval Research; and the Office of the Assistant Secretary of the Air Force (Acquisition), Science, Technology, and Engineering. At the ManTech program director level, we reviewed memoranda, guidance, and processes for identifying manufacturing needs, prioritizing those needs, and forming projects. We also met with management officials responsible for implementing the ManTech Program. For example, we met with officials from the Office of Naval Research, Industrial and Corporate Programs Detachment, Manufacturing Technology Program Office, in Arlington, Virginia, and Philadelphia, Pennsylvania; the Army Research Laboratory, Aberdeen Proving Ground, Maryland; the Air Force Research Laboratory, Materials and Manufacturing Directorate, at Wright Patterson Air Force Base, Ohio; and the Defense Logistics Agency at Fort Belvoir, Virginia. 
To further assess users' satisfaction, we spoke directly with ManTech users concerning their involvement in the ManTech Program and whether the projects were meeting their needs. However, we did not validate reported successes of the program. We identified the users from a selected number of active projects in fiscal years 1999 and 2000 for the Navy, Army, Air Force, and Defense Logistics Agency. Specifically, for the Navy, we met with officials of various program executive offices and program managers from the Naval Sea Systems Command at Arlington, Virginia; the Naval Air Systems Command at Patuxent River, Maryland; and the Marine Corps Systems Command at Quantico, Virginia. For the Army, we met with representatives from several missile and aviation weapon systems at the Army Aviation and Missile Command located at Redstone Arsenal, Alabama; the Army Armaments Research and Development Center in Picatinny Arsenal, New Jersey; the Army Materiel Command in Alexandria, Virginia; the Air and Missile Defense Program Executive Office in Huntsville, Alabama; the Aviation Program Executive Office at Redstone Arsenal, Alabama; and the Ground Combat Support Systems Program Executive Office at Picatinny Arsenal, New Jersey. For the Air Force, we met with representatives from the Joint Air-to-Surface Standoff Missile Program and the Joint Direct Attack Munitions Program at Eglin Air Force Base, Florida; and the Joint Strike Fighter Program, the F-119 Engine Program, the Engine Directorate, and the Air Force Materiel Command logistics office at Wright-Patterson Air Force Base, Ohio. To determine whether work being performed under the ManTech Program is being awarded on a competitive basis, we first reviewed the guidance and policy for competitive awards. We interviewed contracting officials as well as engineers who manage ManTech projects to obtain their views concerning specific projects.
To assess the degree to which projects are awarded competitively, we randomly selected a sample of ManTech projects from the above list of fiscal years 1999 and 2000 projects for the Army, Navy, Air Force, and DLA based on levels of funding, length of the projects, and varying types of technologies and weapon systems. We then reviewed the contract files to determine whether competitive award procedures were used. Because of the way the Navy is organized, we also selected five of nine centers of excellence and reviewed their policies, guidance, and processes on competing projects. Specifically, we visited the Center of Excellence for Composites Manufacturing Technology (South Carolina Research Authority) in North Charleston, South Carolina; the Electronics Manufacturing Productivity Facility (American Competitiveness Institute) in Philadelphia, Pennsylvania; the Navy Joining Center (Edison Welding Institute) in Columbus, Ohio; the National Center for Excellence in Metalworking Technology (Concurrent Technologies Corporation) in Johnstown, Pennsylvania; and the Gulf Coast Region Maritime Technology Center (University of New Orleans College of Engineering) in New Orleans, Louisiana. We obtained the legal advice of our General Counsel on questionable sole-source projects.
The Department of Defense (DOD) established the Defense Manufacturing Technology Program to develop and apply advanced manufacturing technologies to reduce the total cost and improve the manufacturing quality of weapon systems. By maturing and validating emerging manufacturing technology and transferring it to the factory floor, the program bridges the gap between technology invention and industrial application. The program, which has existed in various forms since the 1950s, received about $200 million in funding in fiscal year 2001. DOD's Office of the Under Secretary of Defense provides guidance and oversight to the Army, Navy, Air Force, and the Defense Logistics Agency (DLA), but each component establishes its own policies and procedures for running the program and determines which technologies to develop. Users told GAO that the program was responding to their needs by developing technologies, products, and processes that reduced the cost and improved the quality of weapon systems. To the extent practicable, DOD used competitive procedures to award the work done under the program. The Army, Air Force, and DLA competitively awarded most of the projects GAO reviewed for fiscal years 1999 and 2000, and the remaining noncompetitive awards were based on documented sole-source justifications. However, DOD is missing opportunities to conduct more joint projects and lacks effective measures of program success. Joint projects would enable the services to leverage limited funding and integrate common requirements and approaches for developing manufacturing technologies.
Prostate cancer patients choose among multiple treatments that are often considered equally appropriate but can have different risks and side effects. The treatments can also vary in cost, with IMRT being one of the most costly options. Cancer of the prostate—a gland located at the base of the urinary bladder—is the second most common cancer among men in the United States, with approximately 1 in 6 men receiving a diagnosis of prostate cancer in his lifetime. In 2010, there were an estimated 218,000 new cases of prostate cancer and approximately 32,000 deaths due to prostate cancer. Most men in the United States are diagnosed with prostate cancer as a result of an abnormal digital rectal exam or prostate-specific antigen test. After an abnormal test result, beneficiaries often undergo a prostate biopsy, during which a provider—typically a urologist—removes small amounts of prostate tissue. Another provider then examines the tissue to determine whether a beneficiary has prostate cancer. IMRT is one of multiple treatment options available to patients with prostate cancer. The type of treatment a prostate cancer patient chooses depends on a number of factors, such as life expectancy, overall health, personal preferences, provider recommendations, and the clinical characteristics of a patient's prostate cancer. For many men, multiple treatment options are considered equally appropriate. For instance, IMRT, brachytherapy, and a radical prostatectomy are all among the treatments considered appropriate for men with low-risk prostate cancer. Even though such treatments are often considered equally appropriate, the risks and side effects for each treatment are different. Compared to IMRT, prostate cancer patients undergoing a radical prostatectomy have a higher rate of short-term urinary problems and erectile dysfunction but do not face bowel-related side effects, which are experienced by some men undergoing IMRT.
Compared to IMRT, prostate cancer patients undergoing brachytherapy have lower rates of bowel-related side effects, but about 1 in 10 patients undergoing brachytherapy experience acute urinary retention. Also, several studies have reported that physician recommendations play a large role in influencing a patient's decision, and another study found that the use of a particular prostate cancer treatment decreased after its payment was reduced, suggesting that financial incentives may have influenced treatment decisions. Currently, providers who self-refer IMRT services are generally not required to disclose to their patients that they have a financial interest in the service. Common prostate cancer treatments are summarized in table 1. Medicare reimbursement rates for IMRT delivery services varied over time, and rates are not directly comparable between settings. Beneficiaries receive approximately 45 separate IMRT delivery services over several weeks during a course of IMRT to treat prostate cancer. Medicare beneficiaries predominantly receive IMRT delivery services in two settings—physician offices or hospital outpatient departments. For services performed in hospital outpatient departments, the Medicare reimbursement per IMRT delivery service increased from approximately $319 to $421 from 2006 to 2010 and then to $484 by 2013. For services performed in physician offices, the reimbursement rate decreased from approximately $690 to $511 from 2006 to 2010 and then to $406 by 2013. The reimbursement rates for IMRT delivery services performed in physician offices and hospital outpatient departments are not directly comparable. For instance, if an IMRT delivery service was performed in a hospital outpatient department, the payment includes the technical component for image guidance, which is almost always furnished with an IMRT service. In physician offices, image guidance is reimbursed separately.
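The reimbursement rates for the two settings moved in opposite directions over 2006 to 2013. A quick calculation using the approximate per-service rates quoted above makes the net change explicit (a minimal sketch; the `pct_change` helper is ours and is not part of the report or the Medicare fee schedules):

```python
def pct_change(start, end):
    """Percentage change from start to end, rounded to one decimal place."""
    return round((end - start) / start * 100, 1)

# Hospital outpatient departments: approximately $319 (2006) to $484 (2013).
hospital = pct_change(319, 484)   # roughly +51.7 percent

# Physician offices: approximately $690 (2006) to $406 (2013).
office = pct_change(690, 406)     # roughly -41.2 percent

print(hospital, office)
```

The sketch simply confirms the direction and rough size of each change; the quoted dollar figures are themselves approximations, so the percentages should be read the same way.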
Researchers have consistently found that courses of IMRT, which include IMRT delivery and other services, are more costly than other treatments for prostate cancer, with the exception of proton therapy. Researchers have found IMRT to be more costly despite differences among studies in design and methodology, such as the services counted toward total treatment costs, the duration of time during which costs are studied (e.g., first-year costs versus lifetime costs), and the patient population studied. One recent study found that, among men diagnosed with prostate cancer in 2005, the cost to Medicare per course of treatment was approximately $14,000 to $15,000 higher for men receiving IMRT ($31,574) than for men who received brachytherapy ($17,076) or a prostatectomy ($16,469 or $16,762, depending on the type of prostatectomy). Despite the 2013 reduction in the Medicare reimbursement rate for IMRT delivery services performed in physician offices, we found that IMRT remains substantially more expensive than other treatments for prostate cancer, with the exception of proton therapy. We found that the number of and expenditures for Medicare prostate cancer–related IMRT services performed by self-referring groups grew rapidly from 2006 through 2010. In contrast, the number of and Medicare expenditures for prostate cancer–related IMRT services performed by non-self-referring groups declined over the period. From 2006 through 2010, the number of prostate cancer–related IMRT services performed by self-referring groups increased rapidly, while the number performed by non-self-referring groups decreased. The number of prostate cancer–related IMRT services performed by self-referring groups increased from approximately 80,000 to 366,000, an annual growth rate of 46 percent (see fig. 1). Consistent with that growth, the number of self-referring groups also increased rapidly over the period.
In contrast, the number of prostate cancer–related IMRT services performed by non-self-referring groups in physician offices decreased from approximately 490,000 to 466,000, an annual decrease of 1 percent. The rapid increase in prostate cancer–related IMRT services performed by self-referring groups coincided with several other trends from 2006 through 2010. First, the number of prostate cancer–related IMRT services performed in hospital outpatient departments and by self-referring and non-self-referring groups all grew from 2006 to 2007. After 2007, the rapid increase in prostate cancer–related IMRT services performed by self-referring groups coincided with declines in these services within hospital outpatient departments and among non-self-referring groups. Overall utilization of prostate cancer–related IMRT services therefore remained relatively flat across these settings after 2007, indicating a shift away from hospital outpatient departments and non-self-referring groups and toward self-referring groups. (See app. II for information on the trends in IMRT services performed in hospital outpatient departments.) Second, while the number of prostate cancer–related IMRT services provided to Medicare fee-for-service (FFS) beneficiaries has stabilized since 2007, the percentage of newly diagnosed Medicare beneficiaries receiving IMRT has increased. While seemingly contradictory, these two trends occurring simultaneously can in part be explained by (1) a decrease in the total number of Medicare FFS beneficiaries from 2006 through 2010 and (2) a decrease in the number of men newly diagnosed with prostate cancer. Third, the increasing percentage of prostate cancer patients receiving IMRT may partially be explained by a shift from an older form of EBRT—3D-CRT—to a newer form—IMRT—though the largest effect of this substitution likely occurred earlier in our study period, as IMRT largely replaced 3D-CRT by 2007.
In 2010, urologists performed approximately 89.1 percent of office visits billed under limited-specialty groups, compared to 5.7 percent for multispecialty groups. Additionally, the average number of specialties that billed office visits under limited-specialty groups in 2010 was 3.3, compared to 36.2 for multispecialty groups. The number of prostate cancer–related IMRT services performed by limited-specialty self-referring groups increased from approximately 56,000 to 343,000. In contrast, the number of such services performed by multispecialty self-referring groups, which comprised a large number of different provider types, declined slightly, going from approximately 23,000 to 22,000. Medicare expenditures for prostate cancer–related IMRT services performed by self-referring groups increased rapidly from 2006 through 2010, while decreasing for services performed by non-self-referring groups. Specifically, expenditures for prostate cancer–related IMRT services performed by self-referring groups increased from $52 million to $190 million, an average increase of 38 percent a year (see fig. 3). In contrast, expenditures for prostate cancer–related IMRT services performed by non-self-referring groups in physician offices declined by an average of 8 percent a year. For comparison, expenditures for prostate cancer–related IMRT services performed in hospital outpatient departments grew an average of 7 percent a year during the period we studied. (For more information about hospital outpatient department expenditure trends, see app. II.) Self-referring providers were more likely to refer their Medicare prostate cancer patients for IMRT and less likely to refer them for other treatments when compared to non-self-referring providers. In addition, after providers began self-referring IMRT services, they substantially increased the percentage of their prostate cancer patients they referred for IMRT, in contrast to providers who did not begin to self-refer IMRT services during the same period.
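The annual growth rates cited for self-referring groups are compound rates over the four annual intervals from 2006 through 2010. The arithmetic can be checked with a short sketch (the `cagr` helper is ours, not from the report, and the inputs are the approximate totals quoted above):

```python
def cagr(start, end, periods):
    """Compound annual growth rate, as a whole-number percentage."""
    return round(((end / start) ** (1 / periods) - 1) * 100)

# Services performed by self-referring groups, 2006-2010
# (approximately 80,000 growing to 366,000 over 4 annual intervals).
services_growth = cagr(80_000, 366_000, 4)   # about 46 percent a year

# Expenditures for those services, in millions of dollars, 2006-2010
# ($52 million growing to $190 million over the same 4 intervals).
spending_growth = cagr(52, 190, 4)           # about 38 percent a year
```

Spending grew more slowly than the service count in part because the physician-office reimbursement rate per IMRT delivery service was falling over the same period.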
Self-referring providers were more likely to refer their prostate cancer patients for IMRT and less likely to refer them for other treatments compared to non-self-referring providers. Self-referring providers referred approximately 52 percent of their patients who were newly diagnosed with prostate cancer in 2009 for IMRT, while non-self-referring providers referred 34 percent of their patients for IMRT (see table 2). Self-referring providers also referred a lower percentage of their prostate cancer patients for nearly all other types of treatments compared to non-self-referring providers, with the largest differences in the percentages of patients referred for brachytherapy and radical prostatectomy. Other differences were smaller—self-referring providers were about 8 percent less likely to refer their patients for active surveillance compared to non-self-referring providers. (For alternative groupings in which beneficiaries are sorted into discrete treatment categories, see app. III.) The difference between self-referring and non-self-referring providers in the percentage of their prostate cancer patients referred for IMRT was largely due to self-referring providers who belonged to limited-specialty groups. Self-referring providers who belonged to a limited-specialty group referred approximately 52 percent of their patients diagnosed with prostate cancer in 2007 or 2009 for IMRT. In contrast, self-referring providers who belonged to a multispecialty group referred approximately 36 percent of their patients diagnosed with prostate cancer in 2007 or 2009 for IMRT, only moderately higher than the 33 percent of non-self-referring providers' patients diagnosed with prostate cancer in 2007 or 2009 who were referred for IMRT.
Differences in the percentage of prostate cancer patients referred for IMRT between self-referring and non-self-referring providers persisted after accounting for differences in age, geographic location (i.e., urban or rural), and beneficiary health, including clinical characteristics of prostate cancers for a subset of beneficiaries who lived in New York. Differences between self-referring and non-self-referring providers in the percentage of prostate cancer patients who were referred for IMRT could not be explained by differences in age. The average age when a beneficiary was diagnosed with prostate cancer was the same for patients of both self-referring and non-self-referring providers, and, regardless of their patients' ages, self-referring providers were more likely to refer their patients for IMRT compared to non-self-referring providers. The average age when a beneficiary was diagnosed with prostate cancer was 74 years old for patients of both self-referring and non-self-referring providers. Depending on the age range, self-referring providers were anywhere from 48 percent to 62 percent more likely to refer their patients for IMRT compared to non-self-referring providers. For more information about how the percentage of prostate cancer patients referred for IMRT and other treatments by self-referring and non-self-referring providers changed on the basis of the age of a beneficiary, see appendix IV. Differences between self-referring and non-self-referring providers in the percentage of prostate cancer patients who were referred for IMRT could not be explained by differences in geographic location. Self-referring providers were more likely to refer their patients for IMRT compared to non-self-referring providers, regardless of differences in geographic location. Self-referring providers were 52 percent more likely to refer their patients who lived in urban areas for IMRT compared to non-self-referring providers.
Similarly, self-referring providers were 42 percent more likely to refer their patients who lived in rural areas for IMRT compared to non-self-referring providers. Differences between self-referring and non-self-referring providers in the percentage of prostate cancer patients who were referred for IMRT could not be explained by differences in beneficiary health. Self-referring and non-self-referring providers' prostate cancer patients had a similar average health status, and self-referring providers were more likely to refer their patients for IMRT compared to non-self-referring providers, regardless of whether their patients had low-, intermediate-, or high-risk prostate cancer. Self-referring providers' patients had an average risk score—a proxy for health status—of 0.94 in 2009, and non-self-referring providers' patients had an average risk score of 0.92, indicating that the two patient populations had a similar average health status. In cases where we had information on the clinical characteristics of patients' prostate cancer, we found that self-referring providers were more likely than non-self-referring providers to refer their patients for IMRT, although the difference decreased as prostate cancer risk level increased. Specifically, self-referring providers were 91 percent, 41 percent, and 33 percent more likely than non-self-referring providers to refer patients with low-, intermediate-, and high-risk prostate cancer for IMRT, respectively. The difference in IMRT referrals made by self-referring and non-self-referring providers narrowed as patients' prostate cancer risk level increased in part because non-self-referring providers increased IMRT referrals and decreased brachytherapy referrals as cancer risk levels increased. In comparison, self-referring providers referred similarly small percentages of patients for brachytherapy for all three risk levels, and their IMRT referrals increased only moderately as their patients' risk level increased.
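The "X percent more likely" comparisons used throughout this section are relative ratios of the two groups' referral rates. For example, using the overall 2009 rates reported earlier (approximately 52 percent of self-referring providers' patients versus 34 percent of non-self-referring providers' patients), the relative difference works out to roughly 53 percent (a minimal sketch; the helper function is ours, not from the report):

```python
def relative_difference(rate_a, rate_b):
    """How much more likely group A is than group B, as a whole-number percentage."""
    return round((rate_a / rate_b - 1) * 100)

# Overall 2009 IMRT referral rates reported earlier in this section:
# self-referring providers about 52 percent, non-self-referring about 34 percent.
overall = relative_difference(52, 34)   # roughly 53 percent more likely
```

Because the quoted referral percentages are rounded, this overall figure is only approximately consistent with the age- and geography-specific differences (48 to 62 percent) reported above.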
Providers that switched from being non-self-referring to self-referring—that is, switchers—referred a greater percentage of their prostate cancer patients for IMRT after they began to self-refer (see table 3). Specifically, switchers referred 37 percent of their patients who were diagnosed with prostate cancer in 2007 for IMRT. After beginning to self-refer, switchers referred 54 percent of their patients who were diagnosed with prostate cancer in 2009 for IMRT. While providers that did not begin to self-refer—that is, self-referrers and non-self-referrers—referred different percentages of their patients who were diagnosed with prostate cancer in 2007 for IMRT, the percentages of their patients they referred for IMRT remained relatively consistent over the same period in which switchers dramatically increased the percentage of their patients they referred for IMRT. This suggests that the increase seen among switchers was likely not due to provider characteristics that were relatively stable over time or to changes in the way all providers treated prostate cancer in response to such things as changing treatment guidelines. (See app. V for more information about the percentages of beneficiaries that switchers, non-self-referring providers, and self-referring providers referred for a given treatment.) IMRT has been shown to be an effective treatment option for localized prostate cancer and allows radiation to be delivered to the tumor while minimizing damage to normal tissue. Proponents of self-referral arrangements contend that the self-referral of IMRT services does not affect clinical decision making and that patients benefit from self-referral through, for example, improved coordination among the providers who diagnose and treat patients.
However, our review indicates that Medicare providers that self-referred IMRT services—particularly those practicing in limited-specialty groups—were substantially more likely to refer their prostate cancer patients for IMRT and less likely to refer them for other, less costly treatments, especially brachytherapy or a radical prostatectomy, compared to providers who did not self-refer. The relatively higher rate of IMRT referrals among self-referring providers cannot be explained by beneficiary age, geographic location, or health. Consistent with these findings, we also found that after providers began to self-refer IMRT services they substantially increased the percentage of their prostate cancer patients they referred for IMRT, while providers that did not begin to self-refer experienced much smaller changes over the same period. Taken together, our findings suggest that financial incentives were likely a major factor driving the increase of IMRT referrals among self-referring providers in limited-specialty groups. The greater use of IMRT by self-referring Medicare providers to treat prostate cancer raises two potential concerns. First, because physician recommendations play a large role in influencing a patient’s treatment decision, a financial interest in one treatment option may diminish the role that other criteria—such as life expectancy, overall health, patient preferences, and clinical characteristics of the prostate cancer—play in the decision-making process. Despite the fact that several treatment options are often considered equally appropriate, the higher use of IMRT among providers who self-refer seems problematic because prostate cancer treatments differ in terms of their risks and side effects, such as the likelihood of developing sexual, urinary, or bowel-related side effects. 
To the extent that providers' financial interests are shaping treatment decisions, some patients may end up on a treatment course that does not best meet their individual needs. Second, because IMRT costs more than most other treatments, the higher use of IMRT by self-referring providers results in higher costs for Medicare and beneficiaries. To the extent that treatment decisions are driven by providers' financial interest and not by patient preference, these increased costs are difficult to justify. Given self-referral's potential effect on both the Medicare program and beneficiaries, it is imperative that CMS improve its ability to identify and monitor the effects of such services. CMS is not currently well positioned to address self-referring providers' financial incentive to refer their prostate cancer patients for IMRT, as it does not have a method for easily identifying such services. Without a way to identify self-referred services, such as a self-referral flag on Medicare Part B claims, CMS does not have the ongoing ability to monitor self-referral and its effects on beneficiary treatment selection and costs to both Medicare and beneficiaries. In addition, Medicare providers who self-refer IMRT services are generally not required to disclose their financial interest in IMRT. Thus, beneficiaries may not be aware that their provider has an incentive to recommend IMRT over alternative treatments that may be equally effective, have different risks and side effects, and are less expensive for Medicare and beneficiaries. Beneficiaries need to select among different prostate cancer treatment options, and beneficiary knowledge of a referring provider's financial interest in IMRT may be an important consideration in making these selections. Currently, the Department of Health and Human Services (HHS), the department that oversees CMS, lacks the authority to establish a disclosure protocol for providers who self-refer IMRT services.
To increase beneficiaries' awareness of providers' financial interest in a particular treatment, Congress should consider directing the Secretary of Health and Human Services to require providers who self-refer IMRT services to disclose to their patients that they have a financial interest in the service. We recommend that the Administrator of CMS insert a self-referral flag on its Medicare Part B claims form, require providers to indicate whether the IMRT service for which a provider bills Medicare is self-referred, and monitor the effects that self-referral has on costs and beneficiary treatment selection. We provided a draft of this report to HHS for comment. HHS provided written comments, which are reprinted in appendix VI. We also obtained oral comments from representatives of three professional associations selected because they represent stakeholders with specific involvement in prostate cancer–related IMRT services. The three associations were the American Society for Radiation Oncology (ASTRO), which represents radiation oncologists; the American Urological Association (AUA), which represents urologists; and the Large Urology Group Practice Association (LUGPA), which represents large urology group practices. We summarize and respond to comments from HHS and representatives of the three professional associations in the following sections. In its comments, HHS stated that it did not concur with our recommendation. HHS did not comment on the matter for congressional consideration or the main finding of the report—that self-referring providers, particularly those belonging to limited-specialty groups, referred a substantially higher percentage of their prostate cancer patients for IMRT.
HHS did not concur with our recommendation that CMS insert a self-referral flag on its Medicare Part B claims form, require providers to indicate whether the IMRT service for which a provider bills Medicare is self-referred, and monitor the effects that self-referral has on costs and beneficiary treatment selection. HHS stated that flagging self-referred services and tracking their effects would not address overutilization that occurs as a result of self-referral, would be complex to administer, and may have unintended consequences, which HHS did not delineate. In addition, HHS stated that the President's fiscal year 2014 budget proposal includes a provision to exclude certain services from the in-office ancillary services (IOAS) exception. To the extent that self-referral for IMRT services continues to be permitted, we believe that including an indicator or flag on the claims would be an effective way to identify and track self-referral and would give CMS the ability to analyze the effects of self-referral on utilization patterns. Furthermore, we do not believe an indicator or flag on the claims would be complex to administer, as CMS requires providers to use similar indicators to provide additional information about certain other services. On the basis of HHS's written response to our report, we are concerned that HHS does not appear to recognize the effects IMRT self-referral can have on beneficiaries and the Medicare program. HHS did not comment on our matter for congressional consideration or our key finding that self-referring providers, particularly those belonging to limited-specialty groups, referred a substantially higher percentage of their prostate cancer patients for IMRT. Given the magnitude of these findings, we continue to believe that CMS should take steps to monitor the impact that IMRT self-referral has on costs and treatment selection. HHS also provided technical comments that we incorporated as appropriate.
ASTRO representatives generally agreed with our findings but thought our recommendation and matter for congressional consideration should be stronger. They said we should recommend that Congress close the IOAS exception because the findings from the report, in combination with previous self-referral research we and others have published, indicate the necessity for such an action. An examination of the IOAS exception was beyond the scope of our work. To the extent that IMRT self-referral is still permissible, ASTRO representatives also said that inserting a self-referral flag would not be an effective way to identify self-referral. Instead, they suggested implementing reporting requirements similar to the financial transparency requirements for physician-owned specialty hospitals under PPACA and requiring self-referring providers to indicate on their Medicare provider enrollment forms their financial interest in referrals. Further, ASTRO representatives said that self-referring providers should be required to notify patients that they may receive IMRT at alternative locations and that other treatment options are available. We continue to believe that inserting a self-referral flag on Medicare Part B claims would be an effective way to track and monitor self-referral and that beneficiary awareness of their providers' financial interests is important. However, to the extent that other strategies exist that would allow CMS to increase beneficiary awareness and monitor self-referral, such efforts would be consistent with the intent of our recommendation and matter for congressional consideration. AUA representatives said we did not have sufficient evidence to link financial incentives to the increase in IMRT use among self-referring providers and disagreed with our conclusion that financial incentives for self-referring providers belonging to limited-specialty groups were likely a major factor driving the increase in the percentage of prostate cancer patients referred for IMRT.
Specifically, AUA representatives said the flat trend in the utilization of prostate cancer–related IMRT services from 2007 through 2010 indicates utilization has simply shifted from hospital outpatient departments to physician offices and that this trend undermines our conclusion that financial incentives increase IMRT use. As explained in our report, the trend in the percentage of patients newly diagnosed with prostate cancer referred for IMRT was not flat; instead, it increased over the study period. This increase occurred while the utilization of IMRT services remained about the same in part because the annual number of Medicare FFS beneficiaries who were diagnosed with prostate cancer declined by about 20 percent over our study period. In addition, we found that self-referring providers, which were predominantly from limited-specialty groups, referred a higher percentage of their Medicare FFS patients for IMRT than did other providers and that their higher IMRT referral rate could not be explained by differences in age, geographic location, or beneficiary health. As a result, we continue to believe that financial incentives were likely a major factor driving the higher IMRT referral rate of self-referring providers from limited-specialty groups. AUA representatives had several other critiques of our report. Specifically, they indicated that we did not put enough emphasis on the patient's role in choosing a treatment and expressed concern that we did not include more clinical information on patients' prostate cancer, such as information on cancer stage and grade, or include Medicare Advantage beneficiaries in our study population. We address two of these critiques in the report. Specifically, we note that patient preference is one of many factors that affect a beneficiary's treatment decision, and we include clinical information on patients' prostate cancer for a subset of beneficiaries from New York.
However, we did not include Medicare Advantage beneficiaries in our study population because Medicare Advantage plans are not required to submit claims to CMS, and, thus, we do not have detailed information on the services Medicare Advantage beneficiaries receive or the providers who refer and perform those services. Finally, AUA representatives stated that the declining percentage of self-referring providers' patients referred for brachytherapy from 2007 to 2009 could reflect a change in practice standards, as they said brachytherapy is no longer recommended as a sole treatment for intermediate- and high-risk prostate cancer. While we note that brachytherapy use has declined even among providers who do not self-refer, we do not believe that changing guidelines or the possibility of differences in guideline adherence between non-self-referring and self-referring providers could fully explain why self-referring providers refer a smaller percentage of their patients for brachytherapy. First, self-referring providers referred a substantially lower percentage of their patients for brachytherapy, even after accounting for the decline in brachytherapy use for both non-self-referring and self-referring providers from 2007 to 2009. Second, among those patients for whom we had clinical data, the biggest differences in IMRT and brachytherapy use between self-referring and non-self-referring providers were for patients with low-risk cancer, which would not be affected by the change in practice guidelines for intermediate- and high-risk prostate cancer the AUA representatives referenced. LUGPA representatives disagreed with our conclusion that financial incentives for self-referring providers—specifically those in limited-specialty groups—were likely a major factor driving the increase in the percentage of prostate cancer patients referred for IMRT.
Instead, they said patient preference and an increase in the number of self-referring providers explain the increase in IMRT utilization by self-referring providers. While we did not perform our trend analysis at the provider level, we do note in the report that the number of self-referring groups increased substantially over our study period. This corresponds with a shift in the location where patients received IMRT, from hospital outpatient departments to physician offices. However, these trends that we note do not negate our analysis of the referral patterns of self-referring providers. Specifically, self-referring providers who belonged to a limited-specialty group referred a higher percentage of their newly diagnosed prostate cancer patients for IMRT, and, thus, the increased number of self-referring providers has also resulted in a higher percentage of patients receiving IMRT. Also, LUGPA representatives said the increase in the percentage of self-referring providers' patients referred for IMRT could be due to such patients more frequently consulting with radiation oncologists before initiating treatment, which one study indicated leads to higher utilization of radiation therapy, defined as EBRT or brachytherapy. We believe it is unlikely that access to a radiation oncologist drove the differences in IMRT referrals between self-referring and non-self-referring providers because self-referring providers who belonged to a multispecialty group referred a substantially lower percentage of their patients for IMRT compared to self-referring providers who belonged to a limited-specialty group, despite the likelihood that patients in both instances had access to a radiation oncologist within the group practice. LUGPA raised several other points of concern about our review. First, LUGPA representatives said our assertion that IMRT, brachytherapy, and a prostatectomy are clinically equivalent treatments is inappropriate.
We disagree with LUGPA's characterization of our discussion of IMRT, brachytherapy, and a prostatectomy as treatment options. We recognize that these treatments are not equally appropriate for all men diagnosed with prostate cancer and do not assert that in our report. Rather, we say that these treatments are often—not always—considered equally appropriate and give an example of when they are considered equally appropriate—men with low-risk prostate cancer. We also recognize that, for any particular patient, a given treatment might not be appropriate due to considerations such as age and comorbidities. Second, LUGPA representatives said that we did not acknowledge that all sites of services have essentially identical financial incentives to perform services for which they receive compensation. They said our work showed the percentage of newly diagnosed prostate cancer patients referred for active surveillance was nearly equal between self-referring and non-self-referring providers and that this was evidence that self-referring providers treat patients based on patient choice and sound clinical decision making. We disagree with LUGPA's assertion that the percentage of newly diagnosed prostate cancer patients referred for active surveillance was nearly equal between self-referring and non-self-referring providers, as self-referring providers were approximately 8 percent less likely to refer their patients for active surveillance than were non-self-referring providers. As we note in the report, IMRT is more costly than other treatments for prostate cancer, resulting in a financial incentive for self-referring providers to refer their patients for IMRT over other treatments. We found that self-referring providers referred a higher percentage of their patients for IMRT than did non-self-referring providers and that the difference in IMRT referral rates could not be explained by variations in patient age, geographic location, or patient health status.
As a result, we continue to believe that self-referring providers’ higher IMRT referral rates are driven by a financial incentive for these providers to refer newly diagnosed prostate cancer patients for IMRT. Third, LUGPA representatives said we should have studied the use of IMRT for conditions other than prostate cancer. The use of IMRT to treat other conditions was outside the scope of our work. Finally, LUGPA representatives indicated that our estimates of 3D-CRT utilization for newly diagnosed prostate cancer patients are too low. We believe our calculation of the percentage of patients who were newly diagnosed with prostate cancer in 2009 and referred for 3D-CRT is accurate. We solicited input from multiple physician associations, including members of LUGPA, regarding the appropriate HCPCS codes to use to track 3D-CRT and examined 100 percent of claims from the Medicare Carrier and hospital outpatient department files to identify all 3D-CRT services received by newly diagnosed prostate cancer patients. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, interested congressional committees, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
This section describes the scope and methodology used to analyze our two objectives: (1) comparing trends in the number of and expenditures for Medicare prostate cancer–related intensity-modulated radiation therapy (IMRT) services provided by self-referring and non-self-referring groups from 2006 through 2010 and (2) examining how the percentage of Medicare prostate cancer patients referred for IMRT may differ on the basis of whether providers self-refer. To compare trends in the number of and expenditures for prostate cancer–related IMRT services provided in physician offices or hospital outpatient departments from 2006 through 2010, we analyzed IMRT claims from the Medicare Part B Carrier and hospital outpatient files. We identified IMRT services on the basis of Healthcare Common Procedure Coding System (HCPCS) codes associated with the delivery of IMRT—77418 and 0073T. We classified IMRT services as related to prostate cancer if the principal diagnosis code was 185 or 233.4—malignant neoplasm of the prostate or carcinoma in situ of prostate, respectively—or if one of these codes was billed on an IMRT claim and no other diagnosis code related to another cancer was billed on the same claim. To determine whether prostate cancer–related IMRT services from 2006 through 2010 were performed by self-referring or non-self-referring provider groups, we first limited our analysis to only those IMRT services in the Medicare Part B Carrier file. Because there is no indicator or "flag" on the claim that identifies whether services are self-referred or non-self-referred and the Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare, has no other method for identifying whether a service was self-referred, we developed a claims-based methodology for identifying provider group practices as self-referring or non-self-referring.
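The claim-selection rule described above can be sketched in code. This is an illustrative reading of the rule, not CMS's implementation: the record layout, field names, and the sample "other cancer" codes are assumptions, and a real implementation would check the full range of cancer diagnosis codes.

```python
# Sketch of the prostate cancer-related IMRT claim filter described above.
# Claim records and field names are hypothetical, not CMS file structures.

IMRT_HCPCS = {"77418", "0073T"}       # IMRT delivery codes from the report
PROSTATE_DX = {"185", "233.4"}        # malignant neoplasm / carcinoma in situ of prostate
OTHER_CANCER_DX = {"162.9", "174.9"}  # hypothetical examples of other cancer codes

def is_prostate_imrt(claim):
    """True if a claim line is a prostate cancer-related IMRT service."""
    if claim["hcpcs"] not in IMRT_HCPCS:
        return False
    if claim["principal_dx"] in PROSTATE_DX:
        return True
    # Fallback rule: a prostate code appears somewhere on the claim and
    # no diagnosis code related to another cancer is billed with it.
    dxs = set(claim["all_dx"])
    return bool(dxs & PROSTATE_DX) and not (dxs & OTHER_CANCER_DX)
```

The fallback branch mirrors the report's second condition: a prostate diagnosis billed on an IMRT claim counts only when no competing cancer diagnosis appears on the same claim.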
We classified groups, identified by taxpayer identification numbers (TIN)—an identification number used by the Internal Revenue Service—as self-referring in a given year if (1) we could identify a prostate biopsy for at least 50 percent of the prostate cancer–related IMRT episodes provided by the group, (2) at least 50 percent of these episodes were self-referred, and (3) the group had a minimum of 10 self-referred IMRT episodes. The remaining groups were considered non-self-referring. To ensure that our classification criteria were reliable, we tested alternative thresholds for defining self-referring groups and found that, regardless of specification, the rapid growth of services performed by self-referring groups persisted and that the growth was due to limited-specialty groups. A patient's episode of prostate cancer–related IMRT was considered self-referred if the provider who performed his prostate biopsy and the performing provider(s) on the IMRT claim(s) billed to the same TIN in the year(s) the IMRT services were performed, the year the biopsy was performed, or the year between, if applicable. To find prostate biopsies for beneficiaries, we searched through 2 years of their claims history to find the prostate biopsy nearest to, but not after, the date of their first IMRT service. If a beneficiary received multiple episodes of IMRT from 2006 through 2010, we searched back 2 years from the date of the first IMRT service for each episode. We further defined self-referring provider groups as either limited-specialty or multispecialty groups. We defined a group as limited specialty if more than 75 percent of its office visits in a given year were performed by urologists, nonphysician practitioners, or physicians whose specialty was related to the diagnosis or treatment of cancer, such as radiation oncologists. The remaining self-referring groups comprised providers from a large number of different specialties and were considered multispecialty groups.
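The group-classification thresholds just described can be sketched as a small function. This is a minimal illustration under stated assumptions: the episode records are hypothetical, and we read criterion (2)'s "these episodes" as applying to episodes with an identifiable biopsy.

```python
# Illustrative sketch of the TIN-year classification thresholds
# described above; episode records are hypothetical.

def classify_group(episodes, min_biopsy_share=0.50,
                   min_self_ref_share=0.50, min_self_ref_count=10):
    """Classify one group (TIN) in one year from its IMRT episodes."""
    n = len(episodes)
    with_biopsy = sum(e["biopsy_found"] for e in episodes)
    self_ref = sum(e["biopsy_found"] and e["biopsy_same_tin"] for e in episodes)
    if n == 0 or with_biopsy == 0:
        return "non-self-referring"
    if (with_biopsy / n >= min_biopsy_share              # criterion (1)
            and self_ref / with_biopsy >= min_self_ref_share  # criterion (2)
            and self_ref >= min_self_ref_count):         # criterion (3)
        return "self-referring"
    return "non-self-referring"

def specialty_type(visit_specialties, limited_set):
    """'limited' if >75% of a group's office visits are by the listed specialties."""
    share = sum(s in limited_set for s in visit_specialties) / len(visit_specialties)
    return "limited" if share > 0.75 else "multispecialty"
```

Testing alternative thresholds, as the report describes, amounts to re-running `classify_group` with different keyword arguments.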
To examine how the percentage of prostate cancer patients referred for IMRT may differ on the basis of whether providers self-refer, we first identified a list of Medicare beneficiaries who were newly diagnosed with prostate cancer in 2007 or 2009. We used a Medicare claims-derived date from the Chronic Condition Data Warehouse (CCDW), a CMS database, that indicates the first occurrence of prostate cancer as a proxy for the date on which a beneficiary was diagnosed with prostate cancer. We further narrowed the list of prostate cancer patients we studied to those who (1) were at least 66 years of age on their date of diagnosis, (2) were continuously enrolled in Medicare Parts A and B in the year of, before, and after they were diagnosed, and (3) received a prostate biopsy on the same day as or within 1 year prior to their diagnosis. We then analyzed prostate cancer–related claims from the Medicare Part B Carrier and hospital outpatient files to determine what types of treatments these beneficiaries received from their diagnosis date through 1 year after that date. We used the provider who performed a beneficiary’s prostate biopsy that was nearest to his date of diagnosis as a proxy for the provider who referred the beneficiary for treatment. We classified referring providers as self-referring if they were the performing provider on a claim that was paid to a self-referring provider group in the year of, before, or after a beneficiary’s prostate cancer diagnosis. All other providers were considered non-self-referring. Similarly, we classified providers as belonging to a limited-specialty group if they were the performing provider on a claim that was paid to a limited-specialty provider group in the year of, before, or after a beneficiary’s prostate cancer diagnosis. If a provider did not belong to a limited-specialty group, we considered the provider to belong to a multispecialty group. 
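The three cohort-inclusion criteria above can be expressed as a small filter. The record fields below are illustrative, not CCDW variable names; continuous enrollment is modeled as a set of enrolled years, and "within 1 year prior" is approximated as 365 days.

```python
# Minimal sketch of the cohort-inclusion rules described above.
from datetime import date, timedelta

def in_study_cohort(b):
    """Apply the three inclusion criteria to one beneficiary record."""
    dx = b["diagnosis_date"]
    old_enough = b["age_at_diagnosis"] >= 66
    # Continuous Parts A and B enrollment in the year of, before, and
    # after diagnosis (modeled here as a set of enrolled years).
    enrolled = {dx.year - 1, dx.year, dx.year + 1} <= b["years_enrolled_ab"]
    # Biopsy on the same day as, or within 1 year before, diagnosis.
    biopsy_ok = timedelta(0) <= dx - b["biopsy_date"] <= timedelta(days=365)
    return old_enough and enrolled and biopsy_ok
```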
To assess the possibility that beneficiary characteristics affected the types of treatments for which self-referring and non-self-referring providers referred their prostate cancer patients, we examined beneficiaries' (1) age at the time they were diagnosed with prostate cancer, (2) geographic location (i.e., urban or rural), and (3) health, including clinical characteristics of prostate cancers for a subset of beneficiaries who lived in New York. We determined a beneficiary's age at diagnosis using his date of birth and the date on which he was diagnosed with prostate cancer. We defined urban settings as metropolitan statistical areas, a geographic entity defined by the Office of Management and Budget as a core urban area of 50,000 or more population. We used rural-urban commuting area codes—a Census tract–based classification scheme that combines the standard Bureau of the Census Urbanized Area and Urban Cluster definitions with work-commuting information to characterize all of the nation's Census tracts regarding their rural and urban status—to identify beneficiaries as living in metropolitan statistical areas. We considered all other settings to be rural. We used CMS's risk score file to identify beneficiaries' average risk scores, which serve as a proxy for beneficiary health status. For a subset of beneficiaries who lived in New York, we obtained clinical information on the beneficiaries' prostate cancer—including information used to determine whether the localized cancer was low, intermediate, or high risk—from the New York State Cancer Registry. To establish whether a beneficiary's prostate cancer was low, intermediate, or high risk, we used his Gleason score, prostate-specific antigen (PSA) level, and tumor stage from the New York State Cancer Registry. The results of the New York analysis are not generalizable to the entire Medicare population.

We also determined whether the percentage of a provider's prostate cancer patients referred for IMRT changed after providers began to self-refer. Specifically, we identified a group of providers, which we called "switchers," that did not self-refer in 2006 or 2007 but began to self-refer in either 2008 or 2009. We then calculated the change in the percentage of switchers' patients referred for IMRT and other treatments among those diagnosed with prostate cancer in 2007 and 2009. We then compared the change among switchers to the change experienced by providers that did not change whether or not they self-referred IMRT services from 2007 to 2009. Specifically, we compared the change in the percentage of switchers' prostate cancer patients referred for IMRT to the corresponding change for (1) self-referring providers—providers that self-referred in 2007, 2008, and 2009 and either self-referred or did not bill Medicare in 2006 and 2010—and (2) non-self-referring providers—providers that did not self-refer in 2007, 2008, and 2009 and either did not self-refer or did not bill Medicare in 2006 and 2010. We took several steps to ensure that the data used to produce this report were sufficiently reliable. Specifically, we assessed the reliability of the CMS data we used by interviewing officials responsible for overseeing these data sources, including CMS and Medicare contractor officials. We also reviewed relevant documentation and examined the data for obvious errors, such as missing values and values outside of expected ranges. We determined that the data were sufficiently reliable for the purposes of our study. We conducted this performance audit from May 2010 through July 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Medicare prostate cancer–related intensity-modulated radiation therapy (IMRT) utilization varied substantially between settings (see fig. 4). From 2006 through 2010, utilization grew at an annual rate of 10 percent in physician offices, whereas there was almost no growth in the hospital outpatient department. Moreover, while the utilization of prostate cancer–related IMRT services in the hospital outpatient department was nearly the same in 2006 as it was in 2010, utilization in this setting actually peaked in 2007 and declined thereafter. Total prostate cancer–related IMRT expenditures grew from $589 million to $698 million over our study period, but growth rates varied by setting (see fig. 5). In contrast to the growth in utilization, expenditures increased faster for services performed in hospital outpatient departments than for those performed in physician offices—7 percent and 3 percent annual growth rates, respectively. This is because reimbursement rates for IMRT services have been increasing for services performed in hospital outpatient departments and declining for those performed in physician offices.

The higher percentage of patients that self-referring Medicare providers referred for intensity-modulated radiation therapy (IMRT) compared to non-self-referring providers was due to self-referring providers more often referring their patients for IMRT alone and for IMRT in conjunction with hormone therapy (see table 4). Including all combinations, self-referring and non-self-referring providers referred nearly equal percentages of their patients for a combination of treatments—27 percent and 26 percent, respectively.
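The annual growth rates quoted above are compound rates; assuming that convention, the arithmetic can be checked directly. The sketch below uses the report's total expenditure figures ($589 million in 2006 to $698 million in 2010); the function name is our own.

```python
# Compound annual growth rate between two values over a number of years.

def annual_growth_rate(start, end, years):
    """Compound annual growth rate, e.g. 0.10 for 10 percent per year."""
    return (end / start) ** (1 / years) - 1

# Total prostate cancer-related IMRT expenditures, 2006 -> 2010:
total_expenditure_rate = annual_growth_rate(589e6, 698e6, 4)  # roughly 4 percent per year
```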
While self-referring Medicare providers were more likely to refer their prostate cancer patients for intensity-modulated radiation therapy (IMRT) regardless of age, the type of treatment they were less likely to refer their patients for varied based on the age of the beneficiary (see table 5). For instance, among beneficiaries 80 years of age or older at the time they were diagnosed with prostate cancer, self-referring providers were less likely to refer their prostate cancer patients for hormone therapy only, active surveillance, and brachytherapy compared to non-self-referring providers. In contrast, among beneficiaries 66 to 69 years old, nearly the entire difference between self-referring and non-self-referring providers was due to self-referring providers referring a smaller percentage of their prostate cancer patients for a radical prostatectomy or brachytherapy. The increased percentage of Medicare patients referred by switchers for intensity-modulated radiation therapy (IMRT) was accompanied by a decrease in the percentage of patients referred for several other treatments, especially brachytherapy (see table 6). Some of the changes in the percentage of patients referred by switchers for a given treatment were consistent with the patterns for other types of providers—such as in the case of three-dimensional conformal radiation therapy (3D-CRT) / other external beam radiation therapy (EBRT)—while some of the other changes were not. In addition to the contact named above, Thomas Walke, Assistant Director; Manuel Buentello; Krister Friday; Gregory Giusto; Brian O’Donnell; Daniel Ries; and Jennifer Whitworth made key contributions to this report.
Questions have been raised about self-referral's role in Medicare Part B expenditures' rapid growth. Self-referral occurs when a provider refers patients to entities in which the provider or the provider's family members have a financial interest. Services that can be self-referred under certain circumstances include IMRT, a common and costly treatment for prostate cancer. GAO was asked to examine Medicare self-referral trends among radiation oncology services. This report examines (1) trends in the number of and expenditures for prostate cancer-related IMRT services provided by self-referring and non-self-referring provider groups from 2006 through 2010 and (2) how the percentage of prostate cancer patients referred for IMRT may differ on the basis of whether providers self-refer. GAO analyzed Medicare Part B claims and developed a claims-based methodology to identify self-referring groups and providers. GAO also interviewed officials from the Centers for Medicare & Medicaid Services (CMS), which administers Medicare, and other stakeholders. The number of Medicare prostate cancer-related intensity-modulated radiation therapy (IMRT) services performed by self-referring groups increased rapidly, while declining for non-self-referring groups from 2006 to 2010. Over this period, the number of prostate cancer-related IMRT services performed by self-referring groups increased from about 80,000 to 366,000. Consistent with that growth, expenditures associated with these services and the number of self-referring groups also increased. The growth in services performed by self-referring groups was due entirely to limited-specialty groups--groups comprised of urologists and a small number of other specialties--rather than multispecialty groups. Providers substantially increased the percentage of their prostate cancer patients they referred for IMRT after they began to self-refer. 
Providers that began self-referring in 2008 or 2009--referred to as switchers--referred 54 percent of their patients who were diagnosed with prostate cancer in 2009 for IMRT, compared to 37 percent of their patients diagnosed in 2007. In contrast, providers who did not begin to self-refer--that is, non-self-referrers and providers who self-referred the entire period--experienced much smaller changes over the same period. Among all providers who referred a Medicare beneficiary diagnosed with prostate cancer in 2009, those that self-referred were 53 percent more likely to refer their patients for IMRT and less likely to refer them for other treatments, especially a radical prostatectomy or brachytherapy. Compared to IMRT, those treatments are less costly and often considered equally appropriate but have different risks and side effects. Factors such as age, geographic location, and patient health did not explain the large differences between self-referring and non-self-referring providers. These analyses suggest that financial incentives for self-referring providers--specifically those in limited specialty groups--were likely a major factor driving the increase in the percentage of prostate cancer patients referred for IMRT. Medicare providers are generally not required to disclose that they self-refer IMRT services, and the Department of Health and Human Services (HHS) lacks the authority to establish such a requirement. Thus, beneficiaries may not be aware that their provider has a financial interest in recommending IMRT over alternative treatments that may be equally effective, have different risks and side effects, and are less expensive for Medicare and beneficiaries. Congress should consider directing the Secretary of Health and Human Services, whose agency oversees CMS, to require providers to disclose their financial interests in IMRT to their patients. GAO also recommends that CMS identify and monitor self-referral of IMRT services. 
HHS disagreed with GAO's recommendation. Given the magnitude of GAO's findings, GAO maintains CMS should identify and monitor self-referral of IMRT services.
Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intentions who can intrude and use their access to obtain and manipulate sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Concerns about the risks to federal systems are well-founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. Recognizing the importance of securing federal systems and data, Congress passed FISMA in 2002. The act sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. FISMA's framework creates a cycle of risk management activities necessary for an effective security program; these activities are similar to the principles noted in our study of the risk management activities of leading private-sector organizations—assessing risk, establishing a central management focal point, implementing appropriate policies and procedures, promoting awareness, and monitoring and evaluating policy and control effectiveness. In order to ensure the implementation of this framework, the act assigns specific responsibilities to agency heads, chief information officers, inspectors general, and NIST. It also assigns responsibilities to OMB that include developing and overseeing the implementation of policies, principles, standards, and guidelines on information security, and reviewing agency information security programs, at least annually, and approving or disapproving them.
FISMA requires each agency, including agencies with national security systems, to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Specifically, FISMA requires information security programs to include, among other things: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.
In addition, agencies must produce an annually updated inventory of major information systems (including major national security systems) operated by the agency or under its control, which includes an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. FISMA also requires each agency to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. In addition, agency heads are required to report annually the results of their independent evaluations to OMB, except to the extent that an evaluation pertains to a national security system; then only a summary and assessment of that portion of the evaluation needs to be reported to OMB. Under FISMA, NIST is tasked with developing, for systems other than national security systems, standards and guidelines that must include, at a minimum (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security, according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. NIST must also develop a definition of and guidelines for detection and handling of information security incidents as well as guidelines developed in conjunction with the Department of Defense and the National Security Agency for identifying an information system as a national security system. 
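The categorization standards that resulted from this tasking are NIST's FIPS 199 and its companion FIPS 200, under which a system's overall impact level is the "high-water mark" across the confidentiality, integrity, and availability objectives. The sketch below is an illustration of that rule and is not drawn from the report.

```python
# Hypothetical illustration of the FIPS 199/200 "high-water mark" rule:
# a system's overall impact level is the highest level assigned to
# confidentiality, integrity, or availability.

LEVELS = {"low": 0, "moderate": 1, "high": 2}

def security_category(confidentiality, integrity, availability):
    """Overall impact level as the high-water mark of the three objectives."""
    return max((confidentiality, integrity, availability),
               key=LEVELS.__getitem__)
```

For example, a system rated moderate for confidentiality and low for the other two objectives would be categorized as a moderate-impact system.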
The law also assigns other information security functions to NIST, including: providing technical assistance to agencies on elements such as compliance with the standards and guidelines and the detection and handling of information security incidents; evaluating private-sector information security policies and practices and commercially available information technologies to assess potential application by agencies; evaluating security policies and practices developed for national security systems to assess their potential application by agencies; and conducting research, as needed, to determine the nature and extent of information security vulnerabilities and techniques for providing cost-effective information security. As required by FISMA, NIST has prepared its annual public report on activities undertaken in the previous year and planned for the coming year. In addition, NIST's FISMA initiative supports the development of a program for credentialing public and private sector organizations to provide security assessment services for federal agencies. Under FISMA, the inspector general for each agency shall perform an independent annual evaluation of the agency's information security program and practices. The evaluation should include testing of the effectiveness of information security policies, procedures, and practices of a representative subset of agency systems. In addition, the evaluation must include an assessment of the compliance with the act and any related information security policies, procedures, standards, and guidelines. For agencies without an inspector general, evaluations of non-national security systems must be performed by an independent external auditor. Evaluations related to national security systems are to be performed by an entity designated by the agency head.
FISMA states that the Director of OMB shall oversee agency information security policies and practices, including: developing and overseeing the implementation of policies, principles, standards, and guidelines on information security; requiring agencies to identify and provide information security protections commensurate with risk and magnitude of the harm resulting from the unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of an agency, or information systems used or operated by an agency, or by a contractor of an agency, or other organization on behalf of an agency; overseeing agency compliance with FISMA to enforce accountability; and reviewing at least annually, and approving or disapproving, agency information security programs. In addition, the act requires that OMB report to Congress no later than March 1 of each year on agency compliance with FISMA. Significant weaknesses in information security policies and practices threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of most federal agencies. These persistent weaknesses expose sensitive data to significant risk, as illustrated by recent incidents at various agencies. Further, our work and reviews by inspectors general note significant information security control deficiencies that place a broad array of federal operations and assets at risk. Consequently, we have made hundreds of recommendations to agencies to address these security control deficiencies. Since our report in July 2007, federal agencies have reported a spate of security incidents that have put sensitive data at risk, thereby exposing the personal information of millions of Americans to the loss of privacy and potential harm associated with identity theft. 
Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. The following examples, reported in 2008 and 2009, illustrate that a broad array of federal information and assets remains at risk. In May 2009, the Department of Transportation Inspector General issued the results of an audit of Web applications security and intrusion detection in air traffic control systems at the Federal Aviation Administration (FAA). The inspector general reported that Web applications used in supporting air traffic control systems operations were not properly secured to prevent attacks or unauthorized access. To illustrate, vulnerabilities found in Web application computers associated with the Traffic Flow Management Infrastructure System, Juneau Aviation Weather System, and the Albuquerque Air Traffic Control Tower allowed audit staff to gain unauthorized access to data stored on these computers, including program source code and sensitive personally identifiable information. In addition, the inspector general reported that it found a vulnerability on FAA Web applications that could allow attackers to execute malicious code on FAA users’ computers, similar to an actual incident that occurred in August 2008. In February 2009, the FAA notified employees that an agency computer had been illegally accessed and employee personal identity information had been stolen electronically. Two of the 48 files on the breached computer server contained personal information about more than 45,000 FAA employees and retirees who were on the FAA payrolls as of the first week of February 2006. Law enforcement agencies were notified and are investigating the data theft. In March 2009, U.S. Congressman Jason Altmire and U.S. 
Senator Bob Casey announced that they had sent a letter to the Under Secretary of Defense for Acquisition, Technology, and Logistics, asking for additional information on a recent security breach of the presidential helicopter, Marine One. According to the announcement, in February 2009, a company based in Cranberry, Pennsylvania, discovered that engineering and communications documents containing key details about the Marine One fleet had been downloaded to an Internet Protocol (IP) address in Iran. The documents were traced back to a defense contractor in Maryland, where an employee most likely downloaded a file-sharing program that inadvertently allowed others to access this information. According to information from the Congressman’s Web site, recent reports have said that the federal government was warned last June that an Internet Web site with an IP address traced to Iran was actively seeking this information. In March 2009, the United States Computer Emergency Readiness Team (US-CERT) issued an updated notice to warn agencies and organizations of Conficker/Downadup worm activity and to help prevent further compromises from occurring. In the notice, US-CERT warned that the Conficker/Downadup worm could infect a Microsoft Windows system from a thumb drive, a network share, or directly across a network if the host is not patched. According to a March 2009 media release from Senator Bill Nelson’s office, cyber-invaders thought to be in China hacked into the computer network in Senator Nelson’s office. There were two attacks on the same day in March 2009, and another one in February 2009 that targeted work stations used by three of Senator Nelson’s staffers. The hackers were not able to take any classified information because that information is not kept on office computers, a spokesman said. The media release stated that similar incursions into computer networks in Congress were up significantly in the past few months. 
The Department of Energy’s Office of Health, Safety, and Security announced that a password-protected compact disk (CD) had been lost during a routine shipment on January 28, 2009. The CD contained personally identifiable information for 59,617 individuals who currently work or formerly worked at facilities at the Department of Energy’s Idaho site. The investigation verified that protection measures had been applied in accordance with requirements applicable to organizations working under cooperative agreements and surmised that while the CD had been lost for 8 weeks at the time of the investigation, no evidence had been found that the personal information on the lost disk had been compromised. The investigation concluded that OMB and Department of Energy requirements for managing and reporting the loss of the information had not been transmitted to the appropriate organizations and that there was a failure to provide timely notifications of the actual or suspected loss of information in this incident. In January 2009, the Program Director of the Office of Personnel Management’s USAJOBS Web site announced that its technology provider’s (Monster.com) database had been illegally accessed and contact and account data had been taken, including user IDs and passwords, e-mail addresses, names, phone numbers, and some basic demographic data. The director pointed out that the e-mail addresses could be used for phishing activity and advised users to change their site login password. In December 2008, the Federal Emergency Management Agency was alerted to an unauthorized breach of private information when an applicant notified it that his personal information pertaining to Hurricane Katrina had been posted on the Internet. 
The information posted to Web sites contained a spreadsheet with 16,857 lines of data that included applicant names, social security numbers, addresses, telephone numbers, e-mail addresses, and other information on disaster applicants who had evacuated to Texas. According to the Federal Emergency Management Agency, it worked with the Web site hosting the private information to have that information removed from public view. Additionally, the agency reported that it worked to remove the same information from a second Web site. Further, the agency stated that while it believed most of the applicant information posted on the Web sites was properly released by it to a state agency, it did not authorize the subsequent public posting of much of this data. In June 2008, the Walter Reed Army Medical Center reported that officials were investigating the possible disclosure of personally identifiable information through unauthorized sharing of a data file containing the names of approximately 1,000 Military Health System beneficiaries. Walter Reed officials were notified of the possible exposure on May 21 by an outside company. Preliminary results of an ongoing investigation identified a computer from which the data had apparently been compromised. Data security personnel from Walter Reed and the Department of the Army believe it is possible that individuals named in the file could become victims of identity theft. The compromised data file did not include protected health information such as medical records, diagnoses, or prognoses for patients. In March 2008, media reports surfaced noting that the passport files of three U.S. senators, who were also presidential candidates, had been improperly accessed by Department of State employees and contractor staff. As of April 2008, the system contained records on about 192 million passports for about 127 million passport holders. 
These records included personally identifiable information, such as the applicant’s name, gender, social security number, date and place of birth, and passport number. In July 2008, after investigating this incident, the Department of State’s Office of Inspector General reported many control weaknesses—including a general lack of policies, procedures, guidance, and training—relating to the prevention and detection of unauthorized access to passport and applicant information and the subsequent response and disciplinary processes when a potential unauthorized access is substantiated. When incidents occur, agencies are to notify the federal information security incident center—US-CERT. As shown in figure 1, the number of incidents reported by federal agencies to US-CERT has risen dramatically over the past 3 years, increasing from 5,503 incidents reported in fiscal year 2006 to 16,843 incidents in fiscal year 2008 (an increase of slightly more than 200 percent). Agencies report the following types of incidents based on US-CERT-defined categories: Unauthorized access: Gaining logical or physical access without permission to a federal agency’s network, system, application, data, or other resource. Denial of service: Preventing or impairing the normal authorized functionality of networks, systems, or applications by exhausting resources. This activity includes being the victim of or participating in a denial of service attack. Malicious code: Installing malicious software (e.g., virus, worm, Trojan horse, or other code-based malicious entity) that infects an operating system or application. Agencies are not required to report malicious logic that has been successfully quarantined by antivirus software. Improper usage: Violating acceptable computing use policies. Scans/probes/attempted access: Accessing or identifying a federal agency computer, open ports, protocols, service, or any combination of these for later exploit. 
This activity does not directly result in a compromise or denial of service. Under investigation: Investigating unconfirmed incidents that are potentially malicious, or anomalous activity deemed by the reporting entity to warrant further review. As noted in figure 2, the three most prevalent types of incidents reported to US-CERT during fiscal years 2006 through 2008 were unauthorized access, improper usage, and under investigation. Reviews at federal agencies continue to highlight deficiencies in their implementation of security policies and procedures. In their fiscal year 2008 performance and accountability reports, 20 of the 24 agencies indicated that inadequate information security controls were either a material weakness or a significant deficiency (see fig. 3). Similarly, in annual reports required under 31 U.S.C. § 3512 (commonly referred to as the Federal Managers’ Financial Integrity Act of 1982), 11 of 24 agencies identified material weaknesses in information security. Inspectors general have also noted weaknesses in information security, with 22 of 24 identifying it as a “major management challenge” for their agency. Similarly, our audits have identified control deficiencies in both financial and nonfinancial systems, including vulnerabilities in critical federal systems. For example: In 2009, we reported that security weaknesses at the Securities and Exchange Commission continued to jeopardize the confidentiality, integrity, and availability of the commission’s financial and sensitive information and information systems. Although the commission had made progress in correcting previously reported information security control weaknesses, it had not completed action to correct 16 weaknesses. In addition, we identified 23 new weaknesses in controls intended to restrict access to data and systems. Thus, the commission had not fully implemented effective controls to prevent, limit, or detect unauthorized access to computing resources. 
For example, it had not always (1) consistently enforced strong controls for identifying and authenticating users, (2) sufficiently restricted user access to systems, (3) encrypted network services, (4) audited and monitored security-relevant events for its databases, and (5) physically protected its computer resources. The Securities and Exchange Commission also had not consistently ensured appropriate segregation of incompatible duties or adequately managed the configuration of its financial information systems. As a result, the Securities and Exchange Commission was at increased risk of unauthorized access to and disclosure, modification, or destruction of its financial information, as well as inadvertent or deliberate disruption of its financial systems, operations, and services. The Securities and Exchange Commission agreed with our recommendations and stated that it plans to address the identified weaknesses. In 2009, we reported that the Internal Revenue Service had made progress toward correcting prior information security weaknesses, but continued to have weaknesses that could jeopardize the confidentiality, integrity, and availability of financial and sensitive taxpayer information. These deficiencies included some related to controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities, as well as a control important in mitigating software vulnerability risks. For example, the agency continued to, among other things, allow sensitive information, including IDs and passwords for mission-critical applications, to be readily available to any user on its internal network and to grant excessive access to individuals who do not need it. In addition, the Internal Revenue Service had systems running unsupported software that could not be patched against known vulnerabilities. 
Until those weaknesses are corrected, the Internal Revenue Service remains vulnerable to insider threats and is at increased risk of unauthorized access to and disclosure, modification, or destruction of financial and taxpayer information, as well as inadvertent or deliberate disruption of system operations and services. The IRS agreed to develop a plan addressing each of our recommendations. In 2008, we reported that although the Los Alamos National Laboratory—one of the nation’s weapons laboratories—implemented measures to enhance the information security of its unclassified network, vulnerabilities continued to exist in several critical areas, including (1) identifying and authenticating users of the network, (2) encrypting sensitive information, (3) monitoring and auditing compliance with security policies, (4) controlling and documenting changes to a computer system’s hardware and software, and (5) restricting physical access to computing resources. As a result, sensitive information on the network—including unclassified controlled nuclear information, naval nuclear propulsion information, export control information, and personally identifiable information—was exposed to an unnecessary risk of compromise. Moreover, the risk was heightened because about 300 (or 44 percent) of 688 foreign nationals who had access to the unclassified network as of May 2008 were from countries classified as sensitive by the Department of Energy, such as China, India, and Russia. While the laboratory did not specifically comment on our recommendations, it agreed with the conclusions. In 2008, we reported that the Tennessee Valley Authority had not fully implemented appropriate security practices to secure the control systems used to operate its critical infrastructures at facilities we reviewed. 
Multiple weaknesses within the Tennessee Valley Authority corporate network left it vulnerable to potential compromise of the confidentiality, integrity, and availability of network devices and the information transmitted by the network. For example, almost all of the workstations and servers that we examined on the corporate network lacked key security patches or had inadequate security settings. Furthermore, Tennessee Valley Authority had not adequately secured its control system networks and devices on these networks, leaving the control systems vulnerable to disruption by unauthorized individuals. In addition, we reported that the network interconnections provided opportunities for weaknesses on one network to potentially affect systems on other networks. Specifically, weaknesses in the separation of network segments could allow an individual who had gained access to a computing device connected to a less secure portion of the network to be able to compromise systems in a more secure portion of the network, such as the control systems. As a result, Tennessee Valley Authority’s control systems were at increased risk of unauthorized modification or disruption by both internal and external threats and could affect its ability to properly generate and deliver electricity. The Tennessee Valley Authority agreed with our recommendations and provided information on steps it was taking to implement them. In 2007, we reported that the Department of Homeland Security had significant weaknesses in computer security controls surrounding the information systems used to support its U.S. Visitor and Immigrant Status Technology (US-VISIT) program for border security. For example, it had not implemented controls to effectively prevent, limit, and detect access to computer networks, systems, and information. 
Specifically, it had not (1) adequately identified and authenticated users in systems supporting US-VISIT; (2) sufficiently limited access to US-VISIT information and information systems; (3) ensured that controls adequately protected external and internal network boundaries; (4) effectively implemented physical security at several locations; (5) consistently encrypted sensitive data traversing the communication network; and (6) provided adequate logging or user accountability for the mainframe, workstations, or servers. In addition, it had not always ensured that responsibilities for systems development and system production had been sufficiently segregated and had not consistently maintained secure configurations on the application servers and workstations at a key data center and ports of entry. As a result, increased risk existed that unauthorized individuals could read, copy, delete, add, and modify sensitive information—including personally identifiable information—and disrupt service on Customs and Border Protection systems supporting the US-VISIT program. The department stated that it directed Customs and Border Protection to complete remediation activities to address each of our recommendations. 
According to our reports and those of agency inspectors general, persistent weaknesses appear in the five major categories of information system controls: (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) configuration management controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide information security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. Most agencies continue to have weaknesses in each of these categories, as shown in figure 4. Agencies use access controls to limit, prevent, or detect inappropriate access to computer resources (data, equipment, and facilities), thereby protecting them from unauthorized use, modification, disclosure, and loss. Such controls include both electronic and physical controls. Electronic access controls include those related to boundary protection, user identification and authentication, authorization, cryptography, and auditing and monitoring. Physical access controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which they are housed and enforcing usage restrictions and implementation guidance for portable and mobile devices. At least 23 major federal agencies had access control weaknesses during fiscal year 2008. An analysis of our reports reveals that 48 percent of information security control weaknesses pertained to access controls (see fig. 5). 
For example, agencies did not consistently (1) establish sufficient boundary protection mechanisms; (2) identify and authenticate users to prevent unauthorized access; (3) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (4) apply encryption to protect sensitive data on networks and portable devices; (5) log, audit, and monitor security-relevant events; and (6) establish effective controls to restrict physical access to information assets. Without adequate access controls in place, agencies cannot ensure that their information resources are protected from intentional or unintentional harm. Boundary protection controls logical connectivity into and out of networks and controls connectivity to and from network-connected devices. Agencies segregate the parts of their networks that are publicly accessible by placing these components in subnetworks with separate physical interfaces and preventing public access to their internal networks. Unnecessary connectivity to an agency’s network increases not only the number of access paths that must be managed and the complexity of the task, but also the risk of unauthorized access in a shared environment. Deploying a series of diverse security technologies at multiple layers helps to mitigate the risk of successful cyber attacks. For example, multiple firewalls can be deployed to prevent both outsiders and trusted insiders from gaining unauthorized access to systems, and intrusion detection technologies can be deployed to defend against attacks from the Internet. Agencies continue to demonstrate vulnerabilities in establishing appropriate boundary protections. For example, two agencies that we assessed did not adequately secure channels to connect remote users, increasing the risk that attackers will use these channels to gain access to restricted network resources. 
One of these agencies also did not have adequate intrusion detection capabilities, while the other allowed users of one network to connect to another, higher-security network. Such weaknesses in boundary protections impair an agency’s ability to deflect and detect attacks quickly and protect sensitive information and networks. A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system also must establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. Agencies did not always adequately control user accounts and passwords to ensure that only valid users could access systems and information. In our 2007 FISMA report, we noted several weaknesses in agencies’ identification and authentication procedures. Agencies continued to experience similar weaknesses in fiscal years 2008 and 2009. For example, certain agencies did not adequately enforce strong password settings, increasing the likelihood that accounts could be compromised and used by unauthorized individuals to gain access to sensitive information. In other instances, agencies did not enforce periodic changing of passwords or use of one-time passwords or passcodes, and transmitted or stored passwords in clear text. Poor password management increases the risk that unauthorized users could guess or read valid passwords to devices and use the compromised devices for an indefinite period of time. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. 
A key component of granting or denying access rights is the concept of least privilege, a basic principle for securing computer resources and information under which users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users’ access to only those programs and files that they need to do their work, agencies establish access rights and permissions. “User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. To avoid unintentionally authorizing users access to sensitive files and directories, an agency must give careful consideration to its assignment of rights and permissions. Agencies continued to grant rights and permissions that allowed more access than users needed to perform their jobs. Inspectors general at 12 agencies reported instances where users had been granted excessive privileges. In our reviews, we also noted vulnerabilities in this area. For example, at one agency, users could inappropriately escalate their access privileges to run commands on a powerful system account, many had unnecessary and inappropriate access to databases, and other accounts allowed excessive privileges and permissions. Another agency allowed generic, shared accounts on financial applications that included the ability to create, delete, and modify users’ accounts. Approximately 1,100 users at yet another agency had access to mainframe system management utilities, although such access was not necessarily required to perform their jobs. These utilities provided access to all files stored on disk; all programs running on the system, including the outputs; and the ability to alter hardware configurations supporting the production environment. 
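The deny-by-default authorization that underlies least privilege can be sketched in a few lines of code: a user may perform an action only if one of the user’s assigned roles explicitly grants it. The role and permission names below are hypothetical, chosen only to illustrate the principle.

```python
# Minimal sketch of a least-privilege check. Access is denied by default;
# an action is allowed only if a role explicitly grants it.
# Role and permission names are hypothetical, for illustration only.
ROLE_PERMISSIONS = {
    "data_entry": {"create_record", "edit_record"},
    "approver": {"approve_record"},
    "auditor": {"read_record"},
}

def is_authorized(user_roles, action):
    """Return True only if some role held by the user grants the action."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

# A data-entry clerk can create records but cannot approve them.
print(is_authorized(["data_entry"], "create_record"))   # True
print(is_authorized(["data_entry"], "approve_record"))  # False
```

The excessive-privilege findings described above correspond to granting users roles (or direct permissions) beyond what such a lookup would require for their duties.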
We uncovered one agency that had provided a contractor with system access that was beyond what was needed, making the agency vulnerable to incidents on the contractor’s network. Another agency gave all users of an application full access to the application’s source code although their responsibilities did not require this level of privilege. Such weaknesses in authorization place agencies at increased risk of inappropriate access to data and sensitive system programs, as well as to the consequent disruption of services. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. The National Security Agency recommends disabling protocols that do not encrypt information transmitted across the network, such as user identification and password combinations. Agencies did not always encrypt sensitive information on their systems or traversing the network. In our reviews of agencies’ information security, we found that agencies did not always encrypt sensitive information. For example, five agencies that we reviewed did not effectively use cryptographic controls to protect sensitive resources. Specifically, one agency allowed unencrypted protocols to be used on its network devices. Another agency did not require encrypted passwords for network logins, while another did not consistently provide approved, secure transmission of data over its network. These weaknesses could allow an attacker, or malicious user, to view information and use that knowledge to obtain sensitive financial and system data being transmitted over the network. 
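One common safeguard against the clear-text password weaknesses noted above is to store only a salted, iterated hash of each password rather than the password itself. The following sketch uses the Python standard library; the iteration count and salt length are illustrative values, not a vetted agency policy.

```python
import hashlib
import hmac
import secrets

# Illustrative parameters only; an agency would set these by policy.
ITERATIONS = 200_000

def hash_password(password):
    """Derive a salted, iterated hash; only the salt and digest are stored."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash from the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))    # False
```

Because only the salt and digest are retained, an attacker who reads the stored credentials cannot recover the original passwords directly, unlike the clear-text storage and transmission practices described above.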
To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Agencies accomplish this by implementing system or security software that provides an audit trail, or logs of system activity, that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which agencies configure system or security software determines the nature and extent of the information that can be provided by the audit trail. To be effective, agencies should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. Agencies did not sufficiently log and monitor key security- and audit- related events on their network. For example, agencies did not monitor critical portions of their networks for intrusions; record successful, unauthorized access attempts; log certain changes to data on a mainframe (which increases the risk of compromised security controls or disrupted operations); and capture all authentication methods and logins to a network by foreign nationals. Similarly, 14 agencies did not always have adequate auditing and monitoring capabilities. For example, one agency did not conduct a baseline assessment of an important network. This baseline determines a typical state or pattern of network activity. Without this information, the agency could have difficulty detecting and investigating anomalous activity to ascertain whether or not an attack was under way. Another agency did not perform source code scanning or have a process for manual source code reviews, which increases the risk that vulnerabilities would not be detected. As a result, unauthorized access could go undetected, and if a system is modified or disrupted, the ability to trace or recreate events could be impeded. 
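An application-level audit trail of the kind described above can be sketched with the Python standard library: every security-relevant event is recorded with a timestamp, the acting user, and the outcome, so that activity can later be traced to a specific individual. The event and field names are hypothetical.

```python
import logging

# Sketch of an audit trail: each security-relevant event is logged with
# a timestamp, the acting user, the action, and whether it succeeded.
# Event and field names are hypothetical, for illustration only.
audit_log = logging.getLogger("audit")
handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)

def record_event(user, action, success):
    """Write one audit record; failures are logged at a higher severity."""
    level = logging.INFO if success else logging.WARNING
    audit_log.log(level, "user=%s action=%s success=%s", user, action, success)

record_event("jdoe", "login", True)
record_event("jdoe", "read_payroll_file", False)  # failed attempts are logged too
```

Recording failed attempts at a higher severity, as here, is what makes it possible to monitor for the successful unauthorized access attempts that agencies failed to capture.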
Physical security controls help protect computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to sensitive computing and communications resources, usually by limiting access to the buildings and rooms in which the resources are housed. Examples of physical security controls include perimeter fencing, surveillance cameras, security guards, locks, and procedures for granting or denying individuals physical access to computing resources. Physical controls also include environmental controls such as smoke detectors, fire alarms, extinguishers, and uninterruptible power supplies. Considerations for perimeter security also include controlling vehicular and pedestrian traffic. In addition, visitors’ access to sensitive areas must be managed appropriately. Our analysis of inspector general, GAO, and agency reports has shown that nine agencies did not sufficiently restrict physical access to sensitive computing and communication resources. The physical security measures employed by these agencies often did not comply with their own requirements or with federal standards. Access to facilities containing sensitive equipment and information was not always adequately restricted. For example, at one agency with buildings housing classified networks, cars were not stopped and inspected; a sign indicated the building’s purpose; fencing was scalable; and access to buildings containing computer network equipment was not controlled by electronic or other means. Agencies also did not adequately manage visitors: in one instance, network jacks were placed in an area where unescorted individuals could use them to obtain electronic access to restricted computing resources, and in another, visitors at a facility containing sensitive equipment were not properly identified and controlled. 
Agencies did not always remove employees’ physical access authorizations to sensitive areas in a timely manner when they departed or their work no longer required such access. Environmental controls at one agency did not meet federal guidelines, with fire suppression capabilities, emergency lighting, and backup power all needing improvements. Such weaknesses in physical access controls increase the risk that sensitive computing resources will inadvertently or deliberately be misused, damaged, or destroyed. Configuration management controls ensure that only authorized and fully tested software is placed in operation. These controls, which also limit and monitor access to powerful programs and sensitive files associated with computer operations, are important in providing reasonable assurance that access controls are not compromised and that the system will not be impaired. These policies, procedures, and techniques help ensure that all programs and program modifications are properly authorized, tested, and approved. Further, patch management is an important element in mitigating the risks associated with software vulnerabilities. Up-to-date patch installation could help mitigate vulnerabilities associated with flaws in software code that could be exploited to cause significant damage— including the loss of control of entire systems—thereby enabling malicious individuals to read, modify, or delete sensitive information or disrupt operations. Twenty-one agencies demonstrated weaknesses in configuration management controls. For instance, several agencies did not implement common secure configuration policies across their systems, increasing the risk of avoidable security vulnerabilities. 
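In practice, the patch-management element described above often reduces to comparing installed software versions against a patched baseline and flagging anything that falls below it. The package names and version numbers below are illustrative, not drawn from real advisories, and the sketch assumes simple dotted-integer version strings:

```python
def find_unpatched(installed, baseline):
    """Report packages whose installed version is below the patched baseline.

    Versions are compared as tuples of integers, e.g. "2.4.10" -> (2, 4, 10).
    """
    def parse(version):
        return tuple(int(part) for part in version.split("."))
    return sorted(
        name for name, version in installed.items()
        if name in baseline and parse(version) < parse(baseline[name])
    )

# Illustrative inventory and baseline; not real advisory data.
installed = {"httpd": "2.2.3", "openssl": "0.9.8", "bind": "9.4.2"}
baseline  = {"httpd": "2.2.9", "openssl": "0.9.8", "bind": "9.5.0"}
print(find_unpatched(installed, baseline))  # ['bind', 'httpd']
```

A check like this is only one half of the control the report describes: the flagged patches must also be tested before deployment, since the report notes that some agencies applied patches without fully testing them.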
In addition, agencies did not effectively ensure that system software changes had been properly authorized, documented, and tested, which increases the risk that unapproved changes could occur without detection and that such changes could disrupt a system’s operations or compromise its integrity. Agencies did not always monitor system configurations to prevent extraneous services and other vulnerabilities from remaining undetected and jeopardizing operations. At least six agencies did not consistently update software on a timely basis to protect against known vulnerabilities or did not fully test patches before applying them. Without a consistent approach to updating, patching, and testing software, agencies are at increased risk of exposing critical and sensitive data to unauthorized and possibly undetected access. Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby conduct unauthorized actions or gain unauthorized access to assets or records. Proper segregation of duties is achieved by dividing responsibilities among two or more individuals or groups. Dividing duties among individuals or groups diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. At least 14 agencies did not appropriately segregate information technology duties. These agencies generally did not assign employee duties and responsibilities in a manner that segregated incompatible functions among individuals or groups of individuals. For instance, at one agency, an individual who entered an applicant’s data into a financial system also had the ability to hire the applicant. At another agency, 76 system users had the ability to create and approve purchase orders.
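The purchase-order example above suggests a simple automated check: flag any user who holds two roles the agency has declared incompatible. The role names below mirror the report’s examples, but the conflict list and user data are hypothetical:

```python
# Incompatible duty pairs: no single user should hold both roles in a pair.
# These pairs are illustrative, modeled on the report's examples.
CONFLICTS = [
    ("enter_applicant_data", "hire_applicant"),
    ("create_purchase_order", "approve_purchase_order"),
]

def sod_violations(user_roles):
    """Return (user, role_a, role_b) for each incompatible role pair a user holds."""
    hits = []
    for user, roles in sorted(user_roles.items()):
        for role_a, role_b in CONFLICTS:
            if role_a in roles and role_b in roles:
                hits.append((user, role_a, role_b))
    return hits

users = {
    "jdoe": {"create_purchase_order", "approve_purchase_order"},
    "asmith": {"enter_applicant_data"},
}
print(sod_violations(users))
# [('jdoe', 'create_purchase_order', 'approve_purchase_order')]
```

A periodic review of access-control lists against such a conflict matrix is one way to detect the 76-user condition the report cites before erroneous or fraudulent transactions occur.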
Without adequate segregation of duties, there is an increased risk that erroneous or fraudulent actions can occur, improper program changes can be implemented, and computer resources can be damaged or destroyed. An agency must take steps to ensure that it is adequately prepared to cope with the loss of operational capabilities due to an act of nature, fire, accident, sabotage, or any other disruption. An essential element in preparing for such a catastrophe is an up-to-date, detailed, and fully tested continuity of operations plan. Such a plan should cover all key computer operations and should include planning to ensure that critical information systems, operations, and data such as financial processing and related records can be properly restored if an emergency or a disaster occurs. To ensure that the plan is complete and fully understood by all key staff, it should be tested— including unannounced tests—and test plans and results documented to provide a basis for improvement. If continuity of operations controls are inadequate, even relatively minor interruptions could result in lost or incorrectly processed data, which could cause financial losses, expensive recovery efforts, and inaccurate or incomplete mission-critical information. Although agencies have reported increases in the number of systems for which contingency plans have been tested, at least 17 agencies had shortcomings in their continuity of operations plans. For example, one agency’s disaster recovery planning had not been completed. Specifically, disaster recovery plans for three components of the agency were in draft form and had not been tested. Another agency did not include a business impact analysis in the contingency plan control, which would assist in planning for system recovery. In another example, supporting documentation for some of the functional tests at the agency did not adequately support testing results for verifying readability of backup tapes retrieved during the tests. 
Until agencies complete actions to address these weaknesses, they are at risk of not being able to appropriately recover systems in a timely manner from certain service disruptions. An underlying cause for the information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented agencywide information security programs. An agencywide security program, as required by FISMA, provides a framework and continuing cycle of activity for assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources. Twenty-three agencies had not fully or effectively implemented agencywide information security programs. Agencies often did not adequately design or effectively implement policies for elements key to an information security program. Weaknesses in agency information security program activities, such as risk assessments, information security policies and procedures, security planning, security training, system testing and evaluation, and remedial action plans, are described next. In order for agencies to determine what security controls are needed to protect their information resources, they must first identify and assess their information security risks. Moreover, by increasing awareness of risks, these assessments can generate support for policies and controls. Agencies had not fully implemented their risk assessment processes, and 14 major agencies had weaknesses in their risk assessments.
Furthermore, they did not always properly assess the impact level of their systems or evaluate potential risks for the systems we reviewed. For example, one agency had not yet finalized and approved its guidance for completing risk assessments. In another example, an agency had not properly categorized the risk to its system because it had performed a risk assessment without an inventory of interconnections to other systems. Similarly, another agency had not completed risk assessments for its critical systems and had not assigned impact levels. In another instance, an agency had current risk assessments that documented the residual risk, identified potential threats, and recommended corrective actions for reducing or eliminating the vulnerabilities it had identified. However, that agency had not identified many of the vulnerabilities we found and thus had not assessed the risks associated with them. As a result of these weaknesses, agencies may be implementing inadequate or inappropriate security controls that do not address their systems’ true risk, and potential risks to these systems may not be known. According to FISMA, each federal agency’s information security program must include policies and procedures that are based on risk assessments that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each agency’s information system. The term ‘security policy’ refers to specific security rules set up by the senior management of an agency to create a computer security program, establish its goals, and assign responsibilities. Because policy is written at a broad level, agencies also develop standards, guidelines, and procedures that offer managers, users, and others a clear approach to implementing policy and meeting organizational goals. Thirteen agencies had weaknesses in their information security policies and procedures.
For example, one agency did not have updated policies and procedures for configuring operating systems to ensure they provide the necessary detail for controlling and logging changes. Another agency had not established adequate policies or procedures to implement and maintain an effective departmentwide information security program or to address key OMB privacy requirements. Agencies also exhibited weaknesses in policies concerning security requirements for laptops, user access privileges, security incidents, certification and accreditation, and physical security. As a result, agencies have reduced assurance that their systems and the information they contain are sufficiently protected. Without policies and procedures that are based on risk assessments, agencies may not be able to cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each agency’s information system. FISMA requires each federal agency to develop plans for providing adequate information security for networks, facilities, and systems or groups of systems. According to NIST 800-18, system security planning is an important activity that supports the system development life cycle and should be updated as system events trigger the need for revision in order to accurately reflect the most current state of the system. The system security plan provides a summary of the security requirements for the information system and describes the security controls in place or planned for meeting those requirements. NIST guidance also indicates that all security plans should be reviewed and updated, if appropriate, at least annually. Further, appendix III of OMB Circular A-130 requires security plans to include controls for, among other things, contingency planning and system interconnections. System security plans were incomplete or out of date at several agencies. 
For example, one agency had an incomplete security plan for a key application. Another agency had developed a system security plan that covered only two of the six facilities we reviewed, and the plan was incomplete and not up to date. At another agency, 52 of the 57 interconnection security agreements listed in the security plan were not current because they had not been updated within 3 years. Without adequate security plans in place, agencies cannot be sure that they have the appropriate controls in place to protect key systems and critical information. Users of information resources can be one of the weakest links in an agency’s ability to secure its systems and networks. Therefore, an important component of an agency’s information security program is providing the required training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. Several agencies had not ensured that all information security employees and contractors, including those who have significant information security responsibilities, had received sufficient training. For example, users of one agency’s IT systems had not been trained to check for continued functioning of their encryption software after installation. At another agency, officials stated that several of its components had difficulty identifying and tracking all employees who have significant IT security responsibilities and thus were unable to ensure that those employees received the specialized training necessary to effectively perform their responsibilities. Without adequate training, users may not understand system security risks or their own role in implementing related policies and controls to mitigate those risks. Another key element of an information security program is testing and evaluating system controls to ensure that they are appropriate, effective, and comply with policies.
FISMA requires that agencies test and evaluate the information security controls of their major systems and that the frequency of such tests be based on risk but occur no less than annually. NIST requires agencies to ensure that the appropriate officials are assigned roles and responsibilities for testing and evaluating controls over their systems. Agencies did not always implement policies and procedures for performing periodic testing and evaluation of their information security controls. For example, one agency had not adequately tested security controls. Specifically, the tests of a major application and the mainframe did not identify or discuss the vulnerabilities that we had identified during our audit. The same agency’s testing did not reveal problems with the mainframe that could allow unauthorized users to read, copy, change, delete, and modify data. In addition, although testing requirements were stated in test documentation, the breadth and depth of the tests, as well as their results, had not always been documented. Also, agencies reported inconsistent testing of security controls among components. Without conducting the appropriate tests and evaluations, agencies have limited assurance that policies and controls are appropriate and working as intended. Additionally, there is an increased risk that undetected vulnerabilities could be exploited to allow unauthorized access to sensitive information.

Remedial Action Processes and Plans

FISMA requires that agencies’ information security programs include a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency. Since our 2007 FISMA report, we have continued to find weaknesses in agencies’ plans and processes for remedial actions. Agencies indicated that they had corrected or mitigated weaknesses; however, our work revealed that those weaknesses still existed.
In addition, the inspectors general at 14 of the 24 agencies reported weaknesses in the plans to document remedial actions. For example, at several agencies, the inspector general reported that weaknesses had been identified but not documented in the remediation plans. Inspectors general further reported that agency plans did not include all relevant information in accordance with OMB instructions. We also found that deficiencies had not been corrected in a timely manner. Without a mature process and effective remediation plans, the risk increases that vulnerabilities in agencies’ systems will not be mitigated in an effective and timely manner. Until agencies effectively and fully implement agencywide information security programs, federal data and systems will not be adequately safeguarded to prevent disruption, unauthorized use, disclosure, and modification. Further, until agencies implement our recommendations to correct specific information security control weaknesses, their systems and information will remain at increased risk of attack or compromise. In prior reports, we and inspectors general have made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. For example, we recommended that agencies correct specific information security deficiencies related to user identification and authentication, authorization, boundary protections, cryptography, audit and monitoring, physical security, configuration management, segregation of duties, and continuity of operations planning. We have also recommended that agencies fully implement comprehensive, agencywide information security programs by correcting weaknesses in risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. 
The effective implementation of these recommendations will strengthen the security posture at these agencies. Agencies have implemented or are in the process of implementing many of our recommendations. In March 2009, we reported on 12 key improvements suggested by a panel of experts as being essential to improving our national cyber security posture (see app. III). The expert panel included former federal officials, academics, and private-sector executives. Their suggested improvements are intended to address many of the information security vulnerabilities facing both private and public organizations, including federal agencies. Among these improvements are recommendations to develop a national strategy that clearly articulates strategic objectives, goals, and priorities and to establish a governance structure for strategy implementation. Due to increasing cyber security threats, the federal government has initiated several efforts to protect federal information and information systems. Recognizing the need for common solutions to improving security, the White House, OMB, and federal agencies have launched or continued several governmentwide initiatives that are intended to enhance information security at federal agencies. These key initiatives are discussed here. 60-day cyber review: The National Security Council and Homeland Security Council recently completed a 60-day interagency review intended to develop a strategic framework to ensure that federal cyber security initiatives are appropriately integrated, resourced, and coordinated with Congress and the private sector. The resulting report recommended, among other things, appointing an official in the White House to coordinate the nation’s cybersecurity policies and activities, creating a new national cybersecurity strategy, and developing a framework for cyber research and development. 
Comprehensive National Cybersecurity Initiative: In January 2008, President Bush began to implement a series of initiatives aimed primarily at improving the Department of Homeland Security and other federal agencies’ efforts to protect against intrusion attempts and anticipate future threats. While these initiatives have not been made public, the Director of National Intelligence stated that they include defensive, offensive, research and development, and counterintelligence efforts, as well as a project to improve public/private partnerships. The Information Systems Security Line of Business: The goal of this initiative, led by OMB, is to improve the level of information systems security across government agencies and reduce costs by sharing common processes and functions for managing information systems security. Several agencies have been designated as service providers for IT security awareness training and FISMA reporting. Federal Desktop Core Configuration: For this initiative, OMB directed agencies that have Windows XP deployed and plan to upgrade to Windows Vista operating systems to adopt the security configurations developed by the National Institute of Standards and Technology, Department of Defense, and Department of Homeland Security. The goal of this initiative is to improve information security and reduce overall IT operating costs. SmartBUY: This program, led by the General Services Administration, is to support enterprise-level software management through the aggregate buying of commercial software governmentwide in an effort to achieve cost savings through volume discounts. The SmartBUY initiative was expanded to include commercial off-the-shelf encryption software and to permit all federal agencies to participate in the program. The initiative is to also include licenses for information assurance. 
Trusted Internet Connections Initiative: This effort, directed by OMB and led by the Department of Homeland Security, is designed to optimize individual agency network services into a common solution for the federal government. The initiative is to facilitate the reduction of external connections, including Internet points of presence, to a target of 50. We currently have ongoing work that addresses the status, planning, and implementation efforts of several of these initiatives. Federal agencies reported increased compliance in implementing key information security control activities for fiscal year 2008; however, inspectors general at several agencies noted shortcomings with agencies’ implementation of information security requirements. OMB also reported that agencies were increasingly performing key activities. Specifically, agencies reported increases in the number and percentage of systems that had been certified and accredited, the number and percentage of employees and contractors receiving security awareness training, and the number and percentage of systems with tested contingency plans. However, the number and percentage of systems that had been tested and evaluated at least annually decreased slightly, and the number and percentage of employees who had significant security responsibilities and had received specialized training decreased significantly (see fig. 6). Consistent with previous years, inspectors general continued to identify weaknesses in the processes and practices agencies have in place to implement FISMA requirements. Although OMB took steps to clarify its reporting instructions to agencies for preparing fiscal year 2008 reports, the instructions did not request inspectors general to report on the effectiveness of agencies’ key activities and did not always provide clear guidance to inspectors general. Federal agencies rely on their employees to protect the confidentiality, integrity, and availability of the information in their systems.
It is critical for system users to understand their security roles and responsibilities and to be adequately trained to perform them. FISMA requires agencies to provide security awareness training to personnel, including contractors and other users of information systems that support agency operations and assets. This training should explain information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. In addition, agencies are required to provide appropriate training on information security to personnel who have significant security responsibilities. Agencies reported a slight increase in the percentage of employees and contractors who received security awareness training. According to agency reports, 89 percent of total employees and contractors had received security awareness training in 2008 compared to 84 percent of employees and contractors in 2007. While this change marks an improvement between fiscal years 2007 and 2008, the percentage of employees and contractors receiving security awareness training is still below the 91 percent reported for 2006. In addition, seven inspectors general reported disagreement with the percentage of employees and contractors receiving security awareness training reported by their agencies. Additionally, several inspectors general reported specific weaknesses related to security awareness training at their agencies; for example, one inspector general reported that the agency lacked the ability to document and track which system users had received awareness training, while another inspector general reported that training did not cover the recommended topics. Governmentwide, agencies reported a lower percentage of employees who had significant security responsibilities who had received specialized training. 
In fiscal year 2008, 76 percent of these employees had received specialized training compared with 90 percent of these employees in fiscal year 2007. Although the governmentwide percentage decreased, the majority of the 24 agencies reported increasing or unchanging percentages of employees receiving specialized training; 8 of the 24 agencies reported percentage decreases (see fig. 7). At least 12 inspectors general reported weaknesses related to specialized security training. One of the inspectors general reported that some groups did not have a training program for personnel who have critical IT responsibilities and another inspector general reported that the agency was unable to effectively track contractors who needed specialized training. Decreases in the number of individuals receiving specialized training at some federal agencies combined with continuing deficiencies in training programs could limit the ability of agencies to implement security measures effectively. Providing for the confidentiality, integrity, and availability of information in today’s highly networked environment is not an easy or trivial task. The task is made that much more difficult if each person who owns, uses, relies on, or manages information and information systems does not know or is not properly trained to carry out his or her specific responsibilities. An increasing number of inspectors general reported conducting annual independent evaluations in accordance with professional standards and provided additional information about the effectiveness of their agency’s security programs. FISMA requires agency inspectors general or their independent external auditors to perform an independent evaluation of the information security programs and practices of the agency to determine the effectiveness of the programs and practices. 
We have previously reported that the annual inspector general independent evaluations lacked a common approach and that the scope and methodology of the evaluations varied across agencies. We noted that there was an opportunity to improve these evaluations by conducting them in accordance with audit standards or a common approach and framework. In fiscal year 2008, 16 of 24 inspectors general cited using professional standards to perform the annual FISMA evaluations, up from 8 inspectors general who cited using standards the previous year. Of the 16 inspectors general, 13 reported performing evaluations that were in accordance with generally accepted government auditing standards, while the other 3 indicated using the “Quality Standards for Inspections” issued by the President’s Council on Integrity and Efficiency. The remaining eight inspectors general cited using internally developed standards or did not indicate whether they had performed their evaluations in accordance with professional standards. In addition, an increasing number of inspectors general provided supplemental information about their agency’s information security policies and practices. To illustrate, 21 of 24 inspectors general reported additional information about the effectiveness of their agency’s security controls and programs that was above and beyond what was requested in the OMB template, an increase from the 18 who had provided such additional information in their fiscal year 2007 reports. The additional information included descriptions of significant control deficiencies and weaknesses in security processes that provided additional context to the agency’s security posture. Although inspectors general reported using professional standards more frequently, their annual independent evaluations occasionally lacked consistency. For example:

Three inspectors general provided only template responses and did not identify the scope and methodology of their evaluations. (These three were also among those who had not reported performing their evaluations in accordance with professional standards.)

Descriptions of the controls evaluated, as documented in the scope and methodology sections, differed. According to their FISMA reports, a number of inspectors general stated that their evaluations included a review of policies and procedures, whereas others did not indicate whether policies and procedures had been reviewed. Similarly, multiple inspectors general indicated that technical vulnerability assessments had been conducted as part of the review, whereas others did not indicate whether such an assessment had been performed.

Eleven inspectors general indicated that their FISMA evaluations considered the results of previous information security reviews, whereas 13 did not indicate whether they considered other information security work, if any.

The development and use of a common framework or adherence to auditing standards could provide improved effectiveness, increased efficiency, quality control, and consistency in inspector general assessments. Although OMB has supported several governmentwide initiatives and provided additional guidance to help improve information security at agencies, opportunities remain for it to improve its annual reporting and oversight of agency information security programs. FISMA specifies that OMB, among other responsibilities, is to develop policies, principles, standards, and guidelines on information security and report to Congress not later than March 1 of each year on agencies’ implementation of FISMA. Each year, OMB provides instructions to federal agencies and their inspectors general for preparing their FISMA reports and then summarizes the information provided by the agencies and the inspectors general in its report to Congress.
Over the past 4 years, we have reported that, while the periodic reporting of performance measures for FISMA requirements and related analysis provides valuable information on the status and progress of agency efforts to implement effective security management programs, shortcomings in OMB’s reporting instructions limited the utility of the annual reports. Accordingly, we recommended that OMB improve reporting by clarifying reporting instructions; develop additional metrics that measure control effectiveness; request inspectors general to assess the quality of additional information security processes such as system test and evaluation, risk categorization, security awareness training, and incident reporting; and require agencies to report on additional key security activities such as patch management. Although OMB has taken some actions to enhance its reporting instructions, it has not implemented most of the recommendations, and thus further actions need to be taken to fully address them. In addition to the previously reported shortcomings, OMB’s reporting instructions for fiscal year 2008 did not sufficiently address several processes key to implementing an agencywide security program and were sometimes unclear. For example, the reporting instructions did not request inspectors general to provide information on the quality or effectiveness of agencies’ processes for developing and maintaining inventories, providing specialized security training, and monitoring contractors. For these activities, inspectors general were requested to report only on the extent to which agencies had implemented the activity but not on the effectiveness of those activities. Providing information on the effectiveness of the processes used to implement the activities could further enhance the usefulness of the data for management and oversight purposes. OMB’s guidance to inspectors general for rating agencies’ certification and accreditation processes was not clear. 
In its reporting instructions, OMB requests inspectors general to rate their agency’s certification and accreditation process using the terms “excellent,” “good,” “satisfactory,” “poor,” or “failing.” However, the reporting instructions do not define or identify criteria for determining the level of performance each rating represents. OMB also requests inspectors general to identify the aspect(s) of the certification and accreditation process they included or considered in rating the quality of their agency’s process. Examples OMB included were the security plan, system impact level, system test and evaluation, security control testing, incident handling, security awareness training, and security configurations (including patch management). While this information is helpful and provides insight into the scope of the rating, inspectors general were not requested to comment on the quality or effectiveness of these items. Additionally, not all inspectors general considered the same aspects in reviewing the certification and accreditation process, yet all could assign the same rating. Without clear guidelines for rating these processes, OMB and Congress may not have a consistent basis for comparing the progress of an agency over time or against other agencies. In its report to Congress for fiscal year 2008, OMB did not fully summarize the findings from the inspectors general’s independent evaluations or identify significant deficiencies in agencies’ information security practices. FISMA requires OMB to provide a summary of the findings of agencies’ independent evaluations and significant deficiencies in agencies’ information security practices. Inspectors general often document their findings and significant information security control deficiencies in reports that support their evaluations. However, OMB did not summarize and present this information in its annual report to Congress.
Most of the inspectors general’s information summarized in the annual report was taken from “yes” or “no” responses or from questions with a predetermined range of percentages, as stipulated by OMB’s reporting template. Thus, important information about the implementation of agency information security programs and the vulnerabilities and risks associated with federal information systems was not provided to Congress in OMB’s annual report. This information could be useful in determining whether agencies are effectively implementing information security policies, procedures, and practices. As a result, Congress may not be fully informed about the state of federal information security. OMB also did not approve or disapprove agencies’ information security programs. FISMA requires OMB to review agencies’ information security programs at least annually and approve or disapprove them. OMB representatives informed us that they review agencies’ FISMA reports and interact with agencies whenever an issue arises that requires their oversight. However, the representatives stated that they do not explicitly or publicly declare that an agency’s information security program has been approved or disapproved. As a result, a mechanism for holding agencies accountable for implementing effective programs was not used. Weaknesses in information security controls continue to threaten the confidentiality, integrity, and availability of the sensitive data maintained by federal agencies. These weaknesses, including those in access controls, configuration management, and segregation of duties, leave federal agency systems and information vulnerable to external as well as internal threats. The White House, OMB, and federal agencies have initiated actions intended to enhance information security at federal agencies.
However, until agencies fully and effectively implement information security programs and address the hundreds of recommendations that we and agency inspectors general have made, federal systems will remain at an increased and unnecessary risk of attack or compromise. Despite these weaknesses, federal agencies have continued to report progress in implementing key information security requirements. While NIST, inspectors general, and OMB have all made progress toward fulfilling their statutory requirements, the current reporting process does not produce information that accurately gauges the effectiveness of federal information security activities. OMB’s annual reporting instructions did not cover key security activities and were not always clear. Finally, OMB did not include key information about findings and significant deficiencies identified by inspectors general in its governmentwide report to Congress and did not approve or disapprove agency information security programs. Shortcomings in reporting and oversight can result in insufficient information being provided to Congress and diminish its ability to monitor and assist federal agencies in improving the state of federal information security. We recommend that the Director of the Office of Management and Budget take the following four actions:

- Update annual reporting instructions to request inspectors general to report on the effectiveness of agencies’ processes for developing inventories, monitoring contractor operations, and providing specialized security training.

- Clarify and enhance reporting instructions to inspectors general for certification and accreditation evaluations by providing them with guidance on the requirements for each rating category.

- Include in OMB’s report to Congress a summary of the findings from the annual independent evaluations and significant deficiencies in information security practices.

- Approve or disapprove agency information security programs after review.
In written comments on a draft of this report, the Federal Chief Information Officer (CIO) generally agreed with our overall assessment of information security at the agencies. He also identified actions that OMB is taking to clarify its reporting guidance and to consider more effective security performance metrics. These actions are consistent with the intent of two of our recommendations: that OMB clarify and enhance reporting instructions and request inspectors general to report on additional measures of effectiveness. The Federal CIO did not address our recommendation to include a summary of the findings and significant security deficiencies in OMB’s report to Congress, and he did not concur with our conclusion that OMB does not approve or disapprove agencies’ information security management programs on an annual basis. He indicated that OMB reviews all agency and inspector general FISMA reports annually, reviews quarterly information on the major agencies’ security programs, and uses this information and other reporting to evaluate agencies’ security programs. The Federal CIO advised that concerns are communicated directly to the agencies. We acknowledge that these are important oversight activities. However, as we reported, OMB did not demonstrate that it approved or disapproved agency information security programs, as required by FISMA. Consequently, a mechanism for holding agencies accountable for implementing effective programs is not being effectively used. We are sending copies of this report to the Office of Management and Budget and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.
In accordance with the Federal Information Security Management Act of 2002 (FISMA) requirement that the Comptroller General report periodically to Congress, our objectives were to evaluate (1) the adequacy and effectiveness of agencies’ information security policies and practices and (2) federal agency implementation of FISMA requirements. To assess the adequacy and effectiveness of agency information security policies and practices, we analyzed our related reports issued from May 2007 through April 2009. We also reviewed and analyzed the information security work and products of agency inspectors general. Further, we reviewed and summarized weaknesses identified in our reports and those of inspectors general using five major categories of information security controls: (1) access controls, (2) configuration management controls, (3) segregation of duties, (4) continuity of operations planning, and (5) agencywide information security programs. Our reports generally used the methodology contained in the Federal Information System Controls Audit Manual. We also examined information provided by the U.S. Computer Emergency Readiness Team (US-CERT) on reported security incidents. To assess the implementation of FISMA requirements, we reviewed and analyzed the provisions of the act and the mandated annual FISMA reports from the Office of Management and Budget (OMB), the National Institute of Standards and Technology (NIST), and the chief information officers and inspectors general of 24 major federal agencies for fiscal years 2007 and 2008. We also examined OMB’s FISMA reporting instructions and other OMB and NIST guidance. In addition, we held discussions with OMB representatives and with agency officials from NIST and the Department of Homeland Security’s US-CERT to further assess the implementation of FISMA requirements.
We did not verify the accuracy of the agencies’ responses; however, we reviewed supporting documentation that agencies provided to corroborate the information in their responses. We did not include systems categorized as national security systems in our review, nor did we review the adequacy or effectiveness of the security policies and practices for those systems. We conducted this performance audit from December 2008 to May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides such a basis.

In March 2009, we convened a panel of experts to discuss how to improve key aspects of the national cyber security strategy and its implementation, including areas for improvement. The experts, who included former federal officials, academics, and private-sector executives, highlighted 12 key improvements that are, in their view, essential to improving the strategy and our national cyber security posture. These improvements are in large part consistent with our previously mentioned reports and our extensive research and experience in this area.

In addition to the individual named above, Charles Vrabel (Assistant Director); Debra Conner; Larry Crosland; Sharhonda Deloach; Neil Doherty; Kristi Dorsey; Rosanna Guererro; Nancy Glover; Rebecca Eyler; Mary Marshall; and Jayne Wilson made key contributions to this report.

Cybersecurity: Continued Federal Efforts Are Needed to Protect Critical Systems and Information. GAO-09-835T. Washington, D.C.: June 25, 2009.

Privacy and Security: Food and Drug Administration Faces Challenges in Establishing Protections for Its Postmarket Risk Analysis System. GAO-09-355. Washington, D.C.: June 1, 2009.

Aviation Security: TSA Has Completed Key Activities Associated with Implementing Secure Flight, but Additional Actions Are Needed to Mitigate Risks. GAO-09-292. Washington, D.C.: May 13, 2009.

Information Security: Cyber Threats and Vulnerabilities Place Federal Systems at Risk. GAO-09-661T. Washington, D.C.: May 5, 2009.

Freedom of Information Act: DHS Has Taken Steps to Enhance Its Program, but Opportunities Exist to Improve Efficiency and Cost-Effectiveness. GAO-09-260. Washington, D.C.: March 20, 2009.

Information Security: Securities and Exchange Commission Needs to Consistently Implement Effective Controls. GAO-09-203. Washington, D.C.: March 16, 2009.

National Cyber Security Strategy: Key Improvements Are Needed to Strengthen the Nation’s Posture. GAO-09-432T. Washington, D.C.: March 10, 2009.

Information Security: Further Actions Needed to Address Risks to Bank Secrecy Act Data. GAO-09-195. Washington, D.C.: January 30, 2009.

Information Security: Continued Efforts Needed to Address Significant Weaknesses at IRS. GAO-09-136. Washington, D.C.: January 9, 2009.

Nuclear Security: Los Alamos National Laboratory Faces Challenges in Sustaining Physical and Cyber Security Improvements. GAO-08-1180T. Washington, D.C.: September 25, 2008.

Critical Infrastructure Protection: DHS Needs to Better Address Its Cyber Security Responsibilities. GAO-08-1157T. Washington, D.C.: September 16, 2008.

Critical Infrastructure Protection: DHS Needs to Fully Address Lessons Learned from Its First Cyber Storm Exercise. GAO-08-825. Washington, D.C.: September 9, 2008.

Information Security: Actions Needed to Better Protect Los Alamos National Laboratory’s Unclassified Computer Network. GAO-08-1001. Washington, D.C.: September 9, 2008.

Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008.
Information Security: Federal Agency Efforts to Encrypt Sensitive Information Are Under Way, but Work Remains. GAO-08-525. Washington, D.C.: June 27, 2008.

Information Security: FDIC Sustains Progress but Needs to Improve Configuration Management of Key Financial Systems. GAO-08-564. Washington, D.C.: May 30, 2008.

Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008.

Information Security: TVA Needs to Enhance Security of Critical Infrastructure Control Systems and Networks. GAO-08-775T. Washington, D.C.: May 21, 2008.

Information Security: Progress Reported, but Weaknesses at Federal Agencies Persist. GAO-08-571T. Washington, D.C.: March 12, 2008.

Information Security: Securities and Exchange Commission Needs to Continue to Improve Its Program. GAO-08-280. Washington, D.C.: February 29, 2008.

Information Security: Although Progress Reported, Federal Agencies Need to Resolve Significant Deficiencies. GAO-08-496T. Washington, D.C.: February 14, 2008.

Information Security: Protecting Personally Identifiable Information. GAO-08-343. Washington, D.C.: January 25, 2008.

Information Security: IRS Needs to Address Pervasive Weaknesses. GAO-08-211. Washington, D.C.: January 8, 2008.

Veterans Affairs: Sustained Management Commitment and Oversight Are Essential to Completing Information Technology Realignment and Strengthening Information Security. GAO-07-1264T. Washington, D.C.: September 26, 2007.

Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems Are Under Way, but Challenges Remain. GAO-07-1036. Washington, D.C.: September 10, 2007.

Information Security: Sustained Management Commitment and Oversight Are Vital to Resolving Long-standing Weaknesses at the Department of Veterans Affairs. GAO-07-1019. Washington, D.C.: September 7, 2007.

Information Security: Selected Departments Need to Address Challenges in Implementing Statutory Requirements. GAO-07-528. Washington, D.C.: August 31, 2007.

Information Security: Despite Reported Progress, Federal Agencies Need to Address Persistent Weaknesses. GAO-07-837. Washington, D.C.: July 27, 2007.

Information Security: Homeland Security Needs to Immediately Address Significant Weaknesses in Systems Supporting the US-VISIT Program. GAO-07-870. Washington, D.C.: July 13, 2007.

Information Security: Homeland Security Needs to Enhance Effectiveness of Its Program. GAO-07-1003T. Washington, D.C.: June 20, 2007.

Information Security: Agencies Report Progress, but Sensitive Data Remain at Risk. GAO-07-935T. Washington, D.C.: June 7, 2007.

Information Security: Federal Deposit Insurance Corporation Needs to Sustain Progress Improving Its Program. GAO-07-351. Washington, D.C.: May 18, 2007.
For many years, GAO has reported that weaknesses in information security are a widespread problem that can have serious consequences--such as intrusions by malicious users, compromised networks, and the theft of intellectual property and personally identifiable information--and has identified information security as a governmentwide high-risk issue since 1997. Concerned by reports of significant vulnerabilities in federal computer systems, Congress passed the Federal Information Security Management Act of 2002 (FISMA), which authorized and strengthened information security program, evaluation, and reporting requirements for federal agencies. In accordance with the FISMA requirement that the Comptroller General report periodically to Congress, GAO's objectives were to evaluate (1) the adequacy and effectiveness of agencies' information security policies and practices and (2) federal agencies' implementation of FISMA requirements. To address these objectives, GAO analyzed agency, inspectors general, Office of Management and Budget (OMB), and GAO reports. Persistent weaknesses in information security policies and practices continue to threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of most federal agencies. Recently reported incidents at federal agencies have placed sensitive data at risk, including the theft, loss, or improper disclosure of personally identifiable information of Americans, thereby exposing them to loss of privacy and identity theft. For fiscal year 2008, almost all 24 major federal agencies had weaknesses in information security controls. An underlying reason for these weaknesses is that agencies have not fully implemented their information security programs. 
As a result, agencies have limited assurance that controls are in place and operating as intended to protect their information resources, thereby leaving them vulnerable to attack or compromise. In prior reports, GAO has made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. Federal agencies reported increased compliance in implementing key information security control activities for fiscal year 2008; however, inspectors general at several agencies noted shortcomings with agencies' implementation of information security requirements. Agencies reported increased implementation of control activities, such as providing awareness training for employees and testing system contingency plans. However, agencies reported decreased levels of testing security controls and training for employees who have significant security responsibilities. In addition, inspectors general at several agencies disagreed with the performance reported by their agencies and identified weaknesses in the processes used to implement these activities. Further, although OMB took steps to clarify its reporting instructions to agencies for preparing fiscal year 2008 reports, the instructions did not request inspectors general to report on the effectiveness of agencies' key activities and did not always provide clear guidance to inspectors general. As a result, the reporting may not adequately reflect agencies' implementation of the required information security policies and procedures.
Since the Social Security Act became law in 1935, workers have had the right to review their earnings records on file at SSA to ensure that they are correct. In 1988, SSA introduced the PEBES to better enable workers who requested such information to review their earnings records and obtain benefit estimates. According to SSA, less than 2 percent of workers who pay Social Security taxes request these statements each year. SSA plans to mail statements automatically to more than 70 million workers. By providing these statements, SSA’s goals are to (1) better inform the public of benefits available under SSA’s programs, (2) assist workers in planning for their financial future, and (3) better ensure that Social Security earnings records are complete and accurate. Correcting earnings records benefits both SSA and the public because early identification and correction of errors in earnings records can reduce the time and cost required to correct them years later when an individual files for retirement benefits. Issuing the PEBES is a significant initiative for SSA. The projected cost of more than $80 million in fiscal year 2000 includes $56 million for production costs, such as printing and mailing the statement, and $24 million for personnel costs. SSA estimates that 608 staff-years will be required to handle the PEBES workload in fiscal year 2000: SSA staff are needed to prepare the statements, investigate discrepancies in workers’ earnings records, and respond to public inquiries. Since the PEBES was first developed, SSA has conducted several small-scale and national surveys to assess the general public’s reaction to receiving an unsolicited PEBES. In addition, SSA has conducted a series of focus groups to elicit the public’s and SSA employees’ opinions of the statement and what parts of it they did and did not understand.
When SSA learned that many people were interested in the effect of early retirement on their benefits, SSA added an estimate for retirement at age 62. Overall public reaction to receiving an unsolicited PEBES has been consistently favorable. In a nationally representative survey conducted during a 1994 pilot test, the majority of respondents indicated they were glad to receive their statements. In addition, 95 percent of the respondents said the information provided was helpful to their families. Overall, older individuals reacted more favorably to receiving a PEBES than did younger individuals. In addition, SSA representatives who answer the toll-free telephone calls from the public have stated that most callers are pleased that they received a PEBES and say that the information is useful for financial planning. Although SSA has taken steps to improve the PEBES, we found that the current statement still provides too much information, which may overwhelm the reader, and presents the information in a way that undermines its usefulness. These weaknesses are attributable, in part, to the process SSA used to develop the PEBES. Additional information and expanded explanations have made the statement longer, but some explanations still confuse readers. Moreover, SSA has not tested for reader comprehension and has not collected detailed information from its front-line workers on the public’s response to the PEBES. If readers need explanations to understand complex information, the explanations should appear with the information. Readers also need explanations of complex programs and benefits in the simplest and most straightforward language possible. In the 1996 PEBES, the message from the Commissioner of Social Security does not clearly explain why SSA is providing the statement.
Although the message does include information on the statement’s contents and the need for individuals to review the earnings recorded by SSA, its presentation is uninviting, according to the design expert we consulted. More specifically, the type is too dense; the lines are too long; white space is lacking; and the key points are not highlighted. If the PEBES’ recipients do not read the Commissioner’s message, they may not understand why reviewing the statement is important. The message also attempts to reassure people that the Social Security program will be there when they need it with the following reference (from the 1996 PEBES) to the system’s solvency: The Social Security Board of Trustees projects that the system will continue to have adequate resources to pay benefits in full for more than 30 years. This means that there is time for the Congress to make changes needed to safeguard the program’s financial future. I am confident these actions will result in the continuation of the American public’s widespread support for Social Security. Some participants in SSA focus groups, however, thought the message suggested that the resources would not necessarily be there after 30 years. For example, one participant in a 1994 focus group reviewing a similar Commissioner’s message said, “. . . first thing I think about when I read the message is, [it] is not going to be there for me.” To help readers navigate the current statement, some focus group participants and benefit experts suggested that SSA add an index or a table of contents. SSA has not used the best layout and design to help the reader identify the most important points and move easily from one section to the next. The organization of the statement is not clear at a glance. Readers cannot immediately grasp what the sections of the statement are, and in which order they should read them, according to the design expert with whom we consulted.
The statement lacks effective use of features such as bulleting and highlighting that would make it more user friendly. In addition, the PEBES is disorganized: information does not appear where it is needed. The statement has a patchwork of explanations scattered throughout, causing readers to flip repeatedly from one page to another to find needed information. For example, page two begins by referring the reader to page four, and page three contains six references to information on other pages. Furthermore, to understand how the benefit estimates were developed and any limitations to these estimates, a PEBES recipient must read explanations spread over five pages. Because the explanations of the benefit estimates are spread over several pages, readers may miss important information. This is especially true for people whose benefits are affected by special circumstances, which SSA does not take into consideration in developing PEBES benefit estimates. For example, the PEBES estimate is overstated for federal workers who are eligible for both Civil Service Retirement System and Social Security benefits. For these workers, the law requires a reduction in their Social Security retirement or disability benefits according to a specific formula. In 1996, this reduction may be as much as $219 per month; however, PEBES benefit estimates do not reflect this reduction. The benefit estimate appears on page three; the explanation of the possible reduction does not appear until the bottom of page five. Without fully reviewing this additional information, a reader may not realize that the PEBES benefit estimate could be overstated. Because the PEBES addresses complex programs and issues, explaining these points in simple, straightforward language is challenging.
Although SSA made changes to improve the explanation of work credits, for example, many people still do not understand what these credits are, the relevance of the credits to their benefits, and how they are accumulated. The public also frequently asks questions about the PEBES’ explanation of family benefits. Family benefits are difficult to calculate and explain because the amount depends on several different factors, such as the age of the spouse and the spouse’s eligibility for benefits on his or her own work record. Informing the public about family benefits, however, is especially important: a 1995 SSA survey revealed that as much as 40 percent of the public is not aware of these benefits. A team of representatives from a cross section of SSA offices governed SSA’s decisions on the PEBES’ development, testing, and implementation. The team revised and expanded the statement in response to feedback on individual problems. The design expert we consulted observed that the current statement “appears to have been the result of too many authors, without a designated person to review the entire piece from the eyes of the readers. It seems to have developed over time, piecemeal . . . .” In addition, the information SSA collects does not provide sufficient detail for the agency to understand the problems people are having with the PEBES. Although the public and benefit experts agree that the current statement contains too much information, no standard benefit statement model exists in the public or private sector, nor is there a clear consensus on how best to present benefit information. The Canadian government chose to use a two-part document when it began sending out unsolicited benefit statements in 1985. The Canada Pension Plan’s one-page statement provides specific individual information, including the earnings record and benefit estimates. A separate brochure details the program explanations.
The first time the Plan mails the statement, it sends both the one-page individual information and the detailed brochure; subsequent mailings contain only the single page with the individual information. Although some focus group participants and benefit experts prefer a two-part format, others believe that all information should remain in a single document, fearing that statement recipients will lose or might not read the separate explanations. SSA has twice tested the public’s reaction to receiving two separate documents. On the basis of a 1987 focus group test, SSA concluded that it needed to either redesign the explanatory brochure or incorporate the information into one document. SSA chose the latter approach. In a 1994 test, people indicated that they preferred receiving one document; however, the single document SSA used in the test had less information and a more readable format than the current PEBES. SSA, through the Government Printing Office, has awarded a 2-year contract for printing the fiscal years 1997 and 1998 statements. These statements will have the same format as the current PEBES with only a few wording changes. SSA is planning a more extensive redesign of the PEBES for the fiscal year 1999 mailings but only if it will save money on printing costs. By focusing on reduced printing costs as the main reason for redesigning the PEBES, SSA is overlooking the hidden costs of the statement’s existing weaknesses. For example, if people do not understand why they got the statement or have questions about information provided in the statement, they may call or visit SSA, creating more work for SSA staff. Furthermore, if the PEBES frustrates or confuses people, it could undermine public confidence in SSA and its programs. Our work suggests, and experts agree, that the PEBES’ value could be enhanced by several changes. 
Yet SSA’s redesign team is focusing on reducing printing costs without considering all of the factors that would ensure that PEBES is a cost-effective document. The PEBES initiative is an important step in better informing the public about SSA’s programs and benefits. To improve the statement, SSA can quickly make some basic changes. For example, SSA officials told us that, on the basis of our findings, they have revised the Commissioner’s message for the 1997 PEBES to make it shorter and less complex. More extensive revisions are needed, however, to ensure that the statement communicates effectively. SSA will need to start now to complete these changes before its 1999 redesign target date. The changes include improving the layout and design and simplifying certain explanations. These revisions will require time to collect data and to develop and test alternatives. SSA can help ensure that the changes target the most significant weaknesses by systematically obtaining more detailed feedback from front-line workers. SSA could also ensure that the changes clarify the statement by conducting formal comprehension tests with a sample of future PEBES recipients. In addition, we believe SSA should evaluate alternative formats for communicating the information presented in PEBES. For example, SSA could present the Commissioner’s message in a separate cover letter accompanying the statement, or SSA could consider a two-part option, similar to the approach of the Canada Pension Plan. To select the most cost-effective option, SSA needs to collect and assess additional cost information on options available and test different PEBES formats. Our work suggests that improving PEBES will demand attention from SSA’s senior leadership. For example, how best to balance the public’s need for information with the problems resulting from providing too much information are too difficult and complex to resolve without senior-level SSA involvement. Mr. 
Chairman, this concludes my formal remarks. I would be happy to answer any questions from you and other members of the Subcommittee. Thank you. For more information on this testimony, please call Diana S. Eisenstat, Associate Director, Income Security Issues, at (202) 512-5562 or Cynthia M. Fagnoni, Assistant Director, at (202) 512-7202. Other major contributors include Evaluators Kay Brown, Nora Perry, and Elizabeth Jones.
GAO discussed the Social Security Administration's (SSA) Personal Earnings and Benefit Estimate Statement (PEBES). GAO noted that: (1) the public has reacted favorably to unsolicited PEBES, and SSA has improved the statement in response to public feedback; (2) the public generally feels that the statement is a valuable tool for retirement planning, but the statement does not clearly convey its purpose and related information on SSA programs and benefits; (3) PEBES weaknesses have resulted from its piecemeal development and the lack of testing for comprehension; (4) there is no consensus on the best model for PEBES; (5) SSA plans to redesign PEBES only if the redesign results in lower printing costs; (6) this approach fails to recognize the hidden costs arising from the need to answer public inquiries about statement information and the undermining of public confidence in SSA programs by the statement's poor design; (7) SSA needs to improve PEBES layout and design and simplify certain explanations, obtain more detailed feedback from its frontline workers, conduct comprehension tests, and consider alternative statement formats; and (8) SSA senior management attention is needed to ensure the success of the statement initiative by redesigning PEBES to present benefits information more effectively.
The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard has a variety of responsibilities, including port security and vessel escort, search and rescue, and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels, aircraft, and information technology programs. Since 2001, we have reviewed the Deepwater Program and reported to Congress, DHS, and the Coast Guard on the risks and uncertainties inherent in this program. In our July 2010 report, we found that DHS and Coast Guard acquisition policies and processes continued to evolve, further establishing the Coast Guard as systems integrator, and that the Coast Guard continued to improve its acquisition workforce and develop means to further reduce vacancies. We also found that as the Coast Guard’s understanding of the assets evolved, achievement of the DHS-approved May 2007 acquisition program baseline of $24.2 billion for the Deepwater Program was not feasible due to cost growth and schedule delays. We concluded that while the Coast Guard had deepened its understanding of the resources needed and capabilities required on an asset level, the Coast Guard had not revalidated its system-level requirements and lacked the analytical framework needed to inform Coast Guard and DHS decisions about asset trade-offs in the future. At the start of the Deepwater Program in the late 1990s, the Coast Guard chose to use a system-of-systems acquisition strategy. A system-of-systems is a set or arrangement of assets that results when independent assets are integrated into a larger system that delivers unique capabilities. The Coast Guard contracted with ICGS in June 2002 to be the systems integrator for Deepwater and provided ICGS with broad, overall performance specifications—such as the ability to interdict illegal immigrants—and ICGS determined the assets needed and their specifications.
According to Coast Guard officials, ICGS submitted and priced its proposal as a package; that is, the Coast Guard bought the entire solution and could not reject any individual component. In 2002, the Coast Guard conducted a performance gap analysis that determined the Deepwater fleet as designed by ICGS would have significant capability gaps in meeting emerging mission requirements following the September 11, 2001, terrorist attacks. The Coast Guard decided, due to fiscal constraints, not to make significant changes to the ICGS-planned Deepwater fleet, but did approve several asset capability changes. Following these changes, the Coast Guard submitted a revised cost, schedule, and performance baseline for the overall Deepwater Program to DHS in November 2006. The new baseline established the total acquisition cost of the ICGS solution at $24.2 billion and projected the Coast Guard would complete the acquisition in 2027. DHS approved the baseline in May 2007, shortly after the Coast Guard—acknowledging that it had relied too heavily on contractors to do the work of the government and that government and industry had failed to control costs—announced its intention to take over the role of systems integrator. With limited insight into how ICGS’s planned fleet would meet overall mission needs, the Coast Guard has acknowledged challenges in justifying the proposed capabilities and making informed decisions about possible trade-offs. In October 2008, the capabilities directorate initiated a fleet mix analysis intended to be a fundamental reassessment of the capabilities and mix of assets the Coast Guard needs to fulfill its Deepwater mission. As we reported last year, officials stated that this analysis did not impose fiscal constraints on the outcome and, therefore, the results were not feasible. As a result of discussions with DHS, the Coast Guard started a second, cost-constrained analysis—fleet mix analysis phase 2.
Figure 1 provides a time line of key events in the Deepwater Program. Key directorates involved in the management of the Deepwater Program include the capabilities, resources, C4 and information technology, and acquisition directorates. Most of the Deepwater assets are considered major acquisitions, as outlined in the Coast Guard’s Major Systems Acquisition Manual. Acquisitions with life-cycle cost estimates equal to or greater than $1 billion are considered level I, and those with cost estimates from $300 million to less than $1 billion are considered level II. These major acquisition programs are to receive oversight from DHS’s acquisition review board, which is responsible for reviewing acquisitions for executable business strategies, resources, management, accountability, and alignment with strategic initiatives. The Coast Guard provides oversight to programs that have life-cycle cost estimates less than $300 million (level III). Table 1 describes in more detail the assets the Coast Guard plans to buy or upgrade under the Deepwater Program, the associated investment level if known, and planned and delivered quantities. DHS’s acquisition review board not only provides oversight for major acquisition programs, but also supports the department’s Acquisition Decision Authority in determining the appropriate direction for an acquisition at key Acquisition Decision Events (ADE). At each ADE, the Acquisition Decision Authority approves acquisitions to proceed through the acquisition life-cycle phases upon satisfaction of applicable criteria. Additionally, Component Acquisition Executives at the Coast Guard and other DHS components are responsible in part for managing and overseeing their respective acquisition portfolios, as well as approving level III systems acquisitions. The DHS four-phase acquisition process is:

• Need phase—define a problem and identify the need for a new acquisition. This phase ends with ADE-1, which validates the need for a major acquisition program.

• Analyze/Select phase—identify alternatives and select the best option. This phase ends with ADE-2A, which approves the acquisition to proceed to the obtain phase and includes the approval of the acquisition program baseline.

• Obtain phase—develop, test, and evaluate the selected option and determine whether to approve production. During the obtain phase, ADE-2B approves a discrete segment if an acquisition is being developed in segments and ADE-2C approves low-rate initial production. This phase ends with ADE-3, which approves full-rate production.

• Produce/Deploy/Support phase—produce and deploy the selected option and support it throughout the operational life cycle.

Figure 2 depicts where level I and II Deepwater assets currently fall within these acquisition phases and decision events. The Deepwater Program as a whole continues to exceed the cost and schedule baselines approved by DHS in May 2007, but several factors preclude a solid understanding of the true cost and schedule of the program. The Coast Guard has developed baselines for some assets, most of which have been approved by DHS, that indicate the estimated total acquisition cost could be as much as $29.3 billion, or about $5 billion over the $24.2 billion baseline. But additional cost growth is looming because the Coast Guard has yet to develop revised baselines for all the Deepwater assets, including the Offshore Patrol Cutter (OPC)—the largest cost driver in the Deepwater Program. In addition, the Coast Guard’s most recent 5-year budget plan, included in DHS’s fiscal year 2012 budget request, indicates further cost and schedule changes not yet reflected in the asset baselines. The reliability of the cost estimates and schedules for selected assets is also undermined because the Coast Guard did not follow key best practices for developing these estimates.
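The phase-to-decision-event progression described above can be expressed as a simple ordered lookup. This is only an illustrative sketch of the sequence as this report describes it; the data structure and function names are hypothetical, not Coast Guard or DHS software.

```python
# Illustrative sketch of the DHS four-phase acquisition life cycle
# described above. Phase names and decision events come from the report;
# the lookup structure itself is hypothetical.

ACQUISITION_PHASES = [
    # (phase, decision events occurring in or ending the phase)
    ("Need", ["ADE-1"]),
    ("Analyze/Select", ["ADE-2A"]),
    ("Obtain", ["ADE-2B", "ADE-2C", "ADE-3"]),
    ("Produce/Deploy/Support", []),  # operational life cycle; no further ADEs
]

def decision_events(phase):
    """Return the decision events associated with a given phase."""
    for name, events in ACQUISITION_PHASES:
        if name == phase:
            return events
    raise ValueError("unknown phase: " + phase)
```

For example, an asset in the obtain phase still faces ADE-2B, ADE-2C, and ADE-3 before full-rate production.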
Coast Guard and DHS officials agree that the annual funding needed to support all approved Deepwater baselines exceeds current and expected funding levels in this fiscal climate. This contributes to churn in program baselines when programs are not able to execute schedules as planned. The Coast Guard’s acquisition directorate has developed several action items to help address this mismatch by prioritizing acquisition program needs, but these action items have not been adopted across the Coast Guard. The estimated total acquisition cost of the Deepwater Program, based on approved program baselines as of May 2011, could be as much as approximately $29.3 billion, or about $5 billion more than the $24.2 billion baseline approved by DHS in 2007. This represents an increase of approximately 21 percent. As of May 2011, DHS had approved eight revised baselines from the 2007 program and the Coast Guard had approved two based on a delegation of approval authority from DHS. The increase in acquisition cost for these programs alone is about 43 percent. Table 2 compares each Deepwater asset’s acquisition cost estimate from the 2007 program baseline with revised baselines, if available. Coast Guard officials stated that some of the approved acquisition program baselines fall short of the true funding needs. This not only exacerbates the uncertainty surrounding the total cost of the Deepwater acquisition, but also contributes to the approved Deepwater Program no longer being achievable. For example, the NSC program’s approved baseline reflects a total acquisition cost of approximately $4.7 billion. However, Congress has already appropriated approximately $3.1 billion for the program and the Coast Guard’s fiscal years 2012-2016 capital investment plan indicates an additional $2.5 billion is needed through fiscal year 2016 for a total of $5.6 billion to complete the acquisition.
This would represent an increase of approximately 19 percent over the approved acquisition cost estimate for eight NSCs. According to section 575 of Title 14 of the U.S. Code, the Commandant must submit a report to Congress no later than 30 days after the Chief Acquisition Officer of the Coast Guard becomes aware of a likely cost overrun for any level I or level II acquisition program that will exceed 15 percent. If the likely cost overrun is greater than 20 percent, the Commandant must include a certification to Congress providing an explanation for continuing the project. Senior Coast Guard acquisition officials stated that they cannot corroborate a total cost of $5.6 billion for the NSC program, or a cost increase of 19 percent, because the Coast Guard has not yet completed a life-cycle cost analysis for the program. However, these officials stated that a certification to Congress for the NSC program is pending as well as one for the MPA program. We previously reported several schedule delays for assets based on the revised baselines and noted that as the Coast Guard reevaluates its baselines, it gains improved insight into the final delivery dates for all of the assets. While the Coast Guard’s revised baselines identify schedule delays for almost all of the programs, these baselines do not reflect the extent of some of these delays as detailed in the Coast Guard’s fiscal years 2012-2016 capital investment plan. For example, the MPA’s revised baseline has final asset delivery in 2020—a delay of 4 years from the 2007 baseline—but the capital investment plan indicates final asset delivery in 2025—an additional 5-year delay not reflected in the baseline. Coast Guard resource officials responsible for preparing this plan acknowledged that the final asset delivery dates in most of the revised baselines are not current. 
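The overrun percentages and statutory reporting thresholds discussed above reduce to simple arithmetic. As an illustrative check on the figures cited (a sketch of the 14 U.S.C. § 575 thresholds as this report describes them, not an official implementation; the dollar figures are the report’s own):

```python
# Hypothetical sketch of the cost-overrun reporting thresholds described
# above: a likely overrun above 15 percent triggers a report to Congress;
# above 20 percent, a certification is also required.

def overrun_pct(baseline, likely_cost):
    """Percentage by which the likely cost exceeds the approved baseline."""
    return (likely_cost - baseline) / baseline * 100

def required_action(baseline, likely_cost):
    pct = overrun_pct(baseline, likely_cost)
    if pct > 20:
        return "report and certification"
    if pct > 15:
        return "report"
    return "none"

# Figures from the report, in billions of dollars:
nsc_overrun = overrun_pct(4.7, 5.6)        # NSC: ~19 percent
program_overrun = overrun_pct(24.2, 29.3)  # Deepwater overall: ~21 percent
```

By this arithmetic the NSC’s projected 19 percent overrun clears the 15 percent reporting threshold, and the programwide 21 percent increase clears the 20 percent certification threshold, consistent with the pending notifications described above.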
The forthcoming delays identified in the fiscal years 2012-2016 capital investment plan indicate that the final asset delivery dates approved in the 2007 Deepwater baseline are no longer achievable for most assets. Figure 3 shows delays in final asset delivery dates according to (1) the 2007 baseline; (2) the asset’s revised baseline, if available; and (3) the fiscal years 2012-2016 capital investment plan submitted to Congress. Our analysis of selected assets’ life-cycle cost estimates found that the Coast Guard did not fully follow best practices for developing reliable life-cycle cost estimates, which is at the core of successfully managing a project within cost and affordability guidelines. The Major Systems Acquisition Manual cites our Cost Estimating and Assessment Guide as a source for guidance and best practice information. Furthermore, we found that the Coast Guard is not receiving reliable schedules for selected assets from its contractors, which should be inputs into a programwide schedule. We reviewed the MPA program’s life-cycle cost estimate and schedule because this program has the highest life-cycle cost estimate of all Deepwater assets and has experienced schedule delays. We also reviewed the NSC program’s schedule because this program has the second highest life-cycle cost estimate and has also experienced schedule delays. The Coast Guard was not able to provide us with a current NSC life-cycle cost estimate to review because the program is revising its estimate, an effort that was directed in a December 2008 DHS acquisition decision memorandum. Therefore, we reviewed the C4ISR program’s life-cycle cost estimate because the estimate was complete, but the program did not yet have a DHS-approved acquisition program baseline and there was uncertainty concerning the direction of the program. Reliable life-cycle cost estimates reflect four characteristics. They are (1) well-documented, (2) comprehensive, (3) accurate, and (4) credible.
These four characteristics encompass 12 best practices for reliable program life-cycle cost estimates that are identified in appendix III. The results of our review of the MPA and C4ISR life-cycle cost estimates are summarized in figure 4. Appendix III contains a more detailed discussion of the extent to which the two cost estimates met the four best practices criteria. While both life-cycle cost estimates addressed elements of best practices, their effectiveness is limited because they do not reflect the current program and have not been updated on a regular basis; regular updates are considered a best practice for an accurate cost estimate. For example, the MPA life-cycle cost estimate was completed in August 2009. While the Coast Guard has obtained actual costs, the program office has not updated the formal estimate with these actual costs. This limits the program’s ability to analyze changes in program costs and provide decision makers with accurate information. The Coast Guard did include a sensitivity analysis to identify cost drivers, but this analysis did not examine possible effects of funding cuts—an area of risk for the MPA program. The Coast Guard completed the C4ISR life-cycle cost estimate in December 2009. DHS reviewed this estimate, but did not validate it. We found that this estimate was minimally credible for several reasons, including that the program did not complete a sensitivity analysis of cost drivers—even though cost drivers were identified and major funding cuts occurred, which led to a program breach. C4ISR program officials told us that they are currently revising the 2009 estimate because it is no longer reflective of the current program. Coast Guard C4ISR officials agreed with our analysis and stated that they plan to incorporate the best practices going forward. We found that neither the MPA nor the NSC program is receiving schedule data from its contractors that fully meet schedule best practices.
Our guidance identifies nine interrelated scheduling best practices that are integral to a reliable and effective master schedule. For example, if the schedule does not capture all activities, there will be uncertainty about whether activities are sequenced in the correct order and whether the schedule properly reflects the resources needed to accomplish work. MPA and NSC contractor schedule data should feed into each program’s integrated master schedule in order to reliably forecast key program dates. However, the NSC program does not have an integrated master schedule that would account for all planned government and contractor efforts for the whole program. The program is currently managing a schedule for only the third cutter out of a total planned eight cutters. The MPA program does have an integrated master schedule which it updates with the contractor schedule data. However, our assessment found the contractor’s schedule for aircraft 12-14 is unreliable. Because an integrated master schedule is intended to connect all government and contractor schedule work, unreliable contractor schedule data will result in unreliable forecasted dates within the integrated master schedule. Figure 5 summarizes the results of our review of the MPA contractor’s schedule for aircraft 12-14 and the NSC 3 schedule. Appendix IV includes a detailed discussion of our analysis. As shown above, the MPA contractor’s schedule for aircraft 12-14 did not substantially or fully meet any of the nine best practices. Based on our discussions with the program manager, this condition stems, in part, from a lack of program management resources, as the program office does not have trained personnel to create and maintain a schedule. In addition, while program officials stated that they do conduct meetings to provide oversight on production and delivery schedules, it does not appear that management is conducting proper oversight of existing schedule requirements. 
Program officials stated that they were not interested in obtaining a detailed schedule, even though it is a deliverable in the production contract, because the MPA contract is fixed price and the contractor’s past delivery has been good. However, regardless of contract type, best practices call for a schedule to include all activities necessary for the program to be successfully completed. After we raised concerns about the Coast Guard paying for a detailed schedule that the program office does not plan to request or use, program officials told us that the contractor has been very responsive to the Coast Guard’s subsequent direction to update the schedule to incorporate best practices. They said the Coast Guard has modified the schedule reporting procedures so that the contractor will provide monthly reporting of the data. The NSC 3 schedule substantially met two best practices and partially met six best practices, but the program office did not conduct a schedule risk analysis to predict a level of confidence in meeting the completion date. The purpose of the analysis is to develop a probability distribution of possible completion dates that reflects the project and its quantified risks. This analysis can help project managers understand the most important risks to the project and focus on mitigating them. A schedule risk analysis will also calculate schedule reserve, which can be set aside for those activities identified as high risk. Without this reserve, the program faces the risk of delays to the scheduled completion date if any delays were to occur on critical path activities. Senior Coast Guard acquisition officials stated that the Coast Guard has high confidence in the projected delivery date and uses a full range of project tools, including the schedule, to project the delivery date.
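A schedule risk analysis of the kind described above is typically run as a Monte Carlo simulation over uncertain activity durations. The sketch below is purely illustrative: the activities and duration ranges are hypothetical, and this is not the Coast Guard’s or GAO’s method.

```python
import random

# Illustrative schedule risk analysis: simulate uncertain activity
# durations many times to build a probability distribution of possible
# completion dates. Activities and durations are hypothetical.

ACTIVITIES = [  # (activity, minimum, most likely, maximum) in weeks
    ("detail design", 20, 26, 40),
    ("hull construction", 50, 60, 90),
    ("outfitting and testing", 25, 30, 50),
]

def simulate_completion(trials=10_000, seed=1):
    """Return sorted total durations from repeated random trials."""
    random.seed(seed)
    return sorted(
        sum(random.triangular(lo, hi, mode) for _, lo, mode, hi in ACTIVITIES)
        for _ in range(trials)
    )

def confidence(totals, weeks):
    """Fraction of trials finishing within the given number of weeks."""
    return sum(t <= weeks for t in totals) / len(totals)

totals = simulate_completion()
p80 = totals[int(0.8 * len(totals))]  # 80th-percentile completion date
# Schedule reserve: gap between the 80-percent-confidence date and the
# deterministic sum of most-likely durations.
deterministic = sum(mode for _, _, mode, _ in ACTIVITIES)
reserve = p80 - deterministic
```

The resulting distribution yields both a confidence level for any proposed completion date and a schedule reserve to set aside against high-risk activities, which is what the NSC 3 program lacked.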
Collectively, though, we found that the failure to meet the nine best practices for the NSC 3 program integrated master schedule increases the risk of schedule slippages and related cost overruns and makes meaningful measurement and oversight of program status and progress, as well as accountability for results, difficult to achieve. Coast Guard and DHS officials agreed that the annual funding needed to support all approved Deepwater acquisition program baselines exceeds current and expected funding levels, particularly in this constrained fiscal climate. For example, Coast Guard acquisition officials stated that up to $1.9 billion per year would be needed to support the approved Deepwater baselines, but they expect Deepwater funding levels to be closer to $1.2 billion annually over the next several years. Therefore, the Coast Guard is managing a portfolio—which includes many revised baselines approved by DHS—that is expected to cost more than what its annual budget will likely support. Our previous work on Department of Defense (DOD) acquisitions shows that when agencies commit to more programs than resources can support, unhealthy competition for funding is created among programs. This situation can lead to inefficient funding adjustments, such as moving money from one program to another or deferring costs to the future. When a program’s projected funding levels are lower than what the program was previously projected to receive, the program is more likely to have schedule breaches and other problems, as the program can no longer remain on the planned schedule. From September to October 2010, the Coast Guard reported potential baseline breaches to DHS for the C4ISR, HC-130H, and HH-60 programs that were caused, at least in part, by reduced funding profiles in the fiscal years 2011-2015 capital investment plan.
For example, in the fiscal years 2008 and 2009 capital investment plans, the Coast Guard had anticipated allocating 20-27 percent of its planned $1.1 billion fiscal year 2011 Deepwater budget to its aviation projects. In its actual fiscal year 2011 budget request, however, the Coast Guard only allocated about 9 percent of the $1.1 billion to aviation projects. The percentage of dollars allocated to surface projects increased—largely driven by an increase of dollars allocated to the FRC program. Figure 6 illustrates how the allocation of acquisition, construction, and improvements dollars in the Coast Guard’s budget request in fiscal years 2008, 2009, 2010, and 2011 differed from prior year plans. In the October 2010 Blueprint for Continuous Improvement (Blueprint), signed by the Commandant, the Coast Guard’s Assistant Commandant for Acquisition identified the need to develop and implement effective decision making to maximize results and manage risk within resource constraints. The Blueprint outlines several action items, expected to be completed by the end of fiscal year 2011, to accomplish this goal. The action items include:

• promoting stability in the Coast Guard’s capital investment plan by measuring the percentage of projects stably funded year to year in the plan;

• ensuring acquisition program baseline alignment with the capital investment plan by measuring the percentage of projects where the acquisition program baselines fit into the capital investment plan; and

• establishing Coast Guard project priorities.

Acquisition officials responsible for implementing the Blueprint action items acknowledged that successful implementation requires buy-in from leadership. Senior resource directorate officials responsible for capital investment planning told us that the action items in the Blueprint are “noble endeavors,” but that the directorates outside of the acquisition directorate are not held responsible for accomplishing them.
According to the Major Systems Acquisition Manual, the Component Acquisition Executive (Vice-Commandant), to whom both the acquisition and resource directorates report, is responsible for establishing acquisition processes to track the extent to which requisite resources and support are provided to project managers. In addition to the acquisition directorate’s recognition of the need to establish priorities to address known upcoming resource constraints, in August 2010, the Coast Guard’s flag-level Executive Oversight Council—chaired by the Assistant Commandant for Acquisition with representatives from other directorates—tasked a team to recommend strategies to revise acquisition program baselines to better align with annual budgets. This acknowledgment that program baselines must be revised to fit fiscal constraints, however, is not reflected in the Coast Guard’s most recent capital investment plan. Table 3 presents planned funding projections for Deepwater assets as outlined in the fiscal years 2012-2016 capital investment plan. With the exception of fiscal year 2012, the Coast Guard is planning for funding levels well above the expected funding level of $1.2 billion. This outyear funding plan seems unrealistic, especially in light of the rapidly building fiscal pressures facing our national government and DHS’s direction for future budget planning. To illustrate, in fiscal year 2015, the Coast Guard plans to request funding for construction of three major Deepwater surface assets: NSC, OPC, and FRC. The Coast Guard has never before requested funding for construction of three major Deepwater surface assets in the same year. In a recent testimony, the Commandant of the Coast Guard stated that the plan for fiscal year 2015 reflects the Coast Guard’s actual need for funding in that year. If program costs and schedules are tied to this funding plan and it is not executable, these programs will likely have schedule and cost breaches.
When a program has a breach, the program manager must develop a remediation plan that explains the circumstances of the breach, propose corrective action, and, if required, revise the acquisition program baseline. The Coast Guard continues to strengthen its acquisition management capabilities. As lead systems integrator, the Coast Guard is faced with several decisions to help ensure that the promised capabilities of assets still in design are achieved. For example, whether the planned system-of-systems design is achievable largely depends on the Coast Guard’s ability to make important decisions regarding the design of the C4ISR program, as the Coast Guard has continued to define and redefine its strategy for this program since 2007. For those assets already under construction and operational, preliminary tests have yielded mixed results and identified issues that need to be addressed prior to upcoming test events. As part of its role as lead systems integrator, the Coast Guard is gaining a better understanding of each asset’s cost, schedule, and technical risks, but this information is not always fully conveyed in the Coast Guard’s quarterly reports to Congress. The Coast Guard continues to strengthen its acquisition management capabilities in its role of lead systems integrator and decision maker for Deepwater acquisitions. We recently reported that the Coast Guard updated its Major Systems Acquisition Manual in November 2010 to better reflect best practices, in response to our prior recommendations, and to more closely align its policy with the DHS Acquisition Management Directive 102-01. We also reported that, according to the Coast Guard, it currently has 81 interagency agreements, memorandums of agreement, and other arrangements in place, primarily with DOD agencies, which help programs leverage DOD expertise and contracts.
To further facilitate the acquisition process, the Coast Guard’s Acquisition Directorate has increased the involvement of the Executive Oversight Council as a structured way for flag-level and senior executive officials in the requirements, acquisition, and resources directorates, among others, to discuss programs and provide oversight on a regular basis. In addition to these efforts to strengthen its management capabilities, the Coast Guard has significantly reduced its relationship with ICGS. ICGS’s remaining responsibilities include completing construction of the third NSC and a portion of the C4ISR project. In moving away from ICGS, the Coast Guard has awarded fixed-price contracts directly to prime contractors. For example, since our last report in July 2010, the Coast Guard: (1) awarded a sole-source, fixed-price contract for the fourth NSC and long lead materials for the fifth NSC to Northrop Grumman Shipbuilding Systems, (2) exercised fixed-price options for four additional FRCs on the contract with Bollinger Shipyards, and (3) awarded a fixed-price contract to EADS for three MPAs with options for up to six additional aircraft, following a limited competition in which EADS made the only offer. In addition, the Coast Guard has developed acquisition strategies intended to inject competition into future procurements where possible. For example, the Coast Guard is planning to buy a “reprocurement data licensing package” from Bollinger Shipyards. This information package, according to project officials, is expected to provide the Coast Guard with the specifications to allow full and open competition of future FRCs. Our previous work has shown that when the government owns technical specifications, it does not need to rely on one contractor to meet requirements.
As part of its acquisition strategy for the OPC, the Coast Guard plans to award multiple preliminary design contracts and then select the best-value design for a detailed design and production contract. This planned acquisition strategy will also include an option for a data and licensing package, similar to the FRC. In May 2011, the Coast Guard released a draft of the OPC specifications for industry review in advance of releasing a request for proposals, currently planned to occur in the fall of 2011. Lastly, the Coast Guard is in the process of holding a competition for the over-the-horizon cutter small boat through a small-business set-aside acquisition approach. Several Deepwater assets remain in the “analyze/select” or “need” phases of the Coast Guard’s acquisition process, which involve decisions that affect the system-of-systems design. At the start of our review, these included portions of the C4ISR project, OPC, cutter small boats, unmanned aircraft system, and portions of the HH-60 helicopter. The Deepwater Program was designed to improve the detection and engagement of potential targets in the maritime domain. Key to the Coast Guard’s success is engaging targets of interest, such as terrorist activity within the U.S. maritime domain. To do this, the Coast Guard goes through a process of surveying the maritime domain, detecting and classifying targets, and then responding to the situation. The planned system-of-systems design connects the Deepwater assets through a single command and control architecture—C4ISR—that is designed to increase the probability of mission success by improving the accuracy and speed of this process. For example, as envisioned, the MPA would conduct more efficient searches in conjunction with other assets.
During a search for a missing vessel, the MPA would receive information from the operational commander regarding the location of the distress signal and then communicate search information back to the commander and to other on-scene Coast Guard assets. The commander and the MPA could then increase the speed of the response by locating the closest available cutter and informing it of injuries and other issues. Figure 7 depicts the Deepwater concept of using information technology to more quickly and successfully execute missions. To achieve the system-of-systems design, the Coast Guard planned for C4ISR to be the integrating component of Deepwater. This was expected to improve mission performance by increasing the success rate and frequency of engaging targets. However, the $600 million ICGS-developed Coast Guard command and control system, currently on the NSC, MPA, and HC-130J, does not achieve the system-of-systems vision. Since taking over as lead systems integrator in 2007, the Coast Guard has changed its C4ISR strategy multiple times in an effort to achieve a common software system for all Deepwater assets that facilitates data sharing between these assets and external partners. But as the Coast Guard continues to change its strategy, decisions remain regarding how to achieve this promised capability in a feasible manner. These decisions relate to realizing the overall goal of sharing data between all of the Deepwater assets, creating and updating acquisition documents, and developing a strategy for designing and managing the C4ISR technical baselines. The Coast Guard has yet to achieve the promised capability of an interoperable system with communication and data sharing between all assets and may limit some of the planned capability. 
According to the approved Deepwater mission needs statement, data sharing, centralized networks, and information from sensors are critical for the Coast Guard and DHS to achieve mission performance in a resource-constrained world. While the Coast Guard has voice communications between assets, according to information technology officials, the currently operational Deepwater assets—NSC, MPA, HC-130J, HH-60, and HH-65—do not yet have the capability to fully share data with each other or with commanders. In addition, the Coast Guard has not fully established a centralized network for C4ISR, creating communications problems. For example, the NSC and MPA use classified systems to record and process C4ISR data, while the HC-130J and HC-130H have unclassified systems. According to operators, sharing data gathered by the MPA during the Deepwater Horizon oil spill incident was difficult because all information gathered by the MPA was maintained on a classified system. According to senior officials, the Coast Guard recognizes that classification issues inhibit fully sharing data and is working to address these issues through changes to Coast Guard policies, which have not been finalized. Furthermore, it is unclear whether full data interoperability between all assets remains a goal for the Deepwater program. Overall, according to the Coast Guard’s recent cost estimating baseline document, the C4ISR system will be installed on only 127 air and surface assets, which is fewer than half of the approximately 300 assets within the Deepwater acquisition. For example, senior acquisition officials stated that the helicopters are not going to be equipped with the C4ISR software that is planned to enable data sharing with commanders and other assets, but this has not yet been reflected in project documentation. 
A senior official with the information technology directorate questioned the extent to which the level of shared data communications as set forth in the mission needs statement would help the Coast Guard more efficiently achieve mission success because some Coast Guard assets, such as the cutters, rarely work in tandem. Additionally, project officials stated that the vision of full data-sharing capability between assets, depicted above, is transforming into a “hub and spoke” model in which assets share data with shore-based command centers that maintain the operating picture and maritime awareness; this also has yet to be detailed in project documentation. Given these uncertainties, the Coast Guard does not have a clear vision of the C4ISR required to meet its missions. The Coast Guard is also currently managing the C4ISR program without key acquisition documents, including an acquisition program baseline that reflects the planned program, a credible life-cycle cost estimate, and an operational requirements document for the entire program. The Coast Guard has replanned the C4ISR project baseline multiple times since 2007; under ICGS, the baseline contained only a high-level description of the system, with no requirements document to provide further detail. In November 2009, the Coast Guard submitted a revised baseline to DHS that provided some additional detail of the planned capabilities, including capabilities designed to protect the homeland, but also delayed development of these capabilities due to concerns about the reliability and affordability of the ICGS system. DHS approved the baseline in February 2011, but by that time it was out of date. For example, according to this baseline, the Coast Guard was planning to reach a milestone for developing improved capabilities on selected assets in early fiscal year 2010—an event that was indefinitely deferred before the baseline was approved and is now scheduled to take place no sooner than 2017. 
Coast Guard officials stated that a revised acquisition program baseline is currently being drafted. A key input into the acquisition program baseline is a credible life-cycle cost estimate, but the Coast Guard is currently revising the C4ISR estimate, and officials stated that the current cost estimate no longer reflects the current status of the program. An operational requirements document for the entire project has not yet been completed; project officials told us that requirements documents for portions of the system are in the review process or under development. However, the documents in review do not include C4ISR requirements for the OPC. C4ISR project officials stated that those requirements are included in the OPC’s operational requirements document, but acknowledged that these requirements are vague. In addition to inadequate or incomplete acquisition documentation, the Coast Guard also lacks the technical planning documents necessary both to articulate the vision of a common C4ISR baseline—a key goal of the C4ISR project—and to guide the development of the C4ISR system in such a way that the system on each asset remains true to the vision. While Coast Guard officials told us that their goal is still a common software baseline, we have identified at least four software variants in operation or under development whose commonality is not clear: (1) the legacy Coast Guard system that predates Deepwater, (2) the ICGS-developed Coast Guard command and control system (ICGS system), (3) a Coast Guard-developed command and control system called Seawatch, and (4) a forthcoming Seawatch-ICGS hybrid system for the NSC. The Coast Guard continues to maintain the legacy C4ISR system, which is operational on the 210-foot and 270-foot cutters, and maintains the ICGS system on the NSC, MPA, and HC-130J. The Coast Guard also planned to put the ICGS system on the 110-foot patrol boats that were to be converted to 123-foot boats. 
According to FRC program officials, after this conversion failed for structural reasons and the FRC program was accelerated to offset the loss of planned patrol boat capability, the Coast Guard planned to use the legacy C4ISR system for the FRC. However, due to obsolescence of the legacy system, the Coast Guard’s information technology directorate developed a new system called Seawatch for FRC. The Coast Guard has since decided to also incorporate Seawatch into the upgrades to the original ICGS system for NSCs five through eight and plans to do so for NSCs one through four, but this effort is currently not funded. Until this Seawatch-ICGS hybrid system is installed on the first four NSCs, the Coast Guard will have to maintain two systems for the NSC. Further, according to C4ISR project officials, the Coast Guard is currently analyzing the extent to which the Seawatch-ICGS hybrid system meets the requirements for the OPC. The C4ISR project has yet to identify a software system that will meet the requirements of the HC-130H, HH-60, and HH-65 aircraft and that is also compatible with surface assets. The Coast Guard is redesigning the ICGS system currently on the MPA and HC-130J to replace some parts that are now obsolete so that the Coast Guard can hold a competition for the system. The goal is to develop a common software baseline for the MPA and the HC-130J to address variations in the ICGS system currently on these assets. Once the Coast Guard finishes developing this common software baseline for the MPA and HC-130J, it will be a new baseline in addition to the four baselines identified above. While some officials in the capabilities directorate told us that Seawatch could become the common command and control system for the Coast Guard, Seawatch system developers in the information technology directorate told us that Seawatch is not currently suitable for aviation assets. 
Table 4 shows the software system currently installed on each asset and the anticipated system for the asset. According to Coast Guard information technology officials, the abundance of software baselines could increase the overall instability of the C4ISR system and the complexity of data sharing between assets. Moreover, additional baselines may continue to proliferate because each asset is now responsible for managing and funding technology obsolescence, as opposed to having a Coast Guard-wide technology obsolescence prevention program. From 2008 to 2010, the Coast Guard funded a technology obsolescence program to avoid costly C4ISR system replacements by proactively addressing out-of-date technology. For example, program officials stated that the Coast Guard established a uniform software baseline for 12 MPA mission system pallets under this program. The Coast Guard is currently developing a policy to manage obsolete technology now that the technology obsolescence program is no longer funded. Important decisions remain to be made regarding the OPC, the largest cost driver in the Deepwater program. DHS approved the OPC’s requirements document in October 2010 despite unresolved concerns about three key performance parameters—seakeeping, speed, and range—that shape a substantial portion of the cutter’s design. For example, DHS questioned the need for the cutter to conduct full operations during difficult sea conditions, a requirement that affects the weight of the cutter and ultimately its cost. The Coast Guard has stated that limiting the ability to conduct operations during difficult sea conditions would preclude operations in key mission areas. While it approved the OPC requirements document, DHS at the same time commissioned a study to further examine these three key performance parameters. 
According to Coast Guard officials, the study, conducted by the Center for Naval Analysis, found that the three key performance parameters were reasonable, accurate, and adequately documented. By approving the operational requirements document before these factors were resolved, DHS did not ensure that the cutter’s requirements were affordable, feasible, and unambiguous and that no additional trade-off decisions were required, as outlined in the Major Systems Acquisition Manual. Our previous work on DHS acquisition management found that the department’s inability to properly execute its oversight function has led to cost overruns, schedule delays, and assets that do not meet requirements. In addition to the three performance parameters discussed above, other decisions with substantial cost and capability implications for the OPC remain unresolved. For example, it is not known which C4ISR system will be used for the OPC, whether the cutter will have a facility for processing classified information, and whether the cutter will have air search capabilities. The Coast Guard’s requirements document addressed these capabilities but allowed them to be removed if design, cost, or technological limitations warrant. According to Coast Guard officials, remaining decisions must be made before the acquisition program baseline is approved as part of the program’s combined acquisition decision event 2A/B and the request for proposals is issued, both of which are planned for the fall of 2011. In addition, following the approval of the requirements document, the Coast Guard formed a ship design team tasked with considering the affordability and feasibility of the OPC. This team has met with Assistant Commandants from across the Coast Guard on several occasions to discuss issues that affect the affordability and feasibility of the cutter, including, among others, the size of the living quarters, the aviation fuel storage capacity, and the range of the cutter. 
The Coast Guard has stated that affordability is a very important aspect of the OPC project and that the request for proposal process will inform the project’s efforts to balance affordability and capability. The cutter small boats project was delayed when the initial ICGS plan was halted due to unrealistic requirements, as we have reported in the past. The Coast Guard has since made decisions on providing small boats for the NSC, but key decisions remain regarding the Coast Guard’s overall strategy for buying a standard cutter small boat fleet, including quantities. According to project officials, a standard cutter boat fleet is an important capability for the Coast Guard because it permits shared training and maintenance and allows for sharing small boats among the larger cutters, potentially reducing acquisition and maintenance costs. There are two types of cutter small boats that the Coast Guard plans to use to engage targets—a 36-foot version launched from the NSC and potentially the future OPC, and a 25-foot version planned for the three largest Deepwater cutters: NSC, OPC, and FRC. Following the failure of ICGS’s cutter small boats, the Coast Guard identified requirements for the cutter small boat project to supply the three large cutters with at least 135 small boats. However, in August 2010, DHS changed the project to a nonmajor acquisition after the Coast Guard downsized the scope of the project to only 27 cutter small boats—a mix of 25-foot and 36-foot boats—for the NSC, thus lowering the life-cycle cost for the project. As a result, the program is no longer subject to DHS’s review or independent testing. 
Project officials told us that despite this change in quantities, a standard cutter boat for all three cutters nevertheless remains a key goal; in fact, the current 25-foot small boat project plan recognizes the potential for the project to buy up to 101 small boats, which includes the ability for other DHS components to buy boats off of that contract. The project plan for the 36-foot boats is not yet complete. If the Coast Guard still intends to buy a standard cutter boat fleet, depending on the mix, the life-cycle cost of the project could mean the project is actually a major system acquisition subject to DHS review. The UAS was envisioned as a key component of the Deepwater system that would enhance surveillance capability on board the NSC and OPC and also from land. Congress has appropriated over $100 million since 2003 to develop an unmanned aerial vehicle, but the Coast Guard terminated the program in June 2007 due to cost increases and technical risks. In February 2009, DHS approved a strategy for the Coast Guard to acquire UASs, but the Coast Guard has not yet decided what specific solutions are required to perform operations. Lead asset delivery was originally scheduled for 2008, but the Coast Guard is waiting until Navy technology for cutter-based UASs advances and is partnering with Customs and Border Protection for use of the maritime land-based UAS, Guardian. There are some indications that the Coast Guard UAS program will continue to incur substantial delays. For example, there is currently no funding for the program in the Coast Guard’s fiscal years 2012-2016 capital investment plan, and the Coast Guard does not expect the C4ISR software that would enable the UAS to share data with other assets to be ready for operations until 2024. Until the Coast Guard buys UASs, the planned capability of the major cutter fleet is limited. 
Without a UAS, for instance, the DHS Inspector General estimates that the aerial surveillance capability of the NSC is reduced from 58,160 square nautical miles to 18,320 square nautical miles, a 68 percent decline. The HH-60 project office is continuing to make progress upgrading the Coast Guard’s largest helicopter, but decisions remain concerning the extent to which the Coast Guard will use the helicopter for surveillance. According to the current acquisition program baseline, the Coast Guard plans to replace the existing weather radar on the HH-60 with a surface search radar to improve detection and classification capabilities. The project office originally planned to begin this work in fiscal year 2006 but is now planning to begin the work in fiscal year 2012. Officials at the Coast Guard’s Aviation Logistics Center, where helicopter depot maintenance is conducted, stated that funding for the workforce currently conducting the upgrades on the HH-60 will expire in the summer of 2014. These officials expressed concern that if the Coast Guard delays the surface search radar work further, there will be a loss of learning on the production line, leading to an increase in the cost of the project due to production restart costs. Furthermore, project officials told us that the Coast Guard is developing a preliminary operational requirements document that will address requirements for the HH-60’s C4ISR capabilities. These remaining decisions for the HH-60 will shape the extent to which the helicopter shares information collected by the surface search radar with operational commanders and other Coast Guard assets. None of the Deepwater assets have completed initial operational test and evaluation, a major test event that identifies deficiencies by evaluating operational effectiveness during the execution of simulated operational missions. The NSC, MPA, and FRC are scheduled to complete this testing in fiscal years 2012 and 2013. 
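As a simple check on the Inspector General's surveillance estimate cited above, the percentage decline follows directly from the two reported figures:

```latex
\frac{58{,}160 - 18{,}320}{58{,}160} = \frac{39{,}840}{58{,}160} \approx 0.685
```

That is, a decline of roughly 68.5 percent, consistent with the 68 percent figure reported by the DHS Inspector General.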
The HC-130J will not undergo any operational testing or assessments by an independent operational test authority, and the other Deepwater assets are not yet scheduled to start this testing. In advance of this testing, the Coast Guard has completed preliminary tests for the NSC, MPA, and FRC, such as operational assessments, which the Coast Guard is using to mitigate risk and address problems during asset development prior to initial operational test and evaluation. The Coast Guard also conducts acceptance testing, which helps ensure that the functionality of the delivered asset meets contract requirements and may help demonstrate that it will meet defined mission needs. Using these tests, officials have identified issues that need to be addressed prior to initial operational testing on the following assets. During acceptance testing for the second NSC in October 2010, Coast Guard officials identified five key issues, also identified on NSC 1 in an operational assessment completed in September 2010: (1) reliability and maintenance problems with the crane on the back of the cutter, (2) an unsafe ammunition hoist for the main gun, (3) an impractical requirement for using the side rescue door in difficult sea conditions, (4) instability with the side davit for small boat launch, and (5) insufficient power to a key system used for docking the cutter. Senior acquisition directorate officials stated that there are currently workarounds for some of these issues and that the cutters do meet contractual requirements. Program officials added that funding and design changes have yet to be finalized for these five issues and that, in some cases, correcting them will likely require costly retrofits. 
In January 2011, Coast Guard officials canceled the Aircraft Ship Integrated Secure and Traverse (ASIST)—a system intended to automate the procedure to land, lock down, and move the HH-65 helicopter from the deck to the hangar on the NSC—after significant deficiencies were identified during testing conducted by the U.S. Naval Air Warfare Center. Examples of deficiencies included increased pilot workload during landing, excessive stress on the helicopter components as the aircraft moved across the deck into the hangar, and failure to reduce the number of people needed to secure the helicopter as the system was designed to do. In addition, testing officials determined that the system could cause injury to the aircrew because the landing operator could not communicate with the pilot in a timely manner, and the system demonstrated unpredictable failures to locate the aircraft while it was hovering over the NSC’s flight deck. The ASIST system was identified by ICGS as a solution to a Coast Guard requirement. Several Coast Guard officials told us that the Coast Guard was aware of potential problems with ASIST as early as 2007, but the Coast Guard moved forward with it until testing was complete. The Coast Guard invested approximately $27 million to install the system on three NSCs, purchase long lead materials for the fourth NSC, and modify one HH-65 helicopter for the test event. The Coast Guard is now exploring solutions in use by the Navy to replace the system. For the two operational NSCs, officials stated that operators secure the HH-65 using legacy cutter technology. In a May 2009 operational assessment, an independent test authority—the Navy’s Commander Operational Test and Evaluation Force—found that, while the MPA airframe provides increased capability for cargo and passenger transport, the C4ISR system on the aircraft’s mission system pallet is a significant area of risk. 
Deficiencies included poor performance of the two main sensors used to identify and track targets, the need for system reboots that result in system downtime—which we observed during our visit with the pallet operators in Mobile, Alabama, in January 2011—and a lack of training equipment. The operators told us that issues with these capabilities persist and that other aspects of the system prevent operators from working efficiently. For example, the operators stated that the screens on the pallet are too small for the number of applications that normally run simultaneously and that the main camera must be held on target manually because it cannot automatically locate previously identified targets. Since our visit, the Coast Guard has installed a software upgrade that officials stated corrected several problems with the previous version. DHS Test and Evaluation officials told us that the Coast Guard is not permitted to buy additional pallets until successful completion of initial operational testing, scheduled for September 2011. These officials told us that they were optimistic that testing would be successful. The MPA’s acquisition plan does not include a strategy to buy additional mission system pallets; currently, the Coast Guard has received all 12 of the pallets under contract with ICGS. According to officials, the Coast Guard is planning to seek a full-rate production decision for the MPA by the end of fiscal year 2012, at which point almost one-third of the planned 36 MPA airframes will have been purchased. Prior to a full-rate production decision, in accordance with the Major Systems Acquisition Manual, the program must have identified a preferred solution and an acquisition plan for buying the pallet. Currently, the Coast Guard is assessing how to buy future pallets. Options include continuing to buy the pallet directly from Lockheed Martin or conducting a full and open competition to determine if another vendor can build the pallet. 
Senior Coast Guard acquisition officials stated that they determined the Coast Guard does not have sufficient capability to build the pallet itself. The FRC program is planning to use the first cutter for initial operational test and evaluation. Delivery of the lead cutter was originally scheduled for January 2011, but that date has slipped to December 2011. Officials told us that the delay is due to a last-minute design change, directed by the Coast Guard’s engineering and logistics technical authority, to enhance the structure of the cutter. An early operational assessment that reviewed design plans for the FRC was completed in August 2009 and identified 74 design issues, 69 of which were corrected during the assessment. Officials explained that they are confident in the reliability of the FRC design and do not expect any major operational issues to arise during initial operational test and evaluation. In addition, program officials explained that the Coast Guard has used a lead vessel for initial operational test and evaluation in the past and is now also planning to conduct an operational assessment on the lead FRC to reduce risk. Officials from the Navy’s Commander Operational Test and Evaluation Force, however, stated that there are risks associated with using the first cutter for initial operational test and evaluation: operators are not as familiar with the system, the logistics enterprise may not be fully operational to support the asset, and enough time may not have passed to collect sufficient data on what operational issues need to be addressed prior to testing. The Coast Guard currently has 6 HC-130Js in operation, but the aircraft did not undergo any operational testing or assessments by an independent operational test authority. 
As we reported last year, DHS and the Coast Guard had agreed that no further testing or documentation was necessary for the HC-130J because production of the aircraft was complete, and a report was developed that defines the aircraft’s performance by describing the demonstrations that have already been conducted to quantify the characteristics of the aircraft and mission systems. However, since our last report, the Navy received funding to buy two additional HC-130Js for the Coast Guard. As a result, DHS officials stated that they may revisit the decision not to fully test the HC-130J. Officials at the Aviation Logistics Center stated that they are concerned that initial operational test and evaluation was never completed and that current operations essentially serve as an assessment of capability. The mission system, a C4ISR suite of components similar to the suite on the MPA, has had problems such as unplanned reboots. These officials stated that operational testing might have helped to identify these issues sooner. Also, because HC-130J spare parts have not been sufficient, these officials explained that the Coast Guard has “de-missionized” two HC-130Js to provide spare parts for the remaining four HC-130Js. These two HC-130Js are now partially mission capable, meaning they cannot use the electronic suite of C4ISR equipment. Coast Guard acquisition officials told us that fiscal year 2011 funding for HC-130J spares should allow the Coast Guard to re-missionize these assets. As part of its role in program execution, the Coast Guard is gaining a better understanding of each asset’s cost, schedule, and technical risks, but not all of this information is transparent to Congress. The Coast Guard maintains two different quarterly reports to track information on its major acquisitions, including narrative and mitigation actions pertaining to risks, and Coast Guard officials told us that the same database is used to populate both reports. 
One is the Quarterly Project Report, an internal acquisition report used by Coast Guard program managers. The other, known as the Quarterly Acquisition Report to Congress (QARC), was required by various appropriations laws to be submitted to the congressional appropriations committees and to rank on a relative scale the cost, schedule, and technical risks associated with each acquisition project. We found that this statutory requirement is no longer in effect. However, the Coast Guard and DHS continue to submit the QARC pursuant to direction in committee and conference reports and the Coast Guard’s Major Systems Acquisition Manual. These committee and conference reports generally reiterate an expectation that the Coast Guard submit the QARC by the 15th day of the fiscal quarter. We found that the Coast Guard’s fiscal year 2010 QARCs did not always include risks identified in the Quarterly Project Reports. The Coast Guard’s Major Systems Acquisition Manual states that the QARC incorporates the Quarterly Project Report for each major acquisition project. The Quarterly Project Report includes, among other things, the top three project risks. In comparing both sets of reports—the Quarterly Project Report and the QARC—from fiscal year 2010, we found that over 50 percent of the medium and high risks identified in the internal Quarterly Project Reports were not included in the QARC. For example, the Coast Guard reported to Congress that the OPC program had no risks in fiscal year 2010, but several were identified in the internal report—including concerns about affordability. In addition, for all of fiscal year 2010, the Coast Guard reported no risks for the MPA project in the QARC even though several were identified in the internal report. Before transmittal to Congress, the QARCs are reviewed by officials within the Coast Guard’s resource directorate, the DHS Chief Financial Officer’s office, and the Office of Management and Budget. 
Resource directorate officials told us they do not include risks in the QARC if those risks contradict the Coast Guard’s current budget request. For example, the resource directorate did not include the risk related to spare parts for the MPA in the fiscal year 2010 reports to Congress because the Coast Guard did not request funding for spare parts. DHS officials told us that they do not remove medium and high risks from the report. Office of Management and Budget officials stated that they will discuss several items with the Coast Guard, including factors that the agency may want to consider with regard to the medium and high risks identified in its draft submissions, but that the Office of Management and Budget does not direct the Coast Guard to remove medium or high risks from the reports before they are transmitted. We could not obtain documentation to determine at what point in the review process the decision is made to exclude risks. For all four quarters of fiscal year 2010, the QARC was consistently submitted late. And as of May 2011, the Coast Guard had not submitted the first quarter fiscal year 2011 report to Congress—a delay of at least 4 months—even though the second quarter fiscal year 2011 internal report was already complete. According to senior Coast Guard acquisition directorate officials, the QARC is intended to be the program manager’s communication with Congress about risks. However, when risks are not included, the Coast Guard is not presenting to Congress a complete and timely picture of the risks some assets face. To support its role as systems integrator, the Coast Guard planned to complete a fleet mix analysis in July 2009 to eliminate uncertainty surrounding future mission performance and to produce a baseline for the Deepwater acquisition. We previously reported that the Coast Guard expected this analysis to serve as one tool, among many, in making future capability requirements determinations, including future fleet mix decisions. 
The analysis, which began in October 2008 and concluded in December 2009, is termed fleet mix analysis phase 1. Officials from the Coast Guard’s capabilities directorate made up the majority of the project team, which also included contractor support to assist with the analysis. As of May 2011, DHS had not yet released phase 1 to Congress. We received the results of the analysis in December 2010. To conduct the fleet mix analysis, the Coast Guard assessed asset capabilities and mission demands in an unconstrained fiscal environment to identify a fleet mix—referred to as the “objective fleet mix”—that would meet long-term strategic goals. The objective fleet mix resulted in a fleet that would double the quantity of assets in the program of record, the $24.2 billion baseline. For example, the objective fleet mix included 66 cutters beyond the program of record. Given the significant increase in the number of assets needed for this objective fleet mix, the Coast Guard developed, based on risk metrics, incremental fleet mixes to bridge the objective fleet mix and the program of record. Table 5 shows the quantities of assets for each incremental mix, according to the Coast Guard’s analysis. While the analysis provided insight on the performance of fleets larger than the program of record, the analysis was not cost constrained. The Coast Guard estimated that the total acquisition costs associated with the objective fleet mix could be as much as $65 billion—about $40 billion higher than the approved $24.2 billion baseline. As a result, as we reported last year, Coast Guard officials stated that they do not consider the results to be feasible due to cost and do not plan to use them to provide recommendations on a baseline for fleet mix decisions. Since we last reported, Coast Guard officials stated that phase 1 supports continuing to pursue the program of record. 
Because the first phase of the fleet mix analysis was not cost-constrained, it does not address our July 2010 recommendation that the Coast Guard present to Congress a comprehensive review of the Deepwater Program that clarifies the overall cost, schedule, quantities, and mix of assets required to meet mission needs, including trade-offs in light of fiscal constraints, given that the currently approved Deepwater Program is no longer feasible. The Coast Guard has undertaken what it refers to as a cost-constrained analysis, termed fleet mix analysis phase 2; however, according to the capabilities directorate officials responsible for the analysis, the study primarily assesses the rate at which the Coast Guard could acquire the Deepwater program of record within a high ($1.7 billion) and low ($1.2 billion) bound of annual acquisition cost constraints. These officials stated that this analysis will not reassess whether the current program of record is the appropriate mix of assets to pursue and will not assess any mixes smaller than the current program. Alternative fleet mixes are being assessed, but only to purchase additional assets after the program of record is acquired, if funding remains within the yearly cost constraints. The Coast Guard expects to complete its phase 2 analysis in the summer of 2011. As we reported in April 2011, because phase 2 will not assess options lower than the program of record, it will not prepare the Coast Guard to make the trade-offs that will likely be needed in the current fiscal climate. Further, despite Coast Guard statements that phase 2 was cost-constrained, there is no documented methodology for establishing the constraints that were used in the analysis, and we found confusion about their genesis. 
The acquisition directorate, according to the study’s charter, was to provide annual funding amounts, but Coast Guard officials responsible for phase 2 told us that DHS’s Program Analysis & Evaluation office provided the lower bound and the acquisition directorate provided the upper bound. An official from the Program Analysis & Evaluation office stated that DHS informally suggested using historical funding levels of $1.2 billion to establish an average annual rate but was unaware that the Coast Guard was using this number as the lower bound for the study. A senior Coast Guard acquisition directorate official stated that the directorate agreed with using the $1.2 billion as the lower constraint and had verbally suggested the upper bound of $1.7 billion. Based on our review of historical budget data, $1.7 billion for Deepwater is more than Congress has appropriated for the Coast Guard’s entire acquisition portfolio since 2007 and, as such, is not likely a realistic constraint. Coast Guard officials stated that the upper bound was not necessarily a realistic level but rather an absolute upper bound to establish the range of possible acquisition levels. In addition, the Coast Guard does not have documentation of the cost constraints; according to a Coast Guard official, these cost constraints were verbally communicated to the contractor. In addition to the Coast Guard’s analysis, DHS’s Program Analysis & Evaluation office is conducting a study, at the request of the Office of Management and Budget, to gain insight into alternatives to the Deepwater surface program of record. Office of Management and Budget officials told us that they recommended DHS conduct this study because DHS was in a position to provide an objective evaluation of the program and could ensure that the analysis of the trade-offs of requirements in a cost-constrained environment would align with the Department’s investment priorities. 
A DHS official involved in the study stated that the analysis will examine performance trade-offs among the NSC, OPC, a modernized 270’ cutter, and the Navy’s Littoral Combat Ship. The official also explained that the analysis is based on a current estimate of surface asset acquisition costs, which serves as a cap to guide surface asset trade-offs. This cutter study is expected to be completed in the summer of 2011. This official also stated that the cutter study is not expected to contain recommendations, but Office of Management and Budget officials told us they plan to use the results to inform decisions about the fiscal year 2013 budget. A DHS official responsible for this study stated that this analysis and the Coast Guard’s fleet mix analysis will provide multiple data points for considering potential changes to the program of record, including reductions in the quantities planned for some of the surface assets. However, as noted above, Coast Guard capabilities directorate officials have no intention of examining fleet mixes smaller than the current, planned Deepwater program.

Over the past 4 years, the Coast Guard has strengthened its acquisition management capabilities in its role as lead systems integrator and decision maker for Deepwater acquisitions. Now, the Coast Guard needs to take broader actions to address the cost growth, schedule delays, and expected changes to planned capabilities that have made the Deepwater program, as presented to Congress, unachievable. Today’s climate of rapidly building fiscal pressures underscores the importance of assessing priorities—from an acquisition, resource, and capabilities perspective—so that more realistic planned budgets can be submitted to Congress. Such a step would help alleviate what has become a pattern of churn in revising program baselines when planned funding does not materialize. At the same time, decision makers in the Coast Guard and Congress need accurate and timely information. 
Cost and schedule estimates that are not based on current and comprehensive information do not provide an effective basis for assessing the status of acquisition programs. Further, the quarterly acquisition reports to Congress were intended by the Coast Guard to convey program risks, but the Coast Guard is not consistent in reporting cost, schedule, and technical risks and has not submitted the reports in the time frame requested by Congress. From a broader perspective, it is unclear how, or whether, DHS and the Coast Guard will reconcile and use the multiple studies—the DHS cutter study and the Coast Guard’s two fleet mix analyses—to make trade-off decisions regarding the program of record that balance effectiveness with affordability. Without a process for doing so, decisions may not adequately balance mission needs and affordability. At the individual project level, knowledge-based decisions are needed as Deepwater enters its fourth year with the Coast Guard as systems integrator. Uncertainties about the C4ISR systems, which were intended to be the key to making Deepwater a system of systems, continue and are compounded as assets are designed and delivered without a coherent vision for the overall program. This includes the MPA mission system pallet, which is needed to carry out the full range of this aircraft’s planned missions but for which the Coast Guard has not developed an acquisition strategy. Because DHS approved the requirements document for the OPC despite significant unknowns concerning feasibility, capability, and affordability, future decisions about this asset must be based on a more rigorous knowledge base. And the ambiguities about cutter small boat quantities suggest that this program’s risk level may need to be reassessed. 
To provide Congress with information needed to make decisions on budgets and the number of assets required to meet mission needs within realistic fiscal constraints, we recommend that the Secretary of Homeland Security develop a working group that includes participation from DHS and the Coast Guard’s capabilities, resources, and acquisition directorates to review the results of multiple studies—including fleet mix analysis phases 1 and 2 and DHS’s cutter study—to identify cost, capability, and quantity trade-offs that would produce a program that fits within expected budget parameters. DHS should provide a report to Congress on the findings of the study group’s review in advance of the fiscal year 2013 budget submission. To help the Coast Guard address the churn in the acquisition project budgeting process and help ensure that projects receive and can plan to a more predictable funding stream, we recommend that the Commandant of the Coast Guard take the following two actions:

• Implement GAO’s Cost Estimating and Assessment Guide’s best practices for cost estimates and schedules as required by the Major Systems Acquisition Manual, with particular attention to maintaining current cost estimates and ensuring contractors’ schedules also meet these best practices.

• As acquisition program baselines are updated, adopt action items consistent with those in the Blueprint related to managing projects within resource constraints as a Coast Guard-wide goal, with input from all directorates. These action items should include milestone dates as well as assignment of key responsibilities, tracking of specific actions, and a mechanism to hold the appropriate directorates responsible for outcomes, with periodic reporting to the Vice Commandant. 
To help ensure that Congress receives timely and complete information about the Coast Guard’s major acquisition projects, we recommend that the Commandant of the Coast Guard and the Secretary of the Department of Homeland Security:

• include in the project risk sections of the Quarterly Acquisition Report to Congress the top risks for each Coast Guard major acquisition, including those that may have future budget implications such as spare parts; and

• submit the Quarterly Acquisition Report to Congress by the 15th day of the start of each fiscal quarter.

Because DHS approved the OPC operational requirements document although significant uncertainties about the program’s feasibility, capability, and affordability remained, we recommend that the Secretary of DHS take the following two actions:

• ensure that all subsequent Coast Guard decisions regarding feasibility, capability, and affordability of the OPC’s design are thoroughly reviewed by DHS in advance of the program’s next acquisition decision event (ADE 2A/B); and

• determine whether a revised operational requirements document is needed before the program’s next acquisition decision event (ADE 2A/B).

To increase confidence that the assets bought will meet mission needs, we recommend that the Commandant of the Coast Guard take the following three actions:

• As the Coast Guard reevaluates and revises its C4ISR project documentation—including the operational requirements document, acquisition program baseline, and life-cycle cost estimate—determine whether the system-of-systems concept for C4ISR is still the planned vision for the program. If not, ensure that the new vision is comprehensively detailed in the project documentation.

• Develop and finalize a strategy for the acquisition of the MPA mission system pallets before a full-rate production decision is made. 
• Specify the quantities of cutter small boats that the Coast Guard plans to purchase, given that the current project plan does not clearly do so, and categorize the appropriate acquisition level in accordance with a life-cycle cost that reflects these planned quantities.

To help ensure that it receives timely and complete information about the Coast Guard’s major acquisition projects, Congress should consider enacting a permanent statutory provision that requires the Coast Guard to submit a quarterly report within 15 days of the start of each fiscal quarter on all major Coast Guard acquisition projects and requires the report to rank for each project the top five risks and, if the Coast Guard determines that there are no risks for a given project, to state that the project has no risks. In addition, Congress should consider restricting the availability of the Coast Guard’s Acquisition, Construction and Improvements appropriation after the 15th day of any quarter of any fiscal year until the report is submitted. DHS provided us with written comments on a draft of this report. In its comments, DHS concurred with all of the recommendations. The written comments are reprinted in appendix II. We also provided draft sections of the report to Office of Management and Budget officials, who provided us technical comments via e-mail; we incorporated their comments as appropriate. With respect to our first recommendation, that DHS form a working group to review the results of the fleet mix and cutter studies and report to Congress in advance of the fiscal year 2013 budget, DHS agreed to initiate the review and analysis of the studies and report to Congress on the findings. However, DHS added that given available resources, competing priorities and demands, and the Office of Management and Budget’s timeline for the fiscal year 2013 budget submission, this will occur as soon as reasonably practical. 
We understand that department officials have multiple demands on their time, but we believe that DHS should make every effort to report to Congress on the findings of this review before submitting its next budget. The Deepwater assets account for billions in acquisition dollars, and Office of Management and Budget officials told us that they plan to use the results of the DHS cutter study to inform the fiscal year 2013 budget. The working group’s findings could provide the Congress with important insights into costs, capabilities, and quantity trade-offs prior to receiving the department’s budget request. In concurring with our recommendation to implement GAO’s Cost Estimating and Assessment Guide’s best practices for cost estimates and schedules as required by the Major Systems Acquisition Manual, the Coast Guard noted that implementing some of these best practices may not always be cost-effective in a production environment. However, the Coast Guard agreed to establish an appropriate cost estimate update frequency for each project and to review Integrated Master Schedules and make schedule adjustments as needed. Sustained attention to the Cost Estimating and Assessment Guide’s practices will be very important, particularly as one of the largest acquisitions of the Deepwater program—the OPC—is expected to proceed to ADE 2A/B in fall 2011. DHS also agreed with our recommendation that as the Coast Guard updates its acquisition program baselines, these baselines must conform to known resource constraints. However, in responding to this recommendation, DHS and the Coast Guard did not address plans for developing action items to manage projects within resource constraints as a Coast Guard-wide goal, citing instead the existing senior-level resource governance process and annual budget process. We recognize that part of the standard budget development process includes trade-off decisions regarding recapitalization versus operation and maintenance funding. 
However, under this standard process, DHS and the Coast Guard have continued to face the problem of approved acquisition programs not being feasible. We also recognize that the Blueprint for Continuous Improvement is an acquisition directorate document that does not reflect resource priorities across the entire budget. However, this key document is signed by the Commandant, and the October 2010 version does include several budget-related action items, such as establishing project priorities. Our recommendation, to adopt action items “consistent” with those in the Blueprint regarding managing projects within resource constraints—with input from all directorates—reflects our belief that the Coast Guard needs to be more proactive in addressing the mismatch between expected funding levels and actual funding needs for approved acquisition programs. With respect to our recommendations concerning the comprehensiveness and timeliness of the Coast Guard’s quarterly acquisition reports to Congress, DHS agreed to report the top risks for each major acquisition and to submit the reports to Congress by the 15th day of the start of each fiscal quarter. However, DHS stated that OMB policy limits the Coast Guard’s ability to report project risks that are pre-decisional or address out-year funding plans. We made this recommendation because no risks had been included in the quarterly reports to Congress for two programs in fiscal year 2010. DHS also noted that it strives to submit the reports on time, but that this is difficult, especially given the time required to coordinate the reports’ release outside of the department. We believe that when risks are not included and the reports are not transmitted in a timely manner, Congress will not have a complete and timely picture of the risks some assets face. DHS agreed to thoroughly review all subsequent Coast Guard decisions regarding feasibility, capability, and affordability of the OPC’s design in advance of the program’s ADE 2A/B. 
DHS also agreed with our recommendation to determine whether a revised operational requirements document is needed before ADE 2A/B. In its response, DHS stated that an independent validation study, directed by the Deputy Secretary as part of the approval of the OPC operational requirements document, found that the key parameters of range, speed, and sea-keeping were reasonable, accurate, and adequately documented. We have not yet reviewed this study. DHS also agreed with our three recommendations to increase confidence that the assets bought will meet mission needs. With respect to C4ISR, DHS stated that the Coast Guard remains committed to the system-of-systems concept and plans to provide DHS with an affordable and executable C4ISR acquisition program baseline that leverages work already completed. With respect to the mission system pallet, DHS stated that the Coast Guard plans to present a revised mission system pallet acquisition strategy to the DHS Acquisition Review Board for the full-rate production decision planned for the fourth quarter fiscal year 2012. This will follow initial operational test and evaluation of the current configuration of both the Maritime Patrol Aircraft and the mission system pallet. Finally, DHS stated that the Coast Guard will work with the department to determine the appropriate acquisition level for the small boats project. DHS also noted that the current approved project plan is for 27 small boats, which have a life-cycle cost estimate that categorizes the project as a non-major acquisition. The response, however, did not address the fact that the approved project plan recognizes the potential to buy up to 101 small boats. We maintain that, moving forward, the Coast Guard needs to specify the quantities of small boats it plans to purchase to ensure that the project’s acquisition level is appropriately categorized. 
The Coast Guard also provided technical comments, which we incorporated into the report as appropriate, such as when we were provided with documentation to support the comments. The Coast Guard requested that we remove the term “Deepwater” and replace it with “major acquisitions.” We did not make this change because, at the time of this report, Congress had not yet passed the fiscal year 2012 appropriations act, which may address DHS’s and the Coast Guard’s proposal to eliminate the term “Integrated Deepwater System” from its annual appropriation. Furthermore, the program baseline for one of the Coast Guard’s largest major acquisitions—the OPC—still remains part of the 2007 Deepwater acquisition program baseline. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, and the Commandant of the Coast Guard. This report will also be available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix VI. In conducting this review, we relied in part on the information and analysis in our past work, including reports completed in 2008, 2009, 2010, and 2011. Additional scope and methodology information on each objective of this report follows. To determine the extent to which the Deepwater Program’s planned cost and schedule baselines have been exceeded, we reviewed the Deepwater Program’s 2007 baseline and compared it to the revised baselines for individual assets that have been approved to date. We also reviewed budget documents and compared them against revised program baselines to identify any differences in reported cost and schedule estimates. 
To assess cost estimating and scheduling practices of selected Deepwater Programs, we selected the Maritime Patrol Aircraft (MPA), National Security Cutter (NSC), and command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) programs. We reviewed the MPA program’s cost estimate and schedule because this program has the highest life-cycle cost estimate of all Deepwater assets and has experienced schedule delays. We also reviewed the NSC program’s schedule because the program has the second highest life-cycle cost estimate and has also experienced schedule delays. The Coast Guard was not able to provide us with a current NSC life-cycle cost estimate because the program is revising the estimate. As a result, we selected the C4ISR life-cycle cost estimate to review because the estimate was complete but the program did not yet have a Department of Homeland Security-approved acquisition program baseline and there was uncertainty concerning the direction of the program. In performing our analysis, we focused on the schedules and cost estimates available at the time of our review and evaluated them using the criteria set forth in GAO’s cost guide. In assessing the program’s cost estimates, we used the GAO cost guide to evaluate the estimating methodologies, assumptions, and results to determine whether the life-cycle cost estimates were comprehensive, accurate, well-documented, and credible. We also used the GAO guide to determine the extent to which each schedule was prepared in accordance with the best practices that are fundamental to having a reliable schedule. We discussed the results of our assessments with the program offices and cost estimators. 
We supplemented these analyses by interviewing Coast Guard officials from the capabilities, acquisition, and resources directorates to determine any challenges the Coast Guard is facing in achieving these baselines as well as some of the potential implications of schedule and cost breaches. Further, we analyzed five capital investment plans that were included in the 2008 through 2012 budgets, breach memos, and the acquisition directorate’s October 2010 Blueprint for Continuous Improvement to identify any funding issues and the extent to which they were factors leading to breaches in established program baselines. We also interviewed Coast Guard program staff and DHS officials from the Cost Analysis Division and Acquisition Program Management Division to corroborate program information. To determine the progression of the execution, design, and testing of Deepwater assets, we reviewed the following documents: Coast Guard’s Major Systems Acquisition Manual, asset operational requirements documents, acquisition strategies and plans, acquisition program baselines, program briefings to the Coast Guard’s Executive Oversight Council and associated meeting minutes, acquisition decision memorandums, test reports, and contracts. We also reviewed Quarterly Project Reports, Quarterly Acquisition Reports to Congress, and various appropriations laws and related committee and conference reports regarding the reports to Congress. For fiscal year 2010, we compared the program risks identified in the Quarterly Project Reports to the risks identified in the Quarterly Acquisition Reports to Congress. We also reviewed the dates the fiscal year 2010 Quarterly Acquisition Reports to Congress had been transmitted to Congress. We interviewed officials responsible for collecting and reviewing information for these reports, including officials from the Coast Guard’s acquisition and resources directorates, DHS’s Chief Financial Officer’s office, and the Office of Management and Budget. 
For design and testing, we also interviewed Coast Guard officials from the capabilities, resources, and acquisition directorates as well as the Navy’s Commander Operational Test and Evaluation Force and DHS’s Science and Technology Test & Evaluation and Standards Division. In addition, we met with Coast Guard operators at the Aviation Training Center in Mobile, Alabama, and Coast Guard officials at the Aviation Logistics Center. We also met with contractor and Coast Guard officials at Northrop Grumman Shipbuilding facilities in Pascagoula, Mississippi, to discuss NSC construction and with a Bollinger Shipyards official in Lockport, Louisiana, to discuss Fast Response Cutter construction and toured their respective shipyards. We also met with Coast Guard officials at Lockheed Martin facilities in Moorestown, New Jersey, and the Command, Control, and Communications Center in Portsmouth, Virginia, to discuss their role in the C4ISR project. To assess the current status of the Coast Guard’s fleet mix analysis and determine how the Coast Guard and DHS are using the analysis to inform acquisition decisions, we reviewed key documents including charters and statements of work for the two fleet mix analysis phases. We also reviewed the December 2009 final report for the fleet mix analysis phase 1. We interviewed Coast Guard officials from the capabilities, resources, and acquisition directorates and Coast Guard officials overseeing work for phase 1 and phase 2. Additionally, we interviewed a senior DHS official from the Program Analysis & Evaluation office and Office of Management and Budget officials to identify the scope of the Office of Management and Budget-directed cutter study and to understand similarities and differences between that study and the Coast Guard’s fleet mix analysis. We conducted this performance audit between September 2010 and July 2011 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides the results of our analysis of the extent to which the processes and methodologies used to develop and maintain the MPA and C4ISR cost estimates meet the characteristics of high-quality cost estimates. The four characteristics of high-quality estimates are explained and mapped to the 12 steps of such estimates in table 6. Tables 7 and 8 provide the detailed results of our analysis of the MPA and C4ISR program cost estimates. “Not met” means the Coast Guard provided no evidence that satisfies any of the criterion. “Minimally” means the Coast Guard provided evidence that satisfies a small portion of the criterion. “Partially” means the Coast Guard provided evidence that satisfies about half of the criterion. “Substantially” means the Coast Guard provided evidence that satisfies a large portion of the criterion. “Fully met” means the Coast Guard provided evidence that completely satisfies the criterion. Tables 9 and 10 provide the results of our analysis of the extent to which the processes and methodologies used to develop and maintain schedules for the Maritime Patrol Aircraft 12-14 and NSC 3 meet the nine best practices associated with effective schedule estimating. “Not met” means the program provided no evidence that satisfies any of the criterion. “Minimally” means the Coast Guard provided evidence that satisfies a small portion of the criterion. “Partially” means the Coast Guard provided evidence that satisfies about half of the criterion. “Substantially” means the Coast Guard provided evidence that satisfies a large portion of the criterion. 
“Fully met” means the Coast Guard provided evidence that completely satisfies the criterion. Appendix V: Allocation of Deepwater Acquisition, Construction, and Improvement Dollars in the Fiscal Years 2008, 2009, 2010, and 2011 Capital Investment Plans (Then-Year Dollars). For further information about this report, please contact John P. Hutton, Director, Acquisition and Sourcing Management, at (202) 512-4841 or [email protected]. Other individuals making key contributions to this report include Michele Mackin, Assistant Director; Molly Traci; William Carrigg; Tisha Derricotte; Jennifer Echard; Laurier Fish; Carlos Gomez; Kristine Hassinger; Jason Lee; Karen Richey; and Rebecca Wilson. Coast Guard: Opportunities Exist to Further Improve Acquisition Management Capabilities. GAO-11-480. Washington, D.C.: April 13, 2011. Coast Guard: Observations on Acquisition Management and Efforts to Reassess the Deepwater Program. GAO-11-535T. Washington, D.C.: April 13, 2011. Coast Guard: Deepwater Requirements, Quantities, and Cost Require Revalidation to Reflect Knowledge Gained. GAO-10-790. Washington, D.C.: July 27, 2010. Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010. Coast Guard: Observations on the Requested Fiscal Year 2011 Budget, Past Performance, and Current Challenges. GAO-10-411T. Washington, D.C.: February 25, 2010. Coast Guard: Better Logistics Planning Needed to Aid Operational Decisions Related to the Deployment of the National Security Cutter and Its Support Assets. GAO-09-497. Washington, D.C.: July 17, 2009. Coast Guard: As Deepwater Systems Integrator, Coast Guard Is Reassessing Costs and Capabilities but Lags in Applying Its Disciplined Acquisition Approach. GAO-09-682. Washington, D.C.: July 14, 2009. Coast Guard: Observations on Changes to Management and Oversight of the Deepwater Program. GAO-09-462T. Washington, D.C.: March 24, 2009. 
Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. GAO-08-745. Washington, D.C.: June 24, 2008. Status of Selected Assets of the Coast Guard’s Deepwater Program. GAO-08-270R. Washington, D.C.: March 11, 2008. Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T. Washington, D.C.: March 8, 2007. Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764. Washington, D.C.: June 23, 2006. Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006. Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757. Washington, D.C.: July 22, 2005. Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T. Washington, D.C.: June 21, 2005. Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695. Washington, D.C.: June 14, 2004. Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004. Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T. Washington, D.C.: May 3, 2001.
The Deepwater Program includes efforts to build or modernize ships and aircraft, including supporting capabilities. In 2007, the Coast Guard took over the systems integrator role from Integrated Coast Guard Systems (ICGS) and established a $24.2 billion program baseline which included schedule and performance parameters. Last year, GAO reported that Deepwater had exceeded cost and schedule parameters, and recommended a comprehensive study to assess the mix of assets needed in a cost-constrained environment given that the approved baseline was no longer feasible. GAO assessed the (1) extent to which the program is exceeding the 2007 baseline and the credibility of selected cost estimates and schedules; (2) execution, design, and testing of assets; and (3) Coast Guard's efforts to conduct a fleet mix analysis. GAO reviewed key Coast Guard documents and applied criteria from GAO's cost guide. The Deepwater Program continues to exceed the cost and schedule baselines approved by DHS in 2007, but several factors continue to preclude a solid understanding of the program's true cost and schedule. The Coast Guard has developed baselines for some assets that indicate the estimated total acquisition cost could be as much as $29.3 billion, or about $5 billion over the $24.2 billion baseline. But additional cost growth is looming because the Coast Guard has yet to develop revised baselines for all assets, including the OPC--the largest cost driver in the program. In addition, the Coast Guard's most recent capital investment plan indicates further cost and schedule changes not yet reflected in the asset baselines, contributing to the approved 2007 baseline no longer being achievable. The reliability of the cost estimates and schedules for selected assets is also undermined because the Coast Guard did not follow key best practices for developing these estimates. 
Coast Guard and DHS officials agree that the annual funding needed to support all approved Deepwater baselines exceeds current and expected funding levels, which affects some programs' approved schedules. The Coast Guard's acquisition directorate has developed action items to help address this mismatch by prioritizing acquisition program needs, but these action items have not been adopted across the Coast Guard. The Coast Guard continues to strengthen its acquisition management capabilities, but it faces several near-term decisions to help ensure that assets still in design will meet mission needs. For example, whether the planned system-of-systems design is achievable will largely depend on remaining decisions regarding the design of the command and control system. Important decisions related to the affordability, feasibility, and capability of the OPC also remain. For those assets under construction and operational, preliminary tests have yielded mixed results and identified concerns, such as design issues, to be addressed prior to initial operational test and evaluation. The Coast Guard is gaining a better understanding of cost, schedule, and technical risks, but does not always fully convey these risks in reports to Congress. As lead systems integrator, the Coast Guard planned to complete a fleet mix analysis to eliminate uncertainty surrounding future mission performance and produce a baseline for Deepwater. This analysis, which the Coast Guard began in 2008, considered the current program to be the "floor" for asset capabilities and quantities and did not impose cost constraints on the various fleet mixes. Consequently, the results will not be used for trade-off decisions. The Coast Guard has now begun a second analysis, expected to be completed this summer, which includes an upper cost constraint of $1.7 billion annually--more than Congress has appropriated for the entire Coast Guard acquisition portfolio in recent years.
DHS is also conducting a study to gain insight into alternatives that may include options that are lower than the program of record for surface assets. A DHS official stated that this analysis and the Coast Guard's fleet mix analysis will provide multiple data points for considering potential changes to the program of record, but Coast Guard officials stated they have no intention of examining fleet mixes smaller than the current, planned Deepwater program. GAO is making recommendations to the Department of Homeland Security (DHS) that include identifying trade-offs to the planned Deepwater fleet and ensuring the Offshore Patrol Cutter (OPC) design is achievable and to the Coast Guard that include identifying priorities, incorporating cost and schedule best practices, increasing confidence that assets will meet mission needs, and reporting complete information on risks to Congress in a timely manner. DHS concurred with the recommendations.
Seeking to improve the health and nutrition education of American schoolchildren, USDA began the Team Nutrition initiative (commonly known as Team Nutrition) in fiscal year 1995 by seeking and obtaining $20.3 million in funding. The Congress made another $10.5 million available for Team Nutrition in fiscal year 1996 and $10 million for fiscal year 1997. Elementary and secondary schools can participate in the initiative—and become Team Nutrition schools—by agreeing to support the initiative’s mission and principles and by making a commitment to meet USDA’s dietary guidelines for Americans. Once a school joins the “team,” it can obtain nutrition education materials on healthy eating habits. As of August 16, 1996, over 14,000 schools spread across all 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands had become Team Nutrition schools. The Secretary of Agriculture is encouraging the remaining 80,000 schools across the nation to become Team Nutrition schools so that they can also obtain the materials being developed under the initiative. USDA considers Team Nutrition to be a key vehicle for promoting one of its top priorities: integrating the latest nutrition knowledge into each of USDA’s food assistance programs. At Team Nutrition’s inception, the Under Secretary for Food, Nutrition and Consumer Services decided that Team Nutrition would be administered by USDA’s Food and Consumer Service (FCS). Until August 1996, when it was placed in FCS’ Special Nutrition Programs, Team Nutrition was managed by the Office of the FCS Administrator. The initiative is composed of two basic components: (1) training and technical assistance, which is managed by the Associate Administrator for Food and Consumer Services, and (2) nutrition education, which is managed by Team Nutrition’s Acting Project Manager. Much of Team Nutrition’s efforts are carried out through contracts and cooperative agreements. 
These contracts and agreements support both of Team Nutrition’s components. Each component received approximately half of the funds appropriated for Team Nutrition. We reviewed two contracts for support services, one with Global Exchange, Inc. (Global), and one with Prospect Associates, Ltd. (Prospect); a cooperative agreement with Buena Vista Pictures Distribution, Inc., a subsidiary of the Walt Disney Company (Disney), to use animated characters to promote healthy eating; and a grant to an author to write a children’s book about nutrition. Table 1 provides details on these activities. FCS obtains support services for Team Nutrition from Global and Prospect through task order contracts. A task order contract is used when the procuring agency knows the type, but not the precise quantities, of services that will be required during the contract period. These contracts permit flexibility in (1) scheduling tasks and (2) ordering services after the requirements materialize. The Federal Acquisition Regulation stipulates that task order contracts may be used when the agency anticipates a recurring need for the contractor’s services. FCS awarded the Global and Prospect contracts to provide (1) marketing and consumer research on how to best market nutrition education; (2) message development, design, and production services for multimedia nutrition education materials; and (3) ways to create and maintain partnerships with organizations concerned about nutrition education. The cooperative agreement with Disney allows FCS to use two of Disney’s popular animated characters in Team Nutrition’s media campaigns. In accordance with the agreement, Disney developed and distributed four animated public service announcements and additional nutrition education materials featuring Pumbaa and Timon, characters from its recent film, The Lion King. Finally, FCS awarded a $25,000 grant to an author to write a children’s book promoting good nutrition. 
Our review of two contracts, a cooperative agreement, and a grant under the Team Nutrition initiative revealed poor management and, in some cases, a violation of federal procurement law and ethics regulations. The problems we found with each of these efforts are discussed below. We found no irregularities in the manner in which FCS awarded the contract to Global. However, we believe that Team Nutrition officials acted improperly in assigning tasks under the Global contract that were beyond the contract’s scope of work. These officials also did not follow normal contracting procedures in dealing with subcontractors under the Global contract. Federal procurement law requires that an agency conduct a separate procurement when it wishes to acquire services that are beyond the scope of an existing contract. A matter is outside the scope of the original contract when it is materially different from the original purpose or nature of the contract. In our view, Team Nutrition officials assigned Global two tasks—tasks 9 and 10—under its contract that materially deviated from the original contract’s overall scope of work. Under its contract with FCS, Global was to provide support services to assist Team Nutrition in conducting a national nutrition education campaign, including the planning and development of educational materials and communication efforts related to nutrition. As we discussed in our May 1996 testimony, task 9 was to conduct focus group research to assess the reactions of the general public and food stamp recipients to USDA’s proposals to change the Food Stamp Program. We concluded that this work, which cost FCS about $33,000, was outside the scope of Global’s support services contract. 
Similarly, task 10—to evaluate the success of the San Francisco County Jail’s garden project and to develop a guidebook on the project to show other communities how to implement similar programs—has no substantive relationship to nutrition education or the dissemination of sound nutrition information. The garden project is a program to rehabilitate former prisoners by having them grow produce that is either donated to the needy or sold to restaurants. This evaluation, for which FCS has budgeted about $49,000, differs materially from the subject matter of the Global contract, which is to assist FCS in its efforts to provide “effective nutrition education” and to communicate “sound nutrition information.” Furthermore, contrary to normal contracting practices, Team Nutrition officials directed Global to hire specific subcontractors and did not give Global the opportunity to perform the work itself. Generally, once an agency awards a contract, the contractor is responsible for performing the work, either by using its own resources or by hiring a subcontractor. Team Nutrition officials negotiated directly with five firms to perform work for certain elements of its nutrition education campaign before the five firms signed subcontract agreements with Global. Representatives from three of these firms also met with the Under Secretary to discuss their work before any contractual arrangement had been made between these firms and Global. All five firms then started work for Team Nutrition without the knowledge of, or any signed agreements with, Global. These firms were later added as subcontractors to the Global contract. Because Team Nutrition officials directed Global to hire these firms, Global did not obtain competitive offers, nor did it conduct a cost-reasonableness analysis of their proposed budgets. After they signed subcontract agreements with Global, these subcontractors continued to be directed by Team Nutrition officials instead of Global. 
These officials often did not include Global in planning meetings with the subcontractors and did not provide the subcontractors with well-defined tasks that had specific deliverables. As a result, Global had little control over its subcontractors’ work and costs. Furthermore, Global and FCS officials told us that they did not understand what work one of the subcontractors had done to justify the $40,000 payment it had received. Only after the subcontractor had been paid did Global and FCS officials ask the subcontractor to document the tasks it had performed. As with Global, we found that FCS’ contract with Prospect was awarded in a fashion consistent with applicable procurement regulations. However, the history of the Prospect contract indicates a pattern of careless management. This careless management may have reduced the contract’s contributions to Team Nutrition. When the Prospect contract was awarded, Team Nutrition officials provided only minimal technical direction for the contract’s tasks. The Contracting Officer’s Representative (COR), who was not the Team Nutrition Project Manager, did not have a clear understanding of how Prospect was to support the Team Nutrition mission. Therefore, the COR did not provide the technical direction that Prospect needed to effectively perform several tasks. Moreover, without notifying the Contracting Officer, and without having the authority to do so, the COR allowed a number of unauthorized individuals to provide technical direction to Prospect and/or to change the scope of the work defined in at least two tasks. In one instance, the director of a USDA division unrelated to Team Nutrition directed Prospect to conduct focus group research worth about $78,000 without the Contracting Officer’s approval. In another instance, a Contracting Officer’s Technical Representative directed a significant change in a task’s scope of work without authorization. 
The Contracting Officer and the COR did not become aware of this directed change until Prospect submitted a revised cost proposal to increase the cost of the task by about $500,000. Furthermore, a change to one effort under the Prospect contract, while within the scope of the contract, involved work that was more complex than anticipated, given the statement of work and the projected budget in the contract’s task orders. Team Nutrition officials expanded a relatively basic $173,000 evaluation of the effectiveness of Team Nutrition to a more comprehensive $2.3 million effort. FCS contracting officials told us that while this work was within the scope of the contract, it would have been preferable for the agency to obtain this expanded work through a separate, competitive procurement. They believed that a separate procurement was preferable because of the magnitude of the change and the addition of work that required a higher degree of technical expertise than was originally specified. However, FCS contracting officials told us that, given Team Nutrition’s desire to move quickly in initiating the work, they did not have sufficient time to solicit and award a new competitive contract. We found no problem with the process FCS used to award the cooperative agreement to Disney. However, once again, we found weaknesses in FCS’ performance in managing this cooperative agreement. FCS entered into this agreement, which allows it to use two Disney characters from The Lion King to promote good nutrition, while these characters were also being used in advertisements and in-store promotions for a national fast food restaurant chain. To assess the impact of these characters on the Team Nutrition nutrition education campaign, FCS had Global conduct focus groups to determine what messages children were receiving from these characters. However, in conducting this evaluation, FCS did not test the possible messages children could receive from the fast food advertisements. 
Therefore, the information gathered from this research may be inconclusive. Furthermore, the Disney agreement, originally scheduled to expire on September 30, 1996, required Team Nutrition to return to Disney all materials that used the animated characters at the expiration of the agreement. These materials are included in the nutrition education kits that FCS is distributing to Team Nutrition schools. When we questioned the potential impact of this requirement on Team Nutrition’s goals, we discovered that Team Nutrition officials had not been attentive to the fact that the agreement was about to expire. They acknowledged our concerns, subsequently contacted Disney, and sought Disney’s consent to extend the agreement’s expiration date. On August 8, 1996, Team Nutrition officials told us that Disney had agreed to a 1-year extension; but as of September 16, 1996, no contract extension had been executed. Even with this extension, under the current terms of the agreement, FCS will be required to return the materials in September 1997. Since Team Nutrition officials had planned to distribute these materials to schools through February 1998, the requirement to return the Disney materials before that date may curtail some elements of the nutrition education campaign. We found that the process FCS followed in the award of a $25,000 sole-source grant to an author to write a children’s book on nutrition was consistent with departmental criteria. These criteria allow sole-source grant awards for amounts less than $75,000, and FCS contracting officials exercised their authority under these criteria. However, the Under Secretary for Food, Nutrition and Consumer Services, through her involvement in the administration of this grant, violated federal ethics regulations. These regulations prohibit employees from using public office for the private gain of their friends. 
Specifically, to ensure that an employee’s actions do not create the appearance of the use of public office for private gain, or of giving preferential treatment, these regulations require the employee whose official duties would affect the financial interests of a friend to comply with certain other regulations. These latter regulations prohibit an employee from participating in a specific matter likely to have a direct and predictable effect on the financial interests of the friend, unless that employee has informed the agency’s designated ethics official of the appearance problem and received authorization from that official to participate in the matter. The grantee and the Under Secretary have known one another for 15 years and are close personal friends. Despite this relationship, the Under Secretary did not inform USDA’s ethics officials about her friendship with the author, nor did she recuse herself from approving the grantee’s performance before payment was made to the author, or from other actions that would financially benefit the author. The Under Secretary maintained close personal involvement throughout the period of the grantee’s performance. For example, her staff regularly kept her informed of the discussions and developments between FCS and the author’s agent, and the Under Secretary provided comments to her staff on these matters. In addition, under the terms of the grant, the author was to receive interim payments based on her performance in writing the book. These interim payments depended upon the Department’s review and approval of the author’s manuscript. Our review showed that the Under Secretary was given the manuscript for her approval and that her Executive Assistant—although not the COR for this effort—personally conveyed the Department’s final approval to the author’s agent. 
Moreover, during the development of the manuscript, the Under Secretary met in person with the author at USDA to convey the Department’s comments on the manuscript. To date, FCS has paid the author $11,250. The final payment of $13,750 will be made, as specified by the terms of the grant, when the book is published. Furthermore, the author’s grant application explicitly stated that the author hoped and expected to earn “considerably more” through sales of the book. Thus, the publication of the book would provide income to the author in two ways: (1) the final payment under the grant and (2) the sales of the book. In this connection, at least as early as February 1994, the author’s agent raised the idea with the Under Secretary’s office that USDA would at some point purchase a significant quantity of the published books. During the period in which the manuscript was being developed, there were frequent and insistent communications from the author’s agent to USDA about the need for a purchase commitment from USDA for a large quantity of these books as part of the initial production run. The Under Secretary’s staff informed her several times about this issue. These developments culminated in October 1995, shortly after USDA gave final approval to the manuscript. The Team Nutrition Project Manager and the COR prepared a procurement request on October 2, 1995, for approximately 25,000 copies of the book, at a cost of approximately $50,000. However, the FCS Budget Division questioned the request because, in less than 1 year, FCS would be able to copy the books itself. When informed of these concerns, the Under Secretary replied, in writing, that “the Need in Schools is Now” and advised that “If justification is adequate, we proceed.” However, when told of the circumstances, the FCS Administrator directed that this procurement not go forward. To date, the book has not been published. 
In our August 1996 report, we identified a number of irregularities in the process used to hire the former Project Manager, set her salary, and collect financial disclosure statements from her and the former Assistant Project Manager. As we previously reported, FCS complied with the federal regulatory procedures for establishing, advertising, and considering applicants for the positions to which the Project Manager, Assistant Project Manager, and Project Coordinator were subsequently appointed. FCS judged each of these employees as qualified for the positions for which they applied, and the Office of Personnel Management certified that these applicants met the general standards for the positions for which they applied. However, our review of the former Project Manager’s employment application raised several concerns about her qualifications for the position she held. These concerns included the very short period of time she had spent in a previous job that FCS considered to be crucial experience in judging her qualifications, her apparent misrepresentation of her academic credentials, and her lack of answers to some questions on her application and her incomplete answers to others. Because FCS performed only a perfunctory review of the former Manager’s paperwork, it was unaware of the potential problems with her experience and her academic credentials. In addition, we found that FCS did not have an adequate basis for establishing the former Project Manager’s salary. FCS did not require her to submit documentation sufficient for it to assess her salary history, as required by USDA’s procedures. The former Project Manager may have overstated her prior salary by including in it the estimated value of pro bono consulting work, payments allegedly made to her husband, and projected earnings for several months in which she did not earn a salary. FCS was unaware of the former Manager’s apparent overstatement of her prior salary. 
As a result of her representation of her prior salary, FCS appointed her to a significantly higher pay level than might have otherwise been justified. Finally, although the former Project Manager and the former Assistant Project Manager were required to submit financial disclosure statements within the first 30 days of their employment at FCS, neither employee did so. The former Project Manager did not submit a statement until a year after it was due, and the statement covered only a small portion of the period in which she was employed at FCS. The former Assistant Manager submitted a completed form 5 months after being hired, but only after the threat of disciplinary action. USDA’s problems in managing its Team Nutrition procurement and personnel hiring practices can be attributed largely to the failure to follow the agency’s procedures and the lack of a strategic plan for the Team Nutrition initiative. From Team Nutrition’s inception, the Under Secretary has provided continual and specific direction of the initiative. The Under Secretary suggested the hiring of the former Project Manager and made decisions on procurements and a grant that demonstrated poor judgment and, in some cases, violated federal procurement law and ethics regulations. In addition, even though the initiative has been in effect and operating for nearly 2 years, there is no documented strategic plan to guide its operations. Without a strategic plan in place, FCS has had difficulty in determining how its contracts would be used to support Team Nutrition’s goals. The Under Secretary for Food, Nutrition and Consumer Services considers Team Nutrition to be an important initiative that requires her personal leadership. Therefore, from its inception, the Team Nutrition initiative did not operate within FCS’ existing program management structure. Instead, the Under Secretary placed the initiative within the Office of the FCS Administrator. 
According to the Under Secretary, she made this decision so that the new initiative would not be lost among the agency’s competing priorities and so that it could benefit from high-level support and attention. The Under Secretary required all Team Nutrition managers to take programmatic direction from her through meetings and weekly reports. She made specific recommendations about whom to hire and how funds should be spent. The agency’s normal internal controls and reporting and review processes were not followed for decisions on Team Nutrition. For example, contractors typically select their own subcontractors and monitor their subcontractors’ performance. This was not the case under the Global contract because the Under Secretary selected some subcontractors and, in some cases, directly managed their work. Consequently, Global had little control over these subcontractors’ work and costs. As we noted earlier, FCS and Global officials did not understand what work one subcontractor had done to justify its $40,000 payment. Team Nutrition officials were hampered in their efforts to manage the contracts, cooperative agreement, and grant because they had no documented strategic plan to guide these actions and measure their progress. Without a strategic plan, Team Nutrition officials had little understanding of the specific tasks that should be performed, the order in which these tasks should occur, and the way in which these tasks should be integrated to support Team Nutrition’s goals. For example, the COR told us that he was unable to provide Prospect with meaningful, substantive work because Team Nutrition had no documented strategic plan. With no strategic plan to guide their decision-making, Team Nutrition officials added tasks and funds to the Prospect contract in a haphazard fashion. For example, the Team Nutrition Project Manager decided to add six new tasks totaling $3 million to the contract 1 week before its expiration date for adding new work.
She requested the work despite the fact that she had informed the FCS contracting officials 14 days earlier that no new work would be added to the contract. According to the FCS contracting officials, they had to rush to complete the modification before the expiration date for adding new work. This time pressure precluded any meaningful price negotiations with the contractor before work began. Similarly, under the Global contract, Team Nutrition officials directed Global to hire five subcontractors but did not clearly define the tasks these subcontractors were to perform, including the products that were to result from these tasks. This lack of clear instructions resulted in duplication of effort and uncertain contributions to the Team Nutrition mission. For example, duplication occurred when FCS asked Global to hire two different firms to develop plans for the June 1995 launch of Team Nutrition. These two subcontracts totaled about $50,000, but neither plan was ever used, according to an FCS official. FCS recognized that it had a number of problems with its procurement administration and personnel management and has begun improvement efforts. In June 1995, FCS took steps to improve its management of the Global and Prospect contracts. These steps included establishing new operational procedures and increasing reporting responsibilities. Nearly a year later, FCS formed a Contract Management Review Task Force that assessed FCS’ policies and procedures for contract management. The task force recommended changes to improve FCS’ contract management. On June 21, 1996, the FCS Administrator issued numerous directives resulting from the task force’s recommendations. Several of these directives recommend that the agency adhere to existing policies. New policies include training requirements for all staff involved with procurement and the establishment of an agency ombudsman for staff to contact about potential procurement improprieties. 
To sustain the Team Nutrition initiative, on July 26, 1996, the FCS Administrator recommended to the Under Secretary that, in the short term, Team Nutrition’s activities be placed in FCS’ existing programmatic structure—as part of Special Nutrition Programs. Until a new director for the Nutrition and Technical Services Division is appointed, the Deputy Administrator of Special Nutrition Programs will oversee the initiative’s day-to-day operations. She will report to the Associate Administrator for Food and Consumer Services, who will, in turn, report to the FCS Administrator. However, according to the Associate Administrator, although the Under Secretary approved this recommendation on August 8, 1996, the Under Secretary has continued to provide programmatic direction to Team Nutrition managers. With respect to personnel management, as we reported earlier, FCS plans to (1) tighten procedures for examining the qualifications of applicants for senior-level positions; (2) strengthen its procedures for obtaining and properly reviewing documentation submitted by applicants that is sufficient for making appointments at salaries above the minimum rate; and (3) intensify its efforts to collect financial disclosure statements by aggressively following through with disciplinary action if its requests are not successful. In addition, the Administrator told us that he has directed the Human Resources Division to conduct an internal review of its personnel practices and that the Under Secretary had directed the Regional Administrator for FCS’ Mid-Atlantic Region to conduct a similar review. The actions FCS has taken so far to address procurement and personnel problems are steps in the right direction. However, it is too soon to determine whether these actions are sufficient to correct the problems that we identified. 
In conclusion, we found that the Team Nutrition contracts, cooperative agreement, grant, and personnel management practices we examined demonstrate a pattern of poor management and, in some cases, violated federal procurement law and ethics regulations. The problems in the management of the Team Nutrition initiative can be attributed largely to the failure to follow the agency’s procedures and the lack of a strategic plan for the initiative. FCS has taken some actions to address its procurement and personnel problems. However, unless better management judgment is exercised and the agency’s procedures are adhered to, these problems are likely to persist. Mr. Chairman, this completes my prepared statement. I would be pleased to respond to any questions you or Members of the Subcommittee may have. Subcontractor tasks under the Global contract included the following: (1) provide support for the Team Nutrition launch, including strategic counsel and management of the event; (2) provide strategic planning for Team Nutrition’s public relations campaign and for coordinating the entertainment industry’s participation in Team Nutrition; (3) provide research and development for the Team Nutrition launch, participation in strategic planning, development of press materials, and coordination of invitation mailing lists for the launch; (4) Lake Research, Inc.: conduct focus group research to assess the reactions of the general public and food stamp recipients to the U.S. Department of Agriculture’s proposals to change the Food Stamp Program; and (5) Podesta Associates, Inc.: develop and execute the U.S. Department of Agriculture’s Great Nutrition Adventure, including strategic development, organization of national events, press relations, preparation of press materials, and follow-up contacts.
Pursuant to a congressional request, GAO discussed the Department of Agriculture's (USDA) Team Nutrition contracts, cooperative agreement, and grant for multimedia nutrition education. GAO noted that: (1) Team Nutrition officials acted improperly in assigning tasks under the Global contract and did not follow normal contracting procedures in dealing with Global's subcontractors; (2) Team Nutrition officials did not provide the technical direction that another contractor needed to perform several tasks; (3) Team Nutrition failed to determine the message that children were receiving from the Team Nutrition Initiative advertisement; (4) the Under Secretary for Food, Nutrition, and Consumer Services (FCS) violated federal ethics regulations by participating in the administration of this grant; (5) FCS improperly reviewed the former project manager's employment application, academic credentials, and financial disclosure statements; (6) USDA management problems resulted from USDA's failure to follow agency procedures and its lack of a strategic plan for the Team Nutrition Initiative; and (7) FCS has taken steps to improve its procurement and personnel practices, including establishing new operational procedures, increasing reporting responsibilities, requiring procurement administration training, establishing an agency ombudsman to handle procurement improprieties, tightening procedures for the applications process, reviewing applicant documentation, and intensifying collection of employee financial disclosure statements.
The AQI program provides for inspections of imported agricultural goods, products, passenger baggage, and vehicles, including commercial aircraft, ships, trucks, and railcars, to prevent the introduction of harmful agricultural pests and diseases. CBP has responsibility for inspection activities at ports of entry, including reviewing passenger declarations and cargo manifests and targeting high-risk passengers and cargo shipments for agricultural inspection; inspecting international passengers, luggage, cargo, mail, and means of conveyance; and holding suspect cargo and articles for evaluation of plant and animal health risk in accordance with USDA regulations, policies, and guidelines. Inspection procedures vary somewhat depending on what pathway is being inspected (e.g., passengers, cargo, vessels), but, generally, CBP officers conduct a combined primary inspection for agriculture, customs, and immigration issues and, as needed, make referrals to CBP agriculture specialists who conduct more detailed secondary inspections. APHIS has responsibility for other AQI program activities, including providing training; providing pest identification services at plant inspection stations; setting AQI user fee rates and administering the collected fees; setting inspection protocols; and applying remedial measures other than destruction and re-exportation, such as fumigation, to commodities, conveyances, and passengers. APHIS lacks the authority to recover the full costs of the AQI program through fees. Section 2509(a) of the Food, Agriculture, Conservation, and Trade (FACT) Act of 1990 authorizes APHIS to set and collect user fees sufficient to cover the cost of providing and administering AQI services in connection with the arrival of commercial vessels, trucks, railcars, and aircraft, and international passengers.
APHIS does not have the authority to charge AQI fees to pedestrians or military personnel and their vehicles, nor to recover the costs of these inspections through the fees assessed on others (see fig. 1). AQI fee collections are divided between CBP and APHIS. Gaps between AQI fee collections and program costs are generally covered by CBP using its Salaries and Expenses appropriation, which is authorized for necessary expenses related to agricultural inspections, among other activities. In fiscal year 2012, AQI fee revenues totaled approximately $548 million (see fig. 2). As authorized by the FACT Act, these funds remain available without fiscal year limitation and may be used for any AQI-related purpose without further appropriation. When funds are available until expended, agencies may carry forward unexpended collections to subsequent years and match fee collections to average program costs over more than 1 year. Such carryovers are one way agencies can establish reserve accounts, that is, revenue to sustain operations in the event of a sharp downturn in collections. APHIS uses some of the AQI fee collections in this way. We have previously reported that a reserve can be important when fees are expected to cover program costs and program costs do not necessarily decline with a drop in fee revenue. APHIS maintains two types of reserves. APHIS refers to the first reserve as the “shared reserve” because it is meant to cover both APHIS and CBP needs in the event that fee collections decline unexpectedly. The second reserve is an “APHIS-only” reserve, and is funded from APHIS’s portion of total AQI collections. The APHIS-only reserve is intended to provide APHIS with budgetary flexibility. Between the two reserves, APHIS aims to maintain a total reserve balance equal to 3 to 5 months of AQI program costs. As previously mentioned, in 2010, APHIS engaged a contractor to conduct a thorough review of AQI program costs and options for redesigning AQI fees. 
In addition, APHIS contracted for an economic analysis to ensure that the proposed fees would not have unintended consequences. In reviewing the AQI fees, the contractor identified the direct and indirect costs of the AQI program for both APHIS and CBP by pathway, to the extent the agencies captured these costs for fiscal year 2010. The contractor also conducted activity-based costing to serve as the basis for future fee setting. These practices are consistent with federal cost accounting standards. The contractor assumed the accuracy of the data provided by both APHIS and CBP. Our recent work reported that data quality is an ongoing issue with AQI data systems, including the Work Accomplishment Data System (WADS), one of the data sources used by the contractor. However, Office of Management and Budget Circular A-25 states that when reviewing user fees, full cost should be determined or estimated using the best available records of the agency, and new cost accounting systems do not need to be established solely for the purpose of rate-setting. The contractor also solicited input from stakeholders as part of the fee review process, a practice consistent with our User Fee Design Guide. APHIS is using the AQI cost model developed by the contractor as well as the findings from the fee review to update the AQI fee schedule. According to APHIS officials, as of February 2013, APHIS and CBP are considering staff recommendations for a new fee structure, including new fee rates. Pending approval from both USDA and DHS, APHIS expects to publish a notice in the Federal Register with a proposed new fee schedule in the fall of 2013. As such, it is important to note that the current staff recommendations for AQI fees are subject to change and that the fee structure and rates APHIS establishes will be informed by many factors, including public comments through the rulemaking process.
In fiscal year 2011—the most recent year for which data were available—AQI fee collections covered 62 percent of total identified AQI program costs, leaving a gap of more than $325 million between total AQI costs and total AQI collections. This gap was covered with funds from CBP’s Salaries and Expenses appropriation and by funds from other agencies to cover imputed costs. Although the AQI program is often referred to as a fully fee-funded program, it is not. Fees assessed on individual pathways are to be set commensurate with the costs of services with respect to a particular pathway. For passenger fees, the costs of services include the costs of related inspections of the vehicle. Once revenue is earned from one pathway, however, it may be spent on any AQI-related program cost. For example, revenue earned from commercial airline passenger inspections may be spent on private air passenger inspection activities. However, as shown in table 1, APHIS has chosen not to charge some classes of passengers, and the collections of the AQI program as a whole do not equal total identified program costs. Several other factors also compound the gap between AQI program costs and total AQI fee collections, as discussed below. Specifically, CBP’s AQI costs are understated, AQI fee rates do not reflect imputed costs, and CBP and APHIS do not fully recover the costs of AQI-related reimbursable overtime services. CBP does not capture all time spent on agriculture activities in its Cost Management Information System (CMIS)—the system in which CBP tracks its activities and determines personnel costs. Both to accurately set AQI fee rates to recover program costs and to allocate fee revenues between APHIS and CBP proportionate with each agency’s program costs, CBP must accurately track its expenses related to the AQI program. In 2005, CBP agreed to report its AQI-related expenses to APHIS quarterly.
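As a rough consistency check of the funding gap described above: the 62 percent coverage rate and the gap of more than $325 million are figures from this report, while the implied totals below are our extrapolation, not reported values (and are lower-bound estimates, since the gap is stated as "more than" $325 million).

```python
# Back-of-the-envelope check of the FY2011 AQI funding gap.
# Reported: fees covered 62% of identified costs; gap > $325 million.
coverage = 0.62
gap = 325_000_000

# If the gap is the uncovered 38%, the implied totals are:
implied_total_costs = gap / (1 - coverage)
implied_collections = implied_total_costs * coverage

print(f"implied total AQI costs: ${implied_total_costs / 1e6:,.0f}M")  # roughly $855M
print(f"implied fee collections: ${implied_collections / 1e6:,.0f}M")  # roughly $530M
```

The implied collections figure is broadly consistent with the approximately $548 million in AQI fee revenues reported for fiscal year 2012.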
CBP officers’ and agriculture specialists’ time is generally charged to a mix of CMIS codes to represent the variety of activities they perform. Although this mix of codes will understandably vary, CBP guidance specifies that time spent by officers conducting primary inspections—which, as previously discussed, include aspects of agriculture, customs, and immigration inspections—is to be attributed to a mix of CMIS codes representing each of these three functions. We found, however, that at 31 ports and other locations, CBP did not charge any primary inspection time to agriculture-related CMIS codes for all or a portion of fiscal year 2012, which means that AQI costs at these ports are being understated. Further, CBP officers at ports we visited described different procedures for using CMIS codes and wide variation in the extent to which they verify that CMIS codes accurately capture work activities. Because CBP’s AQI costs are underreported by some unknown amount in CMIS, APHIS does not have complete information about CBP’s AQI-related costs and therefore is unable to consider total program costs when setting AQI fee rates. CBP headquarters oversees ports’ use of CMIS to track AQI expenses by providing guidance and training, and by annually reviewing CMIS data from about 50 of the highest-volume ports. In addition, CBP field offices review CMIS codes for ports in their jurisdiction on a quarterly basis. CBP headquarters also produces CMIS guidance, which includes a CMIS code dictionary and a notice that the time officers spend on primary inspection should be charged to customs, immigration, and agriculture codes. Instructions for reviewing the use of CMIS codes are also provided to ports. Although the instructions provide brief examples, they do not specify how ports should determine the appropriate mix of codes to use or the frequency with which ports should conduct work studies. 
At some locations we visited, CBP officials said that headquarters does not provide sufficient CMIS guidance to enable accurate and consistent reporting of staff activities. CBP headquarters officials told us that they provide semiannual training, which is intended to ensure correct CMIS use at ports. However, attendance at these training sessions is not required, and officials said there is high turnover among CMIS practitioners at the ports and field offices. The current AQI fee rates do not cover imputed AQI program costs. APHIS estimated that these costs were about $38 million in fiscal year 2011, the most recent year for which data were available. In 2008 we recommended that the Secretary of Agriculture include these costs when setting AQI fees consistent with federal accounting standards, OMB Circular No. A-25 guidance, and USDA policy. APHIS agreed with the recommendation and, as we will discuss more fully later in this report, has included some, but not all, of these costs in its recent analysis of AQI costs. Because APHIS is authorized to set AQI fees to recover the full cost for each pathway, it is important that the agency accurately captures full program costs. The AQI program does not fully recover costs for reimbursable overtime agriculture inspection services in part because (1) the reimbursement rates paid by users are set by APHIS regulations and do not cover the agencies’ overtime costs, (2) CBP does not consistently charge for these services, and (3) when CBP does charge, it does not collect payments for these services in a timely manner. CBP is authorized to charge for overtime for agriculture inspection and related services in some situations, known as reimbursable overtime. When a CBP officer or agriculture specialist performs an inspection service on a Sunday or holiday or while the employee performing the inspection is on overtime, CBP is to bill the user for the service.
This can happen, for example, when an importer requests an inspection of agricultural produce outside of normal duty hours. Reimbursable overtime collection rates are not aligned with the agencies’ current staff costs, which means any reimbursable overtime collections do not fully cover costs to perform these services. APHIS has the authority to set reimbursable charges to recover the full costs of overtime services, but the reimbursement rates have not been adjusted since 2005. Under the APHIS regulations, CBP may charge $51 per hour for agriculture-related overtime Monday through Saturday and holidays, and $67 per hour on Sunday. When we asked CBP officials for their average annual costs for overtime agriculture inspections, they told us that they have not calculated these costs. However, CBP was able to create such an analysis for us using August 2012 as an example. CBP estimated that its average salary cost for overtime agricultural inspections in August 2012 was approximately $85 per hour, and it billed approximately $55 per hour for those services. CBP officials further estimated that for that month, reimbursable agriculture overtime services cost the agency approximately $58,000, while the agency only billed approximately $37,000 for those services—or about 64 percent of the cost. APHIS’s rates for reimbursable agriculture overtime services are similarly misaligned with its costs. APHIS and CBP officials worked together to develop a draft proposed rule to update the overtime rates, but according to APHIS officials it has been on hold since summer 2011. CBP headquarters encourages ports to charge for reimbursable overtime services and provides guidance clarifying how they should do so. This practice is consistent with effective fee design principles; as we have previously reported, if a service primarily benefits identifiable users, users should pay for that service. However, CBP personnel at some ports told us they do not charge for reimbursable agriculture services because their port does not get to keep the reimbursable overtime funds. In addition, officials at three ports said it is administratively burdensome to process the reimbursable overtime forms.

CBP does not ensure that reimbursable overtime is collected when charged. APHIS regulations require that agriculture-related reimbursable overtime be paid for in advance and that overtime services be denied to anyone whose account is more than 90 days delinquent. However, according to CBP data, as of August 31, 2012, the agency had more than $200,000 in past-due overtime agriculture inspection bills, of which more than $160,000 is more than a year past due. Some bills are as old as 2004, and one company has more than $9,000 in past-due bills that were issued from 2004 through 2012. Although CBP can and does assess interest on past-due reimbursable overtime bills, it does not consistently deny overtime services to entities with accounts more than 90 days delinquent.

APHIS is considering new or updated fees for AQI services. However, the fees might not recover the full costs of commercial truck inspections. APHIS lost $85 million in revenue in fiscal year 2010 due to capping the annual amount of AQI fees paid by commercial rail, vessels, and trucks, but as of February 2013, the staff recommendations APHIS is considering would remedy only the revenue loss for commercial rail and vessels. According to APHIS data, in fiscal year 2010, the caps on rail and vessel fees resulted in a combined revenue loss of about $46 million, while the caps on truck entries resulted in a $39 million loss for that year. These revenue losses are currently covered by CBP through its annual appropriation or by AQI user fees collected from other pathways.
As we have previously reported, charging users the full cost of the inspection they are receiving can promote economic efficiency and equity by assigning costs to those who both use and benefit from the services being provided. Commercial trucks seeking entry into the United States can either pay the $5.25 AQI fee each time they cross the border, or they can pay a one-time flat AQI fee of $105 each calendar year. To pay the annual AQI fee, trucks must use an electronic transponder, which must be purchased in advance. Although the $105 annual AQI truck transponder fee is equivalent to paying for 20 arrivals each year, according to APHIS data, in 2010, trucks with a transponder crossed the border 106 times a year on average. In Otay Mesa, California, for example, we observed trucks that CBP officers told us typically make three to four border crossings a day, dropping off their cargo nearby and returning for another shipment. APHIS is considering raising the per-entry truck fees to more closely align fees with costs. To encourage use of truck transponders, APHIS is considering setting the fee rate for transponders at a rate equivalent to the price of 40 arrivals but still well below the average number of arrivals for trucks with transponders. In this way, APHIS hopes to provide a financial incentive to use transponders both to minimize CBP’s administrative burden (by reducing the number of fee collection transactions at the border) and to reduce wait times at border crossings. According to a CBP estimate, trucks with transponders save at least 10 minutes when crossing the border because they do not have to pay the fee at the time of crossing, benefiting trucking firms and shippers. This time savings is, in and of itself, another incentive for truck transponder use. Shorter wait times at the border also support the CBP mission to foster international trade.
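The truck-fee arithmetic above can be made concrete with a short sketch. The $5.25 per-entry fee, the $105 transponder fee, the 106-crossing average, and the caps-related revenue losses are figures from this report; pricing a transponder at 40 arrivals of the current $5.25 fee is our illustration only, since the per-entry rate APHIS would actually pair with that option has not been published.

```python
# Commercial-truck AQI fee arithmetic (figures from the report, except as noted).
per_entry = 5.25             # current per-crossing AQI fee
transponder_annual = 105.00  # current annual transponder fee

# Break-even point: the transponder pays off after this many arrivals per year.
break_even = transponder_annual / per_entry            # 20 arrivals

# Average transponder truck crossed 106 times in 2010, so its effective rate is:
avg_crossings = 106
effective_rate = transponder_annual / avg_crossings    # under $1 per crossing

# Illustration only: a transponder priced at 40 arrivals of the CURRENT fee.
transponder_40 = 40 * per_entry                        # $210

# Reported FY2010 revenue losses from fee caps: $46M (rail and vessel) + $39M (trucks).
caps_loss_millions = 46 + 39                           # $85M total

print(break_even, round(effective_rate, 2), transponder_40, caps_loss_millions)
```

The gap between the 20-arrival break-even price and the 106-crossing average is what drives the revenue loss: the average transponder truck pays for less than a fifth of its crossings.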
The contractor assisting APHIS with its fee review did not propose a way for APHIS to better align truck fees with the full cost of truck inspections while still incentivizing the use of transponders, but noted that for the long term, APHIS should look into other possible alternatives, including examining the feasibility of implementing toll-based transponders, which would allow trucks to pay for each crossing while still retaining a low administrative burden for CBP and the time savings of the current transponder system. Table 2 demonstrates, for illustrative purposes only, various combinations of per-entry and annual transponder fee rates to more closely align commercial truck fees with costs under the current system. For example, one option adds a portion of the cost of inspecting trucks with transponders to the per-arrival fee for trucks, which would provide an incentive for the use of transponders (see table 2). In another option, trucks could purchase different “packages” of arrivals at a discounted rate (50 arrivals, 100 arrivals, 200 arrivals, etc.). In commenting on a draft of this report, APHIS officials said that because the distribution around the mean number of arrivals is unknown, it would be difficult to determine the effects of a change in truck transponder pricing.

As previously discussed, although APHIS has authority to charge AQI fees to all international passengers, it currently only charges fees to international commercial air passengers. As of February 2013, APHIS is not considering fees for international passengers aboard private aircraft, private vessels, buses, and railcars, citing administrative burdens and anticipated challenges relating to collecting these fees. Because APHIS does not currently charge fees to inspect these passengers, these costs are covered by CBP’s annual appropriations or AQI fees paid by other users. This reduces the economic efficiency and equity of the fees because the costs of the inspections are not assigned to those who both use and benefit from them. APHIS’s authority permits it to charge all passengers for the cost of inspecting both passengers and the vehicle in which they arrive, but does not always permit APHIS to do the reverse; that is, to include in the vehicle AQI fees the cost of inspecting the passengers arriving in the vehicle. Charging the cost of inspecting bus, private aircraft, private vessel, and rail passengers and the vehicles in which they arrive to the passengers themselves would be administratively burdensome because there is no existing mechanism for collecting fees from these classes of passengers. However, in several instances, CBP can and does charge customs fees—fees collected to help offset the costs of customs inspections—to private vehicles rather than the passengers. If APHIS had statutory authority to charge all vehicles in which passengers travel, rather than only the passengers themselves, then APHIS could leverage existing customs fee collection mechanisms to minimize administrative burden in collecting AQI fees. We previously recommended that USDA and DHS develop a legislative proposal, in consultation with Congress, to harmonize customs, immigration, and AQI fees. To date, a proposal to harmonize these three fees has not been introduced.

Bus passengers. The cost of bus passenger inspections totaled about $23 million, or about $4 per passenger, in fiscal year 2011. CBP officials told us that it would be difficult to collect the fee from individual passengers. In June 2012, our limited observations of the inspection process for bus passengers at San Ysidro, California, revealed logistical challenges consistent with these concerns.
In this port, bus passengers get off the bus and are processed along with pedestrians crossing the border, which would make it difficult to properly separate out and charge a fee only to bus passengers. To avoid these kinds of logistical challenges, bus passenger fees could be collected using the air passenger fee model in which the fee is collected by the airline and then remitted to APHIS periodically. However, APHIS’s fee review noted that barriers to entry for the bus passenger industry are lower than air and cruise vessel industries—which could mean a large and changing list of bus companies from which APHIS would need to collect fees. Because of this, an APHIS official stated, this type of remittance model could be burdensome to maintain and audit. The official also told us that APHIS has discussed both a possible transponder approach to collect fees for buses, and an approach in which buses with over 15 seats and buses with fewer than 15 seats pay different fee rates. In commenting on a draft of this report, APHIS officials said that due to logistical challenges, they would have to seek new legislative authority to allow for the collection of fees for the bus rather than charging a fee for the individual passenger. Private aircraft and private sea vessels. The total cost of inspecting private aircraft passengers in fiscal year 2011 was about $11 million, which equates to approximately $34 per passenger or $93 per aircraft for each arrival. The cost of inspecting private vessel sea passengers for fiscal year 2011 was about $4.9 million, which equates to approximately $20 per passenger or $61 per vessel for each arrival. As stated above, AQI’s statute authorizes it to charge passengers, but not the private aircraft or vessels in which those passengers arrive. However, CBP charges a customs fee of $27.50 per year for each private plane and vessel at least 30 feet long. 
Absent a change in APHIS’s statutory authority allowing it to charge private aircraft and vessels for AQI services, APHIS and CBP cannot leverage the CBP infrastructure already used to collect customs inspections fees for private aircraft and vessels. APHIS considered the effect of charging new fees for private aircraft and vessels, but as of February 2013, the fees APHIS is considering might not recover the costs of AQI services for these users. APHIS’s fee review noted that it would be relatively easy to administer an annual fee on private aircraft or vessels using CBP’s current process, but concluded that the potential revenue would be very small. However, the potential revenue from such a fee would be greater than the AQI fees currently assessed on freight rail. It is also worth noting that even if an AQI vessel fee was piggybacked onto the customs vessel fee, vessels presenting similar agriculture risks may not all be subject to an AQI fee. As mentioned above, CBP’s customs fee applies to private vessels that are at least 30 feet long. However, one CBP official told us that many private vessels arriving at his port are only about 20 feet long and thus are not required to pay the customs fee, but that these vessels still present agriculture risks similar to larger vessels because 20-foot vessels are large enough to store food. According to APHIS officials, APHIS has not assessed the agricultural risks posed by smaller vessels and said that the risks would likely vary at each port. Rail passengers. Rail passenger inspections cost the AQI program about $1.6 million in fiscal year 2011, or almost $6 per passenger. As stated previously, AQI’s statute authorizes it to charge rail passengers seeking to enter the country for the costs of inspecting the passengers as well as the railcar in which they are riding. CBP charges a customs inspection fee for each passenger railcar, but APHIS does not charge an AQI fee. 
Absent a change, APHIS and CBP cannot leverage the per-car customs inspection fee infrastructure currently used for the arrival of each railroad car carrying passengers. In 2005 APHIS set AQI commercial vessel fees—which are levied on cruise and cargo vessels alike—to cover the costs of inspecting vessel passengers. According to its authorizing statute, APHIS may set fees to cover the costs of AQI services for arriving international passengers and commercial aircraft, trucks, vessels, and railcars. The amount of the fee must be commensurate with the costs of AQI services for each pathway (i.e., class of passengers or entities paying the fees), preventing cross-subsidization of costs between users in setting the fee rates. The way the fees are currently set, the vessel fee includes the cost of inspecting vessel passengers, such as passengers arriving on cruise ships. APHIS is considering replacing the cruise vessel fee with a sea passenger fee that would recover the costs of inspecting both sea passengers and the cruise vessels. The cost of inspecting cruise passengers for fiscal year 2011 was about $17.9 million. Charging an inspection fee to sea passengers would not require a new collections infrastructure because commercial vessel passengers currently pay user fees for customs inspections, which are remitted to CBP by the party—such as the cruise line—issuing the ticket or travel document. As we mentioned previously, in 2008 we recommended that DHS develop a legislative proposal, in consultation with Congress, to harmonize the customs, immigration, and AQI fees. To date, a proposal to harmonize these three fees has not been introduced. In addition, we previously reported that existing collection mechanisms can be leveraged to minimize administrative burden in collecting fees.
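The per-pathway figures cited in this discussion can be cross-checked with a short sketch. The total annual costs and per-unit costs are from the report; the implied passenger and arrival volumes below are our extrapolations from those rounded figures, not data reported by APHIS or CBP.

```python
# FY2011 unit-cost figures for uncharged passenger pathways (from the report):
# pathway: (total annual inspection cost, approx. cost per passenger)
pathways = {
    "bus":              (23_000_000, 4),
    "private aircraft": (11_000_000, 34),
    "private vessel":   (4_900_000, 20),
    "rail":             (1_600_000, 6),
}

# Dividing total cost by cost-per-passenger gives a rough implied volume.
for name, (total, per_passenger) in pathways.items():
    print(f"{name}: ~{total / per_passenger:,.0f} passengers implied")

# The per-arrival figures for the private pathways ($93 per aircraft,
# $61 per vessel) similarly imply rough arrival counts.
implied_aircraft_arrivals = 11_000_000 / 93   # on the order of 118,000 arrivals
implied_vessel_arrivals = 4_900_000 / 61      # on the order of 80,000 arrivals
```

Because every figure in the report is rounded, these implied volumes are order-of-magnitude estimates only; they illustrate why, for example, a roughly $4 bus passenger fee spreads a $23 million cost over several million crossings.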
APHIS is considering a new fee for treatments and monitoring but might not change current AQI policy for two other specialized AQI services—permits for importing commodities and monitoring of garbage compliance agreements—that benefit only a limited set of users yet whose costs are borne by other AQI fee payers. By continuing to include the costs of these specialized services in the regular AQI fees for each pathway, the users that benefit most from these services do not know how much they are paying for these services—which may encourage overuse of these services—while other fee payers are paying for services they do not use. As we have previously reported, a more tailored, user-specific approach to fee-setting better promotes equity and economic efficiency by assigning costs to those who use or benefit from the services. APHIS does not track costs separately for conducting and monitoring of treatments, so it cannot identify the specific costs related to each activity. The contractor’s report recommended that it do so. First, in most cases, APHIS monitors treatments performed by others, generally at no additional cost to the importer, to ensure compliance with APHIS policies and procedures. Second, and less commonly, in certain instances APHIS provides both treatment and monitoring services for certain commodities, generally at no additional cost to the importer. Because the cost of treatment and monitoring provided by APHIS is bundled into the AQI fees for air cargo, maritime cargo, commercial trucks, and rail cargo, these services—including those for repeat offenders who require treatments regularly—are subsidized by other shippers. Further, importers may not be aware of the costs being incurred for APHIS’s treatment and monitoring services. Directly charging importers for these services may encourage importers to work with growers whose products do not regularly require treatment because importers would directly incur the costs of the treatments.
In keeping with basic economic principles, this may also improve the economic efficiency of the fees. Import commodity permits. Permits are required to import and transport certain agricultural commodities. Although APHIS has authority to charge for permits, under the current system these services are paid for indirectly through the AQI fees. In fiscal year 2011, APHIS issued 12,152 permits for the import of commodities such as wood products, plants, and soil. Multiple commodities can be listed on a single permit, which is valid for that importer for a year. APHIS spent about $13 million in fiscal year 2011 on permit-related activities; as mentioned previously, the cost of these permits is included in the regular inspection fees for air cargo, maritime cargo, trucks, and rail cargo. As such, importers may not be aware of the cost incurred for their permit application and adjudication, which may lead to inefficient use of APHIS resources if importers “overpurchase” permit applications. According to APHIS officials, importers sometimes obtain permits that they do not use. The contractor’s report proposed a charge of $1,075 for each commodity permit and $1,775 for each pest permit. However, APHIS officials were concerned that charging for permits may create an unintended barrier to trade and prompt retaliatory actions by other countries with which we trade. Monitoring of compliance agreements for regulated garbage. Costs related to monitoring compliance with regulated garbage agreements were projected to be about $36 million in fiscal year 2013. CBP monitors compliance agreements for disposal of regulated international garbage but does not currently charge additional fees for these services. APHIS guidance requires that agriculture specialists monitor all facilities with compliance agreements quarterly—generally airports and seaports that serve international travel.
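The permit figures above imply an average cost per permit that is roughly in line with the contractor's proposed commodity-permit charge. The division below is our back-of-the-envelope check; the report does not state how the contractor derived its proposed rates.

```python
# Implied average cost per import permit, from figures in the report.
permit_costs_fy2011 = 13_000_000   # reported APHIS permit-related spending
permits_issued_fy2011 = 12_152     # reported permits issued

avg_cost_per_permit = permit_costs_fy2011 / permits_issued_fy2011
print(round(avg_cost_per_permit))  # roughly $1,070, near the proposed $1,075 commodity-permit charge
```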
In addition, officials stated that certain ships, such as cruise ships, have compliance agreements and the disposal of their garbage is regularly overseen by CBP agriculture specialists. APHIS might continue to include these costs in inspection fees for air, maritime, truck, and rail cargo rather than capture them under a separate fee for monitoring compliance agreements. The fees APHIS is considering would recover imputed costs paid by the Office of Personnel Management and the Department of Labor on behalf of APHIS and CBP and attributable to the AQI program. By incorporating some imputed costs in its analysis of AQI program costs, APHIS makes progress in implementing our 2008 recommendation. However, APHIS’s analysis does not include the costs borne by the Department of the Treasury (Treasury) for collecting, depositing, and accounting for certain AQI fee collections. We previously reported that agencies authorized to charge full-cost recovery fees could include the Treasury’s cost of collections in their fee rates and deposit these funds into the Treasury. APHIS officials told us that Treasury has not yet provided APHIS with a statement of these costs. However, federal accounting standards specify that when such costs are unknown, a reasonable estimate may be used. CBP’s share of AQI fee revenue is significantly lower than its share of program costs. For example, in fiscal year 2011 (the most recent year for which APHIS could provide these data), CBP incurred 81 percent of total AQI program costs, but received only 60 percent of fee revenues; APHIS incurred 19 percent of program costs but retained 36 percent of the revenues, as shown in table 3.
Further, although AQI costs exceeded AQI fee revenues by more than $288 million in fiscal year 2011—a gap that was bridged in part using amounts from CBP’s annual Salaries and Expenses appropriation—APHIS used more than $25 million of the AQI fee collections to increase the AQI reserve balance that year. In 2005, CBP and APHIS agreed that user fee collections should be allocated based on each agency’s expected annual costs. Each fiscal year, APHIS and CBP agree to an estimate of total AQI revenues for that year and how those funds will be allocated between the agencies. For 2006, the agencies agreed on a 61/39 percent split for CBP and APHIS, respectively; in subsequent years the split shifted slightly to 63/37 percent and has changed little since the 2006 distribution. Table 4 shows the planned division of revenues between CBP and APHIS for 2010 to 2013. Although the 2005 agreement states that AQI funds will be distributed between CBP and APHIS in proportion to each agency’s AQI-related costs, this does not happen in practice. Rather, the 63/37 percent split means that APHIS retains AQI fee revenues sufficient to cover all of its estimated AQI costs—including costs attributable to AQI services for which no fees are authorized or charged—and transfers the remainder of the estimated fee revenues to CBP. In other words, APHIS covers all its AQI costs with AQI fee revenues, while CBP does not. To bridge the resulting gap, CBP uses its annual appropriation. Because the 63/37 percent split is based on estimated revenues, APHIS and CBP developed an adjustment process for when actual AQI fee collections differ from the amount that was expected. When total actual fee collections for the year exceed (or fall short of) the estimate, the difference is added to (or taken from) the shared reserve. As previously mentioned, the shared reserve is money that is carried over each year and is meant to cover both APHIS and CBP needs in the event that fee collections decline unexpectedly. 
If, however, APHIS’s costs are greater or less than the estimated 37 percent, the difference is added to or taken from a second reserve; as mentioned previously, this is known as the APHIS-only reserve. For example, according to APHIS officials, a USDA hiring freeze has resulted in lower-than-expected APHIS AQI spending in recent years. Specifically, because APHIS costs were lower than the estimated 37 percent in fiscal year 2012, APHIS took a portion of the 37 percent allocated to it and put some of those funds into this second reserve. Figure 3 shows the total actual distribution of AQI program funding among CBP, APHIS, and both reserve funds in fiscal year 2011. APHIS and CBP also adjust the 63/37 percent split as they see how actual revenues compare with estimates. For example, in fiscal year 2011, fee revenues were higher than estimated and APHIS and CBP each received distributions of $1 million more than the initial estimate. Table 5 shows the distributions and obligations of actual AQI fee revenues for recent years. We have previously reported that maintaining a reserve balance is important for fee programs to ensure that program operations can be sustained in case fee revenues decline but workload does not. According to APHIS officials, APHIS’s target balance for the total reserve is 3 to 5 months’ worth of AQI costs. Officials told us that this level would ensure the stability of the program in case of fluctuations in fee volumes, bad debts, unanticipated crises, or the need for one-time capital expenditures. The upper end of the target—5 months—is the amount APHIS officials estimate would be needed to shut down the inspection program completely if it were to cease operations. However, a maximum target balance aligned with more realistic program risks would allow for lower reserve levels. 
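The two-reserve mechanics described above reduce to a simple rule: revenue surprises flow into or out of the shared reserve, while differences between APHIS's actual costs and its estimated share flow into or out of the APHIS-only reserve. A minimal sketch of that logic (the function and the example figures are illustrative, not APHIS's actual accounting):

```python
def adjust_reserves(estimated_collections, actual_collections,
                    aphis_share, actual_aphis_costs,
                    shared_reserve, aphis_only_reserve):
    """Illustrative model of the AQI reserve adjustments described above.

    aphis_share is APHIS's estimated slice of collections (e.g., 0.37).
    """
    # Collections above (below) the estimate are added to (taken from)
    # the shared APHIS-CBP reserve.
    shared_reserve += actual_collections - estimated_collections

    # APHIS spending below (above) its estimated share is added to
    # (taken from) the APHIS-only reserve.
    aphis_estimate = aphis_share * estimated_collections
    aphis_only_reserve += aphis_estimate - actual_aphis_costs

    return shared_reserve, aphis_only_reserve

# Hypothetical year: collections come in $10 million above the estimate,
# and APHIS spends $5 million less than its 37 percent allocation
# (e.g., because of a hiring freeze).
shared, aphis_only = adjust_reserves(
    estimated_collections=500e6,
    actual_collections=510e6,
    aphis_share=0.37,
    actual_aphis_costs=0.37 * 500e6 - 5e6,
    shared_reserve=100e6,
    aphis_only_reserve=0.0,
)
print(shared / 1e6, aphis_only / 1e6)  # shared grows by 10; APHIS-only by 5
```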
The rationale for maintaining a reserve balance as a buffer against a complete program shutdown is not as compelling when a fee-funded program also has access to annual appropriations from the general fund, as Congress has an opportunity to weigh its funding priorities on an annual basis. Moreover, our analysis of APHIS’s cost and collection projections shows a higher total reserve balance than the 3- to 5-month target. The total reserve balance was approximately $107 million at the end of fiscal year 2012, which represents about 2.4 months of the AQI program costs paid with AQI fee revenues that year. Our analysis of APHIS data shows that the balance in the total AQI reserve would grow by an estimated $55 million, $75 million, and $96 million in fiscal years 2013, 2014, and 2015, respectively. This would bring the reserve balance to approximately $333 million—or more than triple the fiscal year 2012 balance. To further put this amount in perspective, $333 million would have covered more than 7 months of the AQI costs paid with fee revenues in fiscal year 2012. An unnecessarily high total reserve balance means that monies that could be used to pay for AQI program costs would instead be carried over for possible future needs. This strategy would increase reliance on CBP’s annual appropriation to pay for current AQI-related costs. APHIS’s projected level for the shared reserve fund exceeds the historical use of the fund (see figure 4). In past crises, APHIS and CBP used much less than APHIS’s total reserve balance target of 3 to 5 months’ worth of AQI costs. During the financial crisis in fiscal year 2009, AQI collections dropped by more than $46 million compared with the prior year and the reserve fund dropped by about $50 million, reducing the reserve from 2.3 months of fiscal year 2008 costs paid with fee revenues to 1.1 months of fiscal year 2009 costs paid with fee revenues, as shown in figure 4. 
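These projections can be reproduced from the figures above. A brief sketch (the monthly cost rate is implied by the reported 2.4-month coverage figure rather than stated directly in the report):

```python
# Reproduce the AQI reserve projection from the figures in the text.
balance_fy2012 = 107e6               # total reserve, end of FY2012 ($)
months_covered_fy2012 = 2.4          # coverage reported for FY2012
monthly_cost = balance_fy2012 / months_covered_fy2012  # implied ~$44.6M/month

projected_growth = [55e6, 75e6, 96e6]  # projected additions, FY2013-FY2015
projected_balance = balance_fy2012 + sum(projected_growth)

print(f"Projected balance: ${projected_balance / 1e6:.0f} million")  # $333 million
print(f"Months of costs covered: {projected_balance / monthly_cost:.1f}")  # about 7.5
print(f"Multiple of FY2012 balance: {projected_balance / balance_fy2012:.1f}")  # about 3.1
```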
Similarly, after the events of September 11, 2001, the reserve fund dropped from approximately $68 million on October 1, 2001, to just less than $45 million on September 30, 2002, reducing the reserve to about 2.5 months of fiscal year 2002 costs paid with fee revenues. APHIS’s collection practices for the AQI fees assessed on railcars are not consistent with APHIS regulations. According to the APHIS fee regulations, companies whose railcars enter the United States may pay AQI fees in one of two ways. First, they can pay a $7.75 fee for each arrival of a loaded commercial railcar. Second, they can prepay a flat fee of $155 annually for a specific railcar; the $155 annual fee is equal to the cost of 20 individual arrivals. According to APHIS officials, no railcar companies choose the $155 flat fee; rather, all choose to pay the $7.75 per-arrival fee. However, rather than collecting this fee for each arrival of a loaded railcar, as its regulations require, APHIS collects fees only for the first 20 arrivals a railcar makes each year. As a result, in fiscal year 2010, APHIS lost $13.2 million in railcar fee revenue because about 1.7 million railcar arrivals for which a fee was due were not charged. CBP does not verify that it collects applicable user fees for every commercial truck, private aircraft, and private vessel for which the fees are due, resulting in an unknown amount of lost revenue. We have previously reported that internal controls should generally be designed to assure that ongoing monitoring occurs in the course of operations. Under APHIS and CBP regulations, commercial trucks entering the United States must pay AQI and customs user fees by purchasing an annual transponder or paying the fees upon each arrival. Trucks without transponders pay fees upon arrival by cash, check, or credit card. 
CBP personnel at ports we visited compared the amount of cash deposited for AQI and customs user fees to the number of cash register transactions to ensure against theft, but did not verify that all trucks that were supposed to pay the fees actually paid them. In other words, CBP cannot be sure that it collected these fees from all trucks required to pay them. The Automated Commercial Environment system alerts CBP when an arriving truck does not have a transponder and therefore owes the fee at the time of crossing, but CBP does not require officers to record in the system that the truck has paid the fee, or review this information to verify whether all trucks paid the fees. Similarly, CBP does not consistently verify that all arriving private aircraft and private vessels have a customs user fee decal, as required. As we stated previously in this report, per CBP regulations, private aircraft and private vessels more than 30 feet long arriving in the United States must pay an annual $27.50 customs user fee. As proof of payment, these aircraft and vessels receive a customs user fee decal. As APHIS’s fee review noted, the customs decal could provide an administratively simple mechanism on which to piggyback an AQI fee for private aircraft and vessels. However, absent more rigorous oversight of proper payment for customs decals, this strategy would not be as effective as it otherwise could be. For private aircraft, the Advanced Passenger Information System (APIS) can show the customs user fee decal number before arrival. However, APIS neither requires that the decal number be entered nor flags aircraft for which decal numbers are not entered. For private vessels, the Pleasure Boat Reporting System and the Small Vessel Arrival System both include a field for the customs user fee decal number. 
However, the decal number is not a required field in either system and the systems do not link to the Decal and Transponder Online Procurement System to provide an automated mechanism to verify the decal number. According to CBP officials, CBP officers are to physically verify the decal during their inspection of the aircraft or vessel upon arrival. However, CBP does not verify that this actually occurs, nor are procedures in place nationwide to ensure that CBP officers collect the decal user fee as required if arriving vessels and aircraft lack a valid decal. Further, on one of our site visits to a small airport, the CBP officers conducting the inspections were unfamiliar with the process they should follow if an aircraft arrived without a decal; port records showed that the last time a customs user fee decal had been sold at that airport was in 2010. Later that day, port officials informed us that shortly after our visit an aircraft arrived without a decal and the officers collected the decal fee. We also observed inspections of private vessels that arrived without customs decals; the CBP officer conducting the inspections did not collect the decal user fees, but instead informed the vessel owners of the requirement to get a decal. The AQI program is a key component in the nation’s efforts to protect against exotic diseases and pests and the billions of dollars in damage they can cause. Analyzing and understanding the costs of providing these important services—for which CBP and APHIS have joint responsibility— are important so that the agencies and Congress have the best possible information available to them when designing, reviewing, and overseeing AQI fees and operations. This is especially true given the increasing need for fiscal restraint in an environment of tightening discretionary budgets. 
By conducting a thorough review of AQI program costs and options for redesigning AQI fees, APHIS has taken important steps in identifying and strengthening the link between AQI program costs and fee collections. However, the current AQI fee structure does not (1) recover full costs from some users, as authorized; (2) charge fees to some passengers whom APHIS is authorized to charge but chooses not to for policy reasons; and (3) align fees with the program costs to maximize economic efficiency and equity. As of February 2013, the fees APHIS is considering would not fully remedy these issues (partly because of gaps in AQI’s statutory authority and partly because APHIS chooses not to fully exercise the AQI fee authorities), thus requiring APHIS and CBP to continue to rely on appropriated funds to bridge the historical gap of nearly 40 percent between AQI program costs and collections. Similarly, because the reimbursable overtime rates for agriculture inspections are not aligned with personnel costs to perform the inspections and because not all ports consistently charge for those reimbursable services or collect payment in a timely way, a portion of those costs is subsidized by CBP’s appropriation. Absent authority either to charge all pathways for AQI services or to permit cross-subsidization among pathways when setting fees—that is, allowing fees paid by some users to be set to recover the costs of services provided to other users—the AQI program cannot recover its full costs and must continue to rely on appropriated funds. Furthermore, APHIS does not charge fees in all instances in which the authority exists to do so because administrative costs for collecting fees from certain passengers would be high and the statutory authority limits the recovery of such costs through fees assessed on vehicles in which passengers travel (a method CBP uses for some other inspection fees). 
Regular, timely, and substantive fee reviews are especially critical for programs—like AQI—that are mostly or solely fee funded to ensure that fee collections and program costs remain aligned. Although APHIS is to be commended for its in-depth review of the AQI user fees and program costs, until APHIS includes all imputed costs when setting fee rates and CBP ensures that its CMIS cost data accurately reflect program costs at all ports, APHIS will not be able to set fees to recover the full costs of AQI services. Because the fee revenues distributed to each agency are not aligned with costs and funding of the AQI reserve is greater than the level needed to address realistic program risks, CBP relies more heavily on its appropriation to fund AQI costs that could otherwise be funded with AQI fee revenues. APHIS and CBP have not followed their 2005 agreement to allocate fee collections based on each agency’s costs, essentially overfunding APHIS and underfunding CBP. Finally, the AQI program is forgoing revenues because CBP and APHIS do not ensure that all fees due are collected. APHIS does not collect railcar fees for the arrival of all railcars in accordance with regulations, and CBP does not use available controls to verify that commercial trucks have paid the AQI fee. Similarly, because CBP does not use available information to verify that all arriving private aircraft and private vessels have valid customs decals, the agency does not have assurance that it is collecting all fees that are due. Until APHIS and CBP improve oversight of these collection processes, they will continue to forgo revenue due the government, which will increase reliance on appropriated funds to cover program costs. 
In light of declining discretionary budgets, to reduce or eliminate the AQI program’s reliance on taxpayer funding, Congress should consider amending the AQI fee authority to allow the Secretary of Agriculture to set fee rates that recover the aggregate estimated costs of AQI services, that is, the full costs of the AQI program. Congress should also consider amending USDA’s authorization to permit assessing AQI fees on bus companies, private vessels, and private aircraft and including in those fees the costs of AQI services for the passengers on those buses, private vessels, and private aircraft. To help ensure that USDA considers full AQI program costs when setting AQI fee rates, we recommend that the Secretary of Agriculture include all imputed costs borne by other federal agencies and attributable to the AQI program, and that the Secretary of Homeland Security direct CBP to update and widely disseminate comprehensive guidance to ports on the correct use and review of CMIS codes. Specifically, the guidance should reiterate that a portion of CBP officers’ primary inspection time should be charged to agriculture and should cover how, and with what frequency, ports should conduct work studies to determine the correct allocation of staff time. CBP should also consider making CMIS training mandatory for CMIS practitioners. 
To help ensure that fee rates are set to recover program costs, as authorized, and to enhance economic efficiency and equity with consideration of the administrative burden, we recommend that the Secretary of Agriculture establish an AQI cruise passenger fee aligned with the costs of inspecting cruise passengers and vessels and collected using the existing processes for collecting cruise passenger customs fees; establish a fee for passenger railcars aligned with the costs of inspecting rail passengers and railcars and collected using the existing processes for collecting passenger railcar customs fees; eliminate caps on the commercial vessel and commercial rail AQI fees; set truck fee rates to recover the costs of AQI services for trucks while maintaining a financial incentive for trucks to use transponders; and recover the costs of AQI services for buses and bus passengers by either establishing a bus passenger fee that is remitted by the bus companies or seeking legislative authority to establish a bus fee that covers the costs of bus passenger inspections. To align reimbursable overtime revenues with the costs of those agriculture inspections, we recommend that the Secretaries of Agriculture and Homeland Security work together to amend overtime regulations for agriculture services so that reimbursable overtime rates that CBP and APHIS charge are aligned with the costs of those services; and the Secretary of Homeland Security ensure that ports consistently charge for agriculture overtime services that are eligible for reimbursement and deny agriculture-related reimbursable overtime inspection services to entities with bills more than 90 days past due, consistent with APHIS regulations. 
To help ensure that AQI fee rates are structured to maximize economic efficiency and equity while minimizing administrative burden, we recommend that the Secretary of Agriculture charge user fees for AQI permit applications; charge user fees for treatment services; and charge user fees for the costs of monitoring compliance agreements for regulated garbage. To better align the distribution of AQI fee revenues with AQI costs, we recommend that the Secretaries of Agriculture and Homeland Security work together to allocate AQI fee revenues consistent with each agency’s AQI costs, and that the Secretary of Agriculture establish an AQI reserve target that is more closely aligned with program needs and risks, based on past experience. To ensure that inspection fees are collected when due, we recommend that the Secretary of Agriculture revise USDA’s processes for collecting AQI railcar fees to conform to USDA regulations and that the Secretary of Homeland Security establish internal controls to alert personnel when fees are not paid, and use available information to verify that arriving trucks, private aircraft, and private vessels pay applicable inspection user fees. We provided a draft of this report to the Secretaries of Agriculture and Homeland Security for their review and comment. We received written comments from USDA and DHS, which are reprinted in appendixes III and IV, respectively. In addition, both agencies provided technical comments, which we incorporated as appropriate. DHS concurred with our recommendations and described corrective actions the agency plans to take to implement them. USDA agreed with the majority of the recommendations we made to the Secretary of Agriculture. However, USDA said that with respect to nine of the recommendations, the agency is preparing to initiate notice and comment rulemaking regarding the AQI fees. Therefore, USDA stated, it would be inappropriate to firmly commit to any particular component or a specific amount of fees at this time. 
USDA commented that, at this time, it cannot agree with our recommendation to establish a fee to recover the costs of AQI services for buses and bus passengers, but that it would work with CBP to assess whether USDA should seek authority to establish a bus fee that covers the cost of bus passenger inspections and whether such a fee would be practical. As we stated in our report, we recognize that USDA may not currently have the authority to assess this fee on the vehicles rather than the passenger. We continue to believe that APHIS should recover the costs of AQI services for bus passengers, as authorized, or seek legislative authority to establish a bus fee that covers the costs of bus passenger inspections. We continue to encourage APHIS and CBP to explore options for implementing such a fee in a way that would minimize the administrative burden of the fee. USDA disagreed with our recommendation to charge user fees for the costs of monitoring compliance agreements for regulated garbage, stating that compliance agreements save money because the agency does not need to provide a service, and that charging a fee to those that provide the service would be a disincentive to enter into such an agreement. However, APHIS regulations state that any person engaged in the business of handling or disposing of garbage must first enter into a compliance agreement with APHIS. USDA further asserted that recovering the costs of compliance agreements through the current AQI fees is fair and simple. However, recovering the costs of compliance agreements through AQI fees assessed on cargo pathways (air, vessel, truck, and rail) benefits entities that handle garbage for users that do not pay AQI fees, including private aircraft and private vessels. We continue to believe that the users of these specialized services should be charged directly, consistent with Circular A-25, promoting efficiency and equity by ensuring that the beneficiaries of the service pay for the service. 
We are sending copies of this report to the Secretaries of Agriculture and Homeland Security, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. Should you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To analyze the Agricultural Quarantine Inspection (AQI) fees, we assessed (1) the AQI fees currently charged and how, if at all, the proposed revisions would improve efficiency, equity, and revenue adequacy, and reduce administrative burden; (2) how, if at all, changes to the allocation of fee revenues between the Department of Agriculture (USDA) and the Department of Homeland Security (DHS) could improve efficiency, equity, and revenue adequacy, and reduce administrative burden; and (3) the extent to which Animal and Plant Health Inspection Service (APHIS) and U.S. Customs and Border Protection (CBP) fee collection processes provide reasonable assurance that all AQI fees due are collected. To address these objectives, we analyzed the AQI fees using principles of effective user fee design—specifically, efficiency, equity, revenue adequacy, and administrative burden—on which we previously reported. These principles draw on various laws and federal guidance. To assess the current AQI fees and proposed revisions, we examined documentation provided by APHIS related to the activity-based cost model APHIS and the contractor used to analyze AQI costs and the AQI fee structure; observed a demonstration of CostPerform, the software used for the activity-based costing; and analyzed cost and fee revenue data and documentation provided by both APHIS and CBP. 
We also interviewed APHIS officials responsible for the review and fee-setting process. To assess the reliability of data from the activity-based costing model, we reviewed whether costs were ascribed to activities in a logical manner and discussed the reliability of the data with knowledgeable agency officials. Based on these assessments, we determined that the AQI cost data from the activity-based costing model were sufficiently reliable for our purposes. We reviewed the analysis of the economic impact of proposed changes to fee rates, which was performed as part of the fee review. This analysis evaluated the economic impact of proposed fee scenarios on both the U.S. economy and selected industries to determine whether any fee scenarios considered would create an unreasonable burden on these industries or consumers. Specifically, a contractor analyzed short- and long-run economic impacts by evaluating the impact on the price of individual goods and services, corresponding changes in U.S. consumer purchases, and the resulting impact throughout the U.S. economy. All scenarios showed economic impacts that were very small relative to the size of the affected sectors and had an overall minimal impact on the national economy. Because the contractor found the effects to be minimal, it did not apply behavioral responses to fee price changes when analyzing the proposed fees. To examine how changes to the allocation of fee revenue could improve efficiency, equity, and revenue adequacy, and reduce administrative burden, we compared the existing and proposed fee structures to applicable statutes and regulations and to criteria from GAO’s User Fee Design Guide. We used APHIS and CBP data to analyze AQI costs and fee collections. We also discussed fee design options with APHIS and CBP officials. 
Further, we analyzed the extent to which CBP attributes a portion of primary inspection time to agriculture-related cost accounting codes by analyzing data from CBP’s cost management information system. In addition, to examine how APHIS and CBP fee collection processes have ensured that all AQI fees are collected, we interviewed APHIS and CBP officials, examined documents related to fee collection procedures, and observed fee collection processes at ports of entry. To assess the reliability of the CBP and APHIS data, we analyzed the data for internal consistency and discussed the data with CBP and APHIS officials. We also compared the APHIS data on collections and obligations of AQI fee revenue and AQI reserve balances to another published source of this information and found them to be consistent. Based on these assessments, we determined that the CBP and APHIS data were sufficiently reliable for our purposes. To address all of these objectives, we visited a nonprobability sample of seven ports of entry to observe CBP inspection procedures and discuss issues related to AQI user fees. We determined that, for our purposes and considering resource constraints, seven is a sufficient number of site visit ports. We visited the ports of Blaine, Washington; Miami, Florida; Otay Mesa in San Diego, California; Port Huron, Michigan; San Diego, California; San Ysidro, California; and Seattle, Washington. We selected these ports of entry based on entry pathways, particularly those that charge fees, such as commercial rail and commercial vessels; volume of entries; diversity of inspection challenges; and geographic proximity to each other. We also visited APHIS Plant Protection and Quarantine (PPQ) offices in Miami, San Diego, and Seattle to understand the AQI-related work being conducted by APHIS in the field. 
We determined that a nonprobability sample was sufficient for our purposes because we used the site visit information to understand commonalities and differences in inspection practices and fee collection processes at various ports and for illustrative examples of how fee design and implementation affect equity, efficiency, revenue adequacy, and administrative burden. Because we used a nonprobability sample, the information we obtained from these visits cannot be generalized to other CBP ports of entry. On the site visits, we interviewed CBP and APHIS officials and observed agriculture inspections and AQI fee collection processes. We also interviewed AQI program stakeholders, including ship agents and customs brokers. We conducted a content analysis on our site visit interviews and observations to identify common themes. We conducted this performance audit from April 2012 to March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Jacqueline M. Nowicki, Assistant Director, and Susan Etzel, Senior Analyst in Charge, managed all aspects of this assignment. Laurel Plume and Alexandra Edwards made key contributions to this report. Michelle Cooper, Kate Lenane, Felicia Lopez, Mary Denigan-Macauley, Rebecca Gambler, Sarah McGrath, Donna Miller, Cynthia Saunders, Anne Stevens, and Jack Warner also made important contributions.
The AQI program guards against agriculture threats by inspecting international passengers and cargo at U.S. ports of entry, seizing prohibited material, and intercepting foreign agricultural pests. The program, which cost $861 million in 2011, is funded from annual appropriations and user fees. GAO has reported several times on the need to revise the fees to cover program costs as authorized. In 2010, APHIS initiated a review of AQI costs and fee design options. APHIS and CBP are considering options for a new fee structure. Pending departmental approval, APHIS expects to issue a proposed rule in fall 2013. GAO was asked to examine issues related to the AQI fees. This report examines (1) the fees currently charged and proposed revisions; (2) how fee revenues are allocated between the agencies; and (3) the extent to which fee collection processes provide reasonable assurance that all AQI fees due are collected. To do this, GAO reviewed AQI fee and cost data, and relevant laws, regulations, and policies; observed inspections at ports of entry; and interviewed APHIS and CBP officials. GAO's analysis of the Agricultural Quarantine Inspection (AQI) fee and cost data revealed a more than $325 million gap between fee revenues and total program costs in fiscal year 2011, or 38 percent of AQI program costs. 
The program, which is co-administered by the Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) and Department of Homeland Security (DHS) Customs and Border Protection (CBP), has a gap for several reasons: (1) APHIS's authority does not permit it to charge all persons seeking entry to the United States (e.g., pedestrians) and does not permit it to charge the costs of those inspections to others; (2) APHIS has chosen not to charge some classes of passengers, citing administrative fee collection difficulties; (3) CBP does not charge a portion of all primary inspections to agriculture functions, as required by CBP guidance; (4) APHIS does not consider all imputed costs (that is, costs incurred by other agencies on behalf of the AQI program) when setting fees; and (5) the allowable rates for overtime services are misaligned with the personnel costs of performing those services. APHIS is considering fees that would better align many, but not all, AQI fees with related inspection activity costs. APHIS and CBP can take additional steps to better align fees with costs; however, additional authority will be needed to fully recover all program costs. Contrary to APHIS-CBP agreements and APHIS policy, the distribution of fee collections between CBP and APHIS is significantly misaligned with AQI costs. In 2005, CBP and APHIS agreed to divide AQI collections in proportion to each agency's share of AQI costs. However, in fiscal year 2011, for example, CBP incurred over 80 percent of total program costs but received only 60 percent of collections, while APHIS incurred 19 percent of program costs but retained 36 percent of collections. CBP bridges the gap between its AQI costs and its share of the fee revenues with its annual appropriation. 
In keeping with its authorities and with good practices for fee-funded programs, APHIS carries over a portion of AQI collections from year to year to maintain a shared APHIS-CBP reserve to provide a cushion against unexpected declines in fee collections. APHIS's stated goal is to maintain a 3- to 5-month reserve, but the preliminary fee proposal would fund the reserve at a level higher than the 5-month maximum. Further, the 5-month maximum target balance is the amount officials say they would need to completely shut down the program and therefore does not reflect realistic program risks. Moreover, this amount is more than what was required to cover shortfalls during both the 2009 financial crisis and the events of September 11, 2001, and funding it would increase reliance on appropriated funds to cover current program costs. APHIS's and CBP's collection processes do not provide reasonable assurance that all AQI fees due are collected. Specifically, APHIS does not collect AQI fees for railcars consistent with its regulations, resulting in a revenue loss of $13.2 million in 2010. Further, CBP does not verify that it collects fees due for every commercial truck, private aircraft, and private vessel, resulting in an unknown amount of revenue loss annually. CBP has tools available to help remedy these issues but does not require their use. Until APHIS and CBP improve oversight of these collection processes, they will continue to forgo revenue due the government, which will increase reliance on appropriated funds to cover program costs. GAO is making a number of recommendations aimed at more fully aligning fees with program costs, aligning the division of fees between APHIS and CBP with their respective costs, and ensuring that fees are collected when due. Further, GAO suggests Congress amend the AQI fee authority to allow the Secretary of Agriculture to set fee rates to recover the full costs of the AQI program. USDA and DHS generally agreed with the recommendations.
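The reserve discussion above rests on a simple conversion between a reserve balance and months of program operating cost. A minimal sketch, assuming the $861 million fiscal year 2011 program cost from the report as the cost base (the helper name is ours, not APHIS's):

```python
def reserve_months(reserve_balance, annual_program_cost):
    """Express a reserve balance as months of program operating cost,
    the metric behind APHIS's 3- to 5-month reserve target."""
    return reserve_balance / (annual_program_cost / 12)

# Illustrative only: using the report's $861 million program cost
# (figures in millions), a 5-month reserve, the stated maximum, works
# out to about $359 million.
five_month_target = (861 / 12) * 5
```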
The overall process used to implement USERRA is as follows. Outreach and resolution of informal complaints. DOD and DOL share responsibility for outreach—the education of servicemembers and employers about their respective responsibilities under USERRA. Much of DOD's outreach is accomplished through its Employer Support of the Guard and Reserve (ESGR) organization, which performs most of its work through over 4,000 volunteers. DOL conducts outreach through its Veterans' Employment and Training Service (VETS) investigators, who are located nationwide. These investigators conduct briefings to educate employers and servicemembers about USERRA requirements and responsibilities and handle service-related employment and reemployment questions that are directed to their offices. Servicemembers who have USERRA-related issues with their employers can file informal complaints with DOD's ESGR. In our February 2007 report, we noted that nearly 10,000 informal complaints had been filed with ESGR in fiscal years 2004 and 2005. A subgroup of ESGR's specially trained volunteers serves as impartial ombudsmen who informally mediate USERRA issues that arise between servicemembers and their employers. Formal complaints and prosecution. When ESGR ombudsmen cannot resolve complaints informally, they notify servicemembers about their options. Servicemembers can file a formal complaint with DOL or file complaints directly in court (if they involve nonfederal employers) or with the Merit Systems Protection Board (if they involve federal executive branch employers). Under a federal sector demonstration project established by the Veterans Benefits Improvement Act of 2004, DOL investigates complaints against federal executive branch agencies for individuals whose social security numbers end in even numbers, and OSC is authorized to directly receive and investigate complaints and seek corrective action for individuals whose social security numbers end in odd numbers.
When a servicemember files a formal complaint with DOL, one of VETS's 115 investigators examines and attempts to resolve it. If VETS's investigators are unable to resolve servicemember complaints, DOL is to inform servicemembers that they may request to have their complaints referred to DOJ (for complaints against private sector employers or state and local governments) or to OSC (for complaints against federal executive branch agencies). Before complaints are sent to DOJ or OSC, they are reviewed by a VETS regional office for accuracy and sufficiency and by a DOL regional Office of the Solicitor, which assesses the legal basis for complaints and makes an independent recommendation. If DOJ or OSC determines that the complaint has merit, it will attempt to resolve the complaint without litigation and, if unsuccessful, represent the complainant in court (for those referred to DOJ) or before the Merit Systems Protection Board (for those referred to OSC). Figure 1 shows servicemembers' options for obtaining federal assistance with their USERRA complaints. Agency databases and reporting requirement. Each of the four federal agencies responsible for assisting servicemembers under USERRA maintains an automated database with complaint information. Both DOD and DOL have electronic complaint files that are stored in automated systems with query capabilities. The Secretary of Labor, in consultation with the U.S. Attorney General and the Special Counsel, prepares and transmits a USERRA annual report to Congress on, among other matters, the number of USERRA claims reviewed by DOL and, during the current demonstration project, by OSC, along with the number of claims referred to DOJ or OSC. The annual report is also to address the nature and status of each claim, state whether there are any apparent patterns of violation of the USERRA provisions, and include any recommendations for administrative or legislative action that the Secretary of Labor, the U.S.
Attorney General, or the Special Counsel consider necessary to effectively implement USERRA. Although USERRA defines individual agency roles and responsibilities, it does not make any single individual or office accountable for maintaining visibility over the entire complaint resolution process. In our October 2005 report, we noted that the ability of federal agencies to monitor the efficiency and effectiveness of the complaint process was hampered by a lack of visibility resulting, in part, from the segmentation of responsibility for addressing complaints among multiple agencies. Moreover, from the time informal complaints are filed with DOD's ESGR through final resolution of formal complaints at DOL, DOJ, or OSC, no one entity has visibility over the entire process. We found that the agency officials who are responsible for the complaints at various stages of the process generally have limited or no visibility over the other parts of the process. As a result, federal agencies have developed agency-specific output goals rather than cross-cutting goals directed toward resolving servicemembers' complaints. For example, agency goals address the complaint processing times of each stage of the process, rather than the entire time that elapses while servicemembers wait to have their complaints addressed. From the servicemember's perspective, however, the relevant measure is the total time that has passed since the initial complaint was filed. In October 2005, we reported that more than 430 of the 10,061 formal complaints filed with DOL between October 1, 1996, and June 30, 2005, were closed and reopened, and 52 complaints had been closed and reopened two or more times.
Our analysis of those 52 complaints showed that the processing times averaged about 3 to 4 months, but the total elapsed times that servicemembers waited to have their complaints fully addressed averaged about 20 to 21 months from the time they first filed their initial formal complaints with DOL until the complaints were fully addressed by DOL, DOJ, or OSC. We have previously suggested and continue to believe that Congress should consider designating a single individual or office to maintain visibility over the entire complaint resolution process from DOD through DOL, DOJ, and OSC. We believe this would encourage agencies to focus on overall results rather than agency-specific outputs and thereby improve federal responsiveness to servicemember complaints that are referred from one agency to another. In response to this matter, in our 2005 report, both DOL and OSC were supportive, and both agencies noted that they had the expertise to oversee the USERRA complaint resolution process. However, DOL stated that with the mandated demonstration project ongoing, it would be premature to make any suggestions or recommendations for congressional or legislative action until the project has been completed. DOD and DOJ did not provide comments on this matter. Integral to getting servicemembers the help they need is educating them and their employers on their respective responsibilities under USERRA. Since 2002, we have reported on DOD's need to obtain complete and accurate information on reservists' civilian employers to better target its outreach efforts. Accurate, complete, and current civilian employer information is important to DOD to improve its ability to target outreach to employers, to make informed decisions concerning which reservists should be called for active duty to minimize the impact that mobilizations might have on occupations such as law enforcement, and to determine how businesses may be affected by reserve activation.
As we recommended in our 2002 report, DOD implemented regulations that required the reporting and collection of employer information for reserve personnel. Additionally, DOD established compliance goals for these servicemembers. We noted in our February 2007 report that the percentage of servicemembers reporting employer information to DOD had increased, but most reserve components had still not reached their compliance goals. In addition, we found that employment data were not necessarily current because some reservists were not aware of requirements to update their employer information and the services had not established a formal mechanism to remind reservists to update this personnel information as necessary to reflect changes in their current employment. To improve the reporting of National Guard and Reserve employment information, we recommended that the Secretary of Defense direct the Office of the Assistant Secretary of Defense for Reserve Affairs to establish specific time frames for reservists to report their employment data, set specific time frames for reserve components to achieve the established compliance reporting goals, and direct the service components to take action to ensure reporting compliance. In response to this recommendation, DOD indicated at the time of our report that its current policy on employer reporting established compliance goals. We noted in our report that DOD needed to establish a new deadline by which reservists must report their employer information to DOD and set specific time frames for reserve components to achieve the established compliance reporting goal. In addition, to encourage reservists to keep their employer data current, we recommended that DOD instruct all military departments to establish a formal review mechanism that would require all reservists to review and update at least annually their reported employment-related information. 
At the time of our February 2007 report, DOD was in the process of revising its policy on civilian employer reporting to require an annual review of reported employer information. DOD provides USERRA outreach and education to servicemembers using several mechanisms, including a toll-free information line and individual and group briefings. DOD monitors the extent to which it reaches this population and the occurrence of USERRA-related problems by including questions on these areas in its Status of the Forces survey, which is periodically conducted to identify issues that need to be addressed or monitored. We noted in our 2005 report that survey questions offer the potential to provide insight into compliance and employer support issues. However, questions on the surveys vary from year to year and have not always included those pertaining to USERRA compliance and employer support. To gauge the effectiveness of federal actions to support USERRA by identifying trends in compliance and employer support, we recommended that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to include questions in DOD’s periodic Status of Forces Surveys to determine the extent to which servicemembers experience USERRA-related problems; if they experience these problems, from whom they seek assistance; if they do not seek assistance, why not; and the extent to which servicemembers’ employers provide support beyond that required by the law. In response to this recommendation, DOD incorporated these additional USERRA-related questions in its June 2006 Status of the Forces survey. Because the resolution of servicemember complaints could involve multiple federal agencies, it is important that the agencies be able to effectively share and transfer information to efficiently process servicemember complaints. 
In October 2005, we found that the automated systems that DOD, DOL, DOJ, and OSC used to capture data about USERRA complaints were not compatible with each other. As a result, information collection efforts were sometimes duplicated, which slowed complaint processing times. To increase federal agency responsiveness to USERRA complaints, we recommended that DOD, DOL, DOJ, and OSC develop a system to allow the electronic transfer of complaint information. At the time of our report, DOL and OSC concurred with this recommendation, DOJ did not provide comments, and DOD deferred to the other agencies. We noted in our February 2007 report that DOL had implemented an enhancement to its USERRA database in October 2006 to enable the four USERRA coordinating agencies to electronically transfer case information between agencies. The database enhancement allowed DOD, DOL, DOJ, and OSC to access and update the status of cases using the Internet and produce a report containing aggregate USERRA complaint data on the cases over which they have jurisdiction. We further noted in this report that, despite these enhancements to the USERRA database to allow the electronic transfer of complaint information between agencies, DOD only had visibility over those cases that originated with informal complaints to DOD. Even though DOD shares responsibility with DOL for USERRA complaints, DOD did not have access to all USERRA complaint data, and we recommended that DOL provide these data to DOD’s ESGR. In response to this recommendation, in October 2007, DOL provided DOD with the ability to view and download aggregate information on all USERRA cases in its database. In addition, in October 2005, we reported that when a complaint is referred from DOL to OSC or DOJ, the agencies are unable to efficiently process complaints because they are forced to create, maintain, copy, and mail paper files to other DOL offices and to OSC and DOJ. 
To reduce administrative burden and improve oversight of USERRA complaints processing, we recommended that DOL develop a plan to reduce reliance on paper files and fully adopt the agency’s automated complaint file system. DOL concurred with this recommendation and, as a result, is developing an electronic case record system, scheduled for completion in October 2008, that will allow all agencies assigned to the case an opportunity to review documents and add investigative notes or records. To effectively identify trends in issues facing servicemembers, it is important in a segmented complaint resolution process that the complaint data generated by each of the federal agencies be sufficiently comparable. In our February 2007 report, we noted that the complaint categories used by each of the four agencies could not be uniformly categorized to reveal trends in USERRA complaints. In particular, we noted that the complaint data collected by DOD and DOL, the two agencies that see the highest volume of cases, were not categorized in a way that is conducive to meaningful comparison. Specifically, we found that the two agencies use different categories to identify reservists’ USERRA complaints for issues such as being refused job reinstatement, denied an appropriate pay rate, or being denied vacation time. To allow for the analysis of trends in reporting USERRA complaints, we recommended that DOD and DOL adopt uniform complaint categories in the future that would allow aggregate trend analysis to be performed across the databases. At the time of our report, both DOD and DOL agreed with this recommendation. Since that time, DOD and DOL have collaborated to identify common complaint categories that will allow both agencies to match similar USERRA complaints. According to officials from both DOD and DOL, these complaint categories are expected to be pilot tested in fiscal year 2008. 
As reservists continue to be exposed to serious injury in operations in Iraq and Afghanistan, the ability to identify disability reemployment complaints becomes more critical. However, we noted in our February 2007 report that the four federal agencies responsible for assisting servicemembers with USERRA complaints could not systematically record and track disability-related complaints. Additionally, we found that these agencies do not distinguish disability-related complaints from other types of complaints for tracking and reporting purposes. For example, the servicemember must indicate that the case involves a disability for it to be classified as such, and these complaints may not be distinguishable from other types of complaints because a single USERRA complaint may involve a number of issues, which complicates the agency's classification of the case. Further, disability-related complaints are not identified using consistent and compatible complaint categories. DOD classifies USERRA disability-related complaints within three categories (medical benefits, job placement, and time limits for reemployment), while DOL uses a single category, reasonable accommodation and retraining for disabled. To provide agencies with better information about disability-related employment complaints, we recommended that DOL develop a system for recording and tracking these complaints and share it with the other agencies that implement USERRA. DOL concurred with this recommendation at the time of this report. According to DOL officials, DOL's USERRA database identifies disability claims, and the agency has recently provided DOD, OSC, and DOJ with access to this system. As previously mentioned, the Secretary of Labor is required to provide an annual report to Congress that includes information on the number of USERRA complaints reviewed by DOL, along with the number of complaints referred to DOJ or OSC.
We noted in our February 2007 report that DOL's report to Congress does not include information on informal complaints filed with ESGR. Therefore, the complaint data that DOL reported to Congress for fiscal years 2004 and 2005 did not include 80 percent (9,975) of the 12,421 total informal and formal USERRA complaints filed by reservists during that period. Without data from ESGR, Congress has limited visibility over the full range of USERRA issues that reservists face following deployment. Further, without these data, Congress may lack the information it needs for its oversight of reserve employment matters. To gain a full perspective of the number and nature of USERRA complaints filed by reservists in gaining reemployment upon returning from active duty, we suggested that Congress consider amending the reporting requirement to require DOL to include data from DOD's ESGR in its annual report to Congress. In response to this matter for congressional consideration, Members of Congress are considering changes to the legislation. In addition to DOL's report to Congress not reflecting informal USERRA complaints, we identified data limitations in our July 2007 report that affected the quality of information reported to Congress and that could adversely affect Congress's ability to assess how well federal sector USERRA complaints are processed and whether changes are needed. DOL provides information in its annual report to Congress on the number and percentage of complaints opened by type of employer, issues raised—such as discrimination or refusal to reinstate—outcome, and total time to resolve. We found that the number of federal sector complaints shown in DOL's USERRA database from February 8, 2005, through September 30, 2006, exceeded the number of unique claims it processed during the period of our review. Duplicate, reopened, and transferred complaints accounted for most of this difference.
Also, in our review of a random sample of case files, we found that the dates recorded for case closure in DOL's USERRA database did not reflect the dates on the closure letters in 22 of 52 sampled complaints and that the closed code, which DOL uses to describe the outcomes of USERRA complaints (e.g., granted, settled, no merit, or withdrawn), was not sufficiently reliable for reporting specific outcomes of complaints. To ensure that accurate information on USERRA complaints' processing is available to DOL and to Congress, we recommended in our July 2007 report that the Secretary of Labor direct the Assistant Secretary of Veterans' Employment and Training to establish a plan of intended actions with target dates for implementing internal controls to ensure that DOL's USERRA database accurately reflects the number of unique USERRA complaints filed annually against federal executive branch agencies, the dates those complaints were closed, and the outcomes of those complaints. In response to our recommendation, DOL issued a memo from the Assistant Secretary of Veterans' Employment and Training in July 2007 instructing investigators to ensure that the closed date entered into DOL's USERRA database matches the date on the closure letter to the servicemember, and DOL conducted mandatory training on this memo beginning in August 2007. Further, DOL officials told us that DOL's fiscal year 2007 annual report will count reopened complaints as a single complaint if brought by the same individual, against the same employer, and on the same issue. We reported in July 2007 that in cases where servicemembers sought assistance from DOL and the agency could not resolve the complaints, DOL did not consistently notify servicemembers in writing of their right to have their unresolved complaints against federal executive branch agencies referred to OSC or to bring their claims directly to the Merit Systems Protection Board.
Specifically, our review of a random sample of complaint files showed that DOL failed to notify servicemembers in writing in half of the unresolved complaints and notified others of only some of their options. In addition, we found that DOL's USERRA Operations Manual failed to provide clear guidance to its investigators on when to notify servicemembers of their rights and the content of the notifications. In July 2007, we also reported that DOL has no internal process to routinely review investigators' determinations before claimants are notified of them and noted that this lack of review could have contributed to DOL's inconsistent practice of notifying servicemembers of their rights to referral. We recommended that the Secretary of Labor direct the Assistant Secretary for Veterans' Employment and Training to (1) require VETS's investigators to undergo mandatory training on the procedures to be followed concerning notification of rights to referral, (2) incorporate into the formal update to DOL's USERRA Operations Manual guidance concerning the notification of rights to referral, and (3) develop and implement an internal review mechanism for all unresolved complaints before servicemembers are notified of determinations and complaints are closed. Since that time, DOL has taken the following actions: issued a memo in July 2007 from the Assistant Secretary for Veterans' Employment and Training to regional administrators, senior investigators, and directors concerning case closing procedure changes, including standard language to use to ensure that servicemembers (federal and nonfederal) are apprised of their rights; began conducting mandatory training on the memo in August 2007; incorporated the policy changes into the revised Manual, which according to DOL officials is expected to be released in January 2008; and, according to DOL officials, beginning in January 2008, all claims are to be reviewed before the closure letter is sent to the claimant. These are positive steps.
It is important for DOL to follow through with its plans to ensure that clear and uniform guidance is available to all involved in processing USERRA complaints. Mr. Chairman, Senator Enzi, and Members of the Committee, this concludes our remarks. We will be pleased to take questions at this time. For further information regarding this statement, please contact Brenda Farrell at 202-512-3604 or [email protected] or George Stalcup at 202-512- 9490 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making contributions to this testimony include Laura Durland, Assistant Director; Belva Martin, Assistant Director; James Ashley; Karin Fangman; K. Nicole Harms; Kenya Jones; Mae Jones; Ronald La Due Lake; Joseph Rutecki; Tamara F. Stenzel; and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since September 11, 2001, the Department of Defense (DOD) has mobilized more than 500,000 National Guard and Reserve members. As reservists return to civilian life, concerns exist about difficulties with their civilian employment. The Uniformed Services Employment and Reemployment Rights Act (USERRA) of 1994 protects the employment rights of individuals, largely National Guard and Reserve members, as they transition back to their civilian employment. GAO has issued a number of reports on agency efforts to carry out their USERRA responsibilities. DOD, the Department of Labor (DOL), the Department of Justice (DOJ), and the Office of Special Counsel (OSC) have key responsibilities under the act. GAO was asked to discuss the overall process that the agencies use to implement USERRA. Specifically, this testimony addresses (1) organizational accountability in the implementation of USERRA and (2) actions that the agencies have taken to improve their processes to implement USERRA. For this testimony, GAO drew from its most recent reports on USERRA. USERRA defines individual agency roles and responsibilities; however, it does not designate any single individual or office as accountable for maintaining visibility over the entire complaint resolution process. From the time informal complaints are filed with DOD's Employer Support of the Guard and Reserve through final resolution of formal complaints at DOL, DOJ, or OSC, no one entity has visibility over the entire process. The four agencies have generally been responsive to GAO's recommendations to improve the implementation of USERRA--on outreach to employers, data sharing and trend information, reporting to Congress, and the internal review of DOL's investigators' determinations of USERRA claims.
The Nuclear Waste Policy Act of 1982 directed DOE to identify and recommend to the President three sites for detailed investigation as a potential permanent repository for nuclear waste. In May 1986, the President selected three candidate sites, including Yucca Mountain, Nevada. However, faced with escalating costs and public resistance to the disposal program, in December 1987 the Congress amended the act by, among other actions, directing DOE to investigate only the Yucca Mountain site. Before the Congress enacted the 1987 amendments, DOE's Office of Civilian Radioactive Waste Management (OCRWM) had decided that a successful disposal program could best be ensured if DOE had a long-term partnership with a management contractor. DOE expected that the proposed management contractor would develop waste storage and transportation capabilities and manage the investigation of candidate repository sites. DOE also expected the number of contractors on the program to decline by transferring the work of some existing contractors to the management contractor. In December 1988, DOE selected a team of contractors—headed by Bechtel Systems Management, Inc., and including Science Applications International Corporation (SAIC)—as the disposal program's management contractor. However, TRW Environmental Safety Systems, Inc. (TRW) filed a bid protest asserting a serious conflict of interest on the part of the DOE chairman of the contract's Source Evaluation Board, a former SAIC employee. In an August 24, 1989, decision on the bid protest, the court agreed, stating that DOE could either award the contract to TRW or cancel the procurement action. (See app. I and II.) In February 1991, DOE awarded TRW a 10-year management contract for an estimated $1 billion to perform systems engineering, development, and management of a system to transport and permanently dispose of highly radioactive waste.
Even though there were strong indications that relationships between DOE employees and contractor employees might result in ethical problems, OCRWM officials failed to diligently monitor such relationships. The two most senior DOE officials in OCRWM's Yucca Mountain Project at the time—the Project Manager (1987-Oct. 1993) and the Deputy Project Manager (Oct. 1990-Jan. 1994)—had personal relationships with contractor employees that violated Executive Order 12674 and DOE regulations by creating at least the appearance of a loss of impartiality. For example, this Project Manager opposed the transition of work from SAIC to the management contractor, TRW, including the work performed by the SAIC official with whom he had a personal relationship. Additional relationships between DOE and contractor employees involved almost 18 percent of DOE's employees at the project. The DOE regulations direct employees to “. . . avoid any action, whether or not specifically prohibited by the regulations, which might result in, or create the appearance of: (1) using public office for private gain; (2) giving preferential treatment to any person; (3) impeding government efficiency or economy; (4) losing complete independence or impartiality; (5) making a government decision outside official channels; or (6) affecting adversely the confidence of the public in the integrity of the government.” DOE's Manager for the Yucca Mountain Project from 1987 to 1993 had a personal relationship with a female official of a major project contractor, SAIC. Our investigation and an April 1995 report by the DOE Office of Inspector General (OIG) concluded that because of this relationship, the Project Manager, as the Fee-Determining Official and the Contracting Officer's Technical Representative for the SAIC contract, had lost the appearance of impartiality in the performance of his official duties, contrary to regulations regarding the ethical conduct of employees.
Our investigation and the OIG report disclosed that the Project Manager and the SAIC official had traveled together frequently on official business (over 60 trips in fiscal years 1992 and 1993). Some of these trips involved little apparent business-related justification for the SAIC official, according to one of the Project Manager’s supervisors. Despite denials of anything other than a professional relationship, the officials’ public behavior repeatedly caused DOE, SAIC, and industry officials to raise concerns. According to the DOE Yucca Mountain Project Special Assistant for Institutional Affairs, the SAIC official functioned primarily as an administrative assistant to the Project Manager, rather than reporting to the Special Assistant as called for within the Yucca Mountain Project organizational structure. One of the Project Manager’s supervisors told us she was astonished to find that an SAIC official, while on official trips with the Yucca Mountain Project Manager, would do trivial tasks while her staff went unsupervised. The Project Manager opposed having several SAIC functions—among them the institutional and external affairs functions headed by the SAIC official—transitioned to TRW, the management contractor. He communicated that opposition to individuals who either were in a position to influence or participated in the decision not to transition certain functions, including that for which the SAIC official was responsible. According to SAIC lawyers, if the work had transitioned to TRW as planned, any SAIC employees forced to leave the company would have lost substantial pension and stock/stock option benefits and may have incurred tax liabilities arising from the forced sale of their SAIC stock. The Yucca Mountain Project Manager’s opposition to the transition of SAIC work to TRW put him in direct conflict with OCRWM’s then Director (Apr. 1990-Jan. 1993) and then Deputy Director (Nov. 1988-Oct. 1993). 
According to this former OCRWM Director, the Project Manager took SAIC’s side in its dispute with OCRWM management over transitioning SAIC work to TRW. The OCRWM Director also told us that he wanted the Project Manager to implement the management contract with TRW; and although the Project Manager never said no, he delayed repeatedly. The OCRWM Director stated that he did not recognize some of these problems until the end of his tenure as Director. Although OCRWM and Yucca Mountain Project officials had reason to be concerned about the relationship between the Yucca Mountain Project Manager and the SAIC official by 1991 or earlier, they took no formal action regarding the relationship until late 1993. In 1990 or 1991, an industry official expressed concern to the then OCRWM Deputy Director about the relationship between the Project Manager and the SAIC official. The Deputy Director took no action other than warning the Project Manager that he was traveling too much with the SAIC official. In 1990 or 1991, the DOE Director of Public Affairs for the Yucca Mountain Project Office cautioned the Project Manager about an appearance problem. Although the Director of Public Affairs stated that he had discussed this with OCRWM’s then Deputy Director, no action was taken, such as reporting this to the DOE OIG. In April 1993, OCRWM’s Deputy Director, based on his observations, cautioned the Project Manager. Further, although the then DOE Associate Director for Geologic Disposal, based in Las Vegas, Nevada, became aware of rumors about the relationship in June 1993, no investigation of the relationship was undertaken. During this time, the Project Manager disregarded the warnings he had received. In mid-September 1993, the Project Manager and the SAIC official engaged in a public altercation at the Phoenix, Arizona, airport. Shortly after that incident, the then Acting Director of OCRWM (Jan. 1993-Oct. 
1993) requested that the DOE OIG evaluate the relationship between the Project Manager and the SAIC official. On September 27, 1993, the Project Manager was removed from professional contact with the SAIC official and directed to meet with DOE counsel to discuss the relationship. Because the Project Manager told the counsel that he and the SAIC official were “only good friends,” the counsel concluded a recusal was not necessary. The counsel did, however, suggest to the Project Manager that he contact a DOE ethics counselor at headquarters for advice and counsel, which he never did. In October 1993, DOE took further action, removing the Project Manager from his position and detailing him to another DOE site. He was subsequently reassigned to the DOE Nevada Operations Office at a reduced grade. The Deputy Project Manager from 1990 to 1994 had a personal relationship with a female SAIC employee, beginning in 1984 when the deputy was a Yucca Mountain Branch Chief. Even though this open relationship was public knowledge as early as 1986, no action was taken to ensure that the relationship did not violate federal standards of conduct until 1991. DOE acted again in 1993 and January 1994, shortly after a report of the relationship was aired nationally on the MacNeil/Lehrer NewsHour. During the Deputy Project Manager’s relationship, the previously discussed Project Manager did not act on his deputy’s potential ethical problem. However, the deputy did execute a recusal in 1991 to meet a condition of his associate’s employment by a prospective employer. His associate was seeking a job with the project’s management contractor, TRW; and TRW had requested assurances of the Deputy Project Manager’s impartiality. Despite a DOE general counsel’s statement to him that there was no need for the recusal that the then Acting OCRWM Director had suggested, the deputy recused himself. 
His recusal removed him from decisions regarding the transition of work from SAIC to TRW; TRW’s contract award fee evaluation; and any decisions regarding his associate’s salary, bonuses, and benefits. A subsequent August 1993 recusal somewhat broadened these areas with regard to his associate’s position with TRW. However, in early 1994, the newly appointed Project Manager raised concerns about the adequacy of the 1993 recusal with regard to the expanded duties that he envisioned for the deputy position. The project’s newly appointed Chief Counsel/ethics officer determined that the recusal was not sufficient to ensure the deputy’s impartiality in the new duties. Thus, in late January 1994, the new Yucca Mountain Project Manager placed the Deputy Project Manager in a senior advisory position for which DOE deemed the recusal was sufficient. The former deputy retired in late 1994. Days before the September 1993 public incident involving the Project Manager and the SAIC official, OCRWM began to enforce DOE’s ethics regulations more actively. In doing so, it exposed a number of other relationships between DOE and contractor employees that posed potential ethical problems. In September 1993, the then Acting Director of OCRWM issued a memorandum entitled, “Ethics Requirements, Federal-Contractor Employee Relationships.” All OCRWM employees were required to sign and date the memorandum, indicating that they were aware of their responsibilities. By mid-1994, an internal memorandum by the Yucca Mountain Project Chief Counsel listed 14 relationships between DOE employees and employees of several contractors that might have created the appearance of a lack of impartiality and independence. These were in addition to the previously discussed relationships of the Project Manager and Deputy Manager and represented almost 18 percent of the 80 DOE Yucca Mountain Project employees. 
Upon examination, the Chief Counsel determined that four of these relationships required a recusal or waiver. The others were told that if they had any changes in positions or responsibilities, their cases would require a reexamination. The former Yucca Mountain Project Manager took other questionable actions while in that position. Specifically, he precipitated SAIC’s hiring of a project subcontractor, Integrated Resources Group (IRG), primarily because of IRG’s political connections that could provide him an opportunity to promote his positions, which were contrary to those of DOE. With those connections, the Project Manager went outside official channels to lobby the Congress for his concept of how the project should be run and funded. Further, the Project Manager’s lobbying activities included his improper attendance at a meeting with congressional and contractor officials to discuss the project’s future. The Project Manager disagreed with the information that OCRWM’s Directors were conveying to the Congress and the Secretary of Energy about the Yucca Mountain Project. He was concerned that the Secretary of Energy did not consider the waste program a major priority and that OCRWM’s then Acting Director (Nov. 1988-Mar. 1990) was not effective in communicating the progress being made on the project. The Project Manager also believed that opponents of the project were very effective in implying that the project was making little advancement. He encouraged project contractors to convey to the Congress and the Secretary of Energy the improvements that were being made on the project. Further, the Project Manager opposed the project’s management contract with TRW. Under the contract, SAIC, with whose official the Project Manager had a personal relationship, would have relinquished much of its work. According to OCRWM’s subsequent Director (Apr. 1990-Jan. 
1993), the Yucca Mountain Project Manager did not think that the OCRWM directorate knew what was best for the project. The Project Manager, according to this OCRWM Director, wanted to run the program, independent of Washington. The Project Manager’s desire to be the OCRWM director became a point of contention between the Project Manager and his then immediate supervisor, the OCRWM Deputy Director (Nov. 1988-Oct. 1993). According to this Deputy Director, he told the Project Manager several times to stop “seeking the OCRWM directorship.” The then OCRWM Director (Apr. 1990-Jan. 1993) said that the Project Manager would come to Washington just to lobby the Congress for himself and other things of interest to him. In early 1990, the Yucca Mountain Project Manager saw an opportunity to provide the Congress his perspective on the Yucca Mountain Project when he was approached by the president of IRG, a management consulting company, about doing technical work in the project. IRG’s president promoted his political connections, and the Project Manager said that IRG’s involvement would be in the best interest of the project. After the Project Manager determined that the IRG president did have political connections, he referred the individual to SAIC officials and encouraged them to hire IRG as a subcontractor. SAIC’s initial contract award to IRG—to evaluate project training requirements relative to the Nuclear Regulatory Commission’s licensing process—was made in March 1990 for $15,000. The SAIC Assistant Vice President responsible for licensing support activities, including work that was to be subcontracted to IRG, told us he doubted that SAIC would have contracted with IRG had it not been for the political contacts of IRG’s president and the Project Manager’s desire to have IRG in the project. 
He said that when SAIC considered IRG for a subcontract, it looked at IRG’s corporate capabilities, i.e., IRG had considerable expertise in nuclear facility licensing support and regulatory commitment tracking systems. He added, however, that the Project Manager’s expressed desire was the motivation behind SAIC’s consideration of IRG and except for that expressed desire, SAIC probably would not have subcontracted the work. Another SAIC official recalled clear direction from the Project Manager to SAIC that, if it was procedurally and legally possible, he wanted IRG in the project. Further, once IRG was under contract to SAIC, as IRG’s president told us, he became a direct congressional contact for the Project Manager. IRG’s president also told us that he believed his efforts, and those of SAIC’s hired lobbyists, were instrumental in bringing about a high-level DOE review of the management contract’s transition plan. As we reported in December 1994, DOE deferred transferring some SAIC work addressed in the plan until after a June 1993 performance assessment of SAIC. Once the assessment was performed, none of the assessed work was transferred from SAIC to TRW. SAIC awarded a second subcontract in July 1990 to IRG for over $224,000 after receiving consent from a DOE Contracting Officer pursuant to F.A.R. part 44. That part prescribes policies and procedures for consent to subcontract. “Consent to subcontract” is defined at 44.101 as the Contracting Officer’s written consent for the prime contractor to enter into a particular subcontract. In a May 30, 1990, letter, SAIC originally requested DOE’s consent to add a $185,000 amendment to IRG’s March 1990 subcontract for $15,000. A Yucca Mountain Project Contracting Officer stated in 1994 that such a request was “irregular” and that any modification over 20 percent of a contract’s value raises “concern” under the Competition in Contracting Act. DOE apparently never acted on SAIC’s request. 
In early July 1990, SAIC requested bids from the two predetermined firms that had bid on the March 1990 contract—IRG and a larger business in which SAIC held a 49-percent interest and whose unsalaried Chief Financial Officer at the time was an SAIC official in contracting. On July 12, 1990, SAIC requested by letter that DOE approve its decision to award the second time and materials subcontract to IRG as the low bidder for $224,450. In that letter, SAIC advised the Contracting Officer that only two firms had been solicited, largely to perform regulatory compliance strategy reviews and to develop/present related training at the project but also to recommend methods for successful interaction with various entities, including the Congress. On July 13, 1990, the DOE Contracting Officer approved the subcontract award. In determining whether to consent to a subcontract award on a time-and-materials basis, the Contracting Officer must exercise particularly careful and thorough consideration of several factors, including whether the contractor has a sound basis for selecting and determining the responsibility of the proposed subcontractor. (F.A.R. 44.202(a)(7)) Further, the “Competition in Subcontracting” clause at F.A.R. 52.244-5, which provides that contractors must select subcontractors on a competitive basis to the maximum extent practical and consider the objectives and requirements of each contract, was in SAIC’s contract. Although the second subcontract called for different services and the resulting amount of the award was significantly higher than that of the first subcontract, the Contracting Officer apparently did not object to SAIC’s method of competition. However, according to the project’s Chief Counsel, it was highly unusual for SAIC to have only two companies bid for the work that was subcontracted to IRG. The work was not very specialized, and a large pool of companies could have been considered. 
To have solicited only two bids, she said, defeats the purpose of competition to get the best price for the government. In April 1992, the DOE Yucca Mountain Project Manager engaged in lobbying activities outside proper official channels by attending a meeting that included congressional officials and representatives from SAIC and IRG to discuss the project’s future. The meeting—for which IRG’s president told us he was the catalyst—breached DOE policy on congressional contacts by senior DOE officials because the Project Manager did not obtain prior Secretarial approval to attend the meeting and because the meeting was not carried out in accordance with the existing policy. Participants stated that discussions at the meeting included (1) future funding for the Yucca Mountain Project and (2) how the Congress could alter the way the project was funded. The evidence shows that the Project Manager argued that the project was substantially underfunded, needing additional funding to meet its scheduled completion date, and discussed how best to use that and other funding. According to the IRG president, he believed that he too was helpful in explaining how additional funding would be used at the project. The Project Manager also discussed removing the project from the annual budget appropriations process and going to an off-budget funding that would give DOE direct access to the Nuclear Waste Fund, financed by the owners and generators of nuclear waste. This latter proposal would have required legislation to accomplish. The then Secretary of Energy told us that this meeting was a breach of DOE policy for interacting with Members of Congress and was unethical on the Project Manager’s part. The meeting was neither coordinated with DOE officials beforehand nor carried out according to the existing policy. 
When the Secretary learned after the fact that SAIC representatives had been present at the meeting, he was concerned because of the previously discussed corporate struggle over project work that was taking place between SAIC and the OCRWM management contractor, TRW. According to the former Secretary, the Project Manager acknowledged that he should have left the meeting when he saw who was there. The current Director, OCRWM; Deputy Director, OCRWM; and other DOE officials provided us their comments on a draft of this report. They were in general agreement with the contents of the draft but expressed concern that, with the draft’s identification of DOE officials by title alone, readers may incorrectly attribute the actions discussed to previous or subsequent officeholders. To address that overall concern, we have included in the report’s text the dates during which the respective individuals held office. (See also app. II.) In addition, where appropriate, we have clarified sections for which the officials provided additional details. We conducted this inquiry between May 1994 and April 1996 at several locations including the DOE/Office of Civilian Radioactive Waste Management, Washington, D.C.; DOE/Yucca Mountain Project Office and Nevada Operations Office, Las Vegas, Nevada; SAIC Corporate Headquarters, La Jolla, California, and SAIC, Las Vegas, Nevada; and IRG, Metairie, Louisiana, and Las Vegas, Nevada. We interviewed current and former DOE officials and staff and current SAIC and IRG officials. We reviewed DOE, SAIC, and IRG contract files, including solicitations for bids, evaluations of proposals, contractual scopes of work, and contract awards; IRG time and expense reports, and SAIC management and support services charges to DOE; documentary materials regarding the award and implementation of the OCRWM management and operating contract; and federal law and regulation regarding conflicts of interest and lobbying activities. 
In the course of our investigation, we coordinated with the DOE OIG. We will provide the OIG a copy of this report. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of the letter. At that time, we will send copies of the report to interested congressional committees and the Secretary of Energy. We will also make copies available to others on request. If you have further questions or concerns, please contact me at (202) 512-6722. Major contributors are listed in appendix III. An ethical problem surfaced in 1987 at the highest levels of OCRWM management: A conflict of interest by OCRWM’s chairman of the Source Evaluation Board for a Yucca Mountain management contract severely undermined OCRWM’s effort to award the contract in a timely manner. The board chairman, after returning to DOE from private industry, did not, as initially instructed by DOE, recuse himself from participation as a supervisory employee in certain DOE actions involving SAIC. This resulted in a bid protest and subsequent set-aside of the contract award. The board chairman also served as OCRWM’s Acting Director from November 1988 to March 1990. The chairman of the Source Evaluation Board for the Yucca Mountain contract, a longtime DOE employee, left the agency in about 1983 to work in private industry and returned to DOE on June 2, 1986. One employer while he was in the private sector was SAIC. While he was still in SAIC’s employ, immediately before his return to DOE, DOE’s Office of General Counsel advised him by letter that for 1 year after returning to DOE he could not participate as a supervisory employee in any DOE action in which SAIC was substantially, directly, or materially involved. However, DOE’s Office of General Counsel subsequently prepared an interoffice memorandum which concluded that its earlier advice was in error. 
The individual had become chairman of the Source Evaluation Board for the OCRWM management contract on May 1, 1987, which was about 1 month before the restriction was to expire. In December 1988, DOE selected Bechtel Systems Management, Inc., which had teamed with SAIC and other companies, as the management contractor. Shortly thereafter, TRW, an unsuccessful bidder, filed a bid protest and motion to enjoin DOE from awarding the contract to Bechtel. These were based, in part, on allegations that the chairman of the Source Evaluation Board had violated the DOE Reorganization Act’s conflict-of-interest provision at 42 U.S.C. 7216 by participating in a procurement that involved a previous employer within 1 year of joining DOE. That provision prohibits a supervisory employee for 1 year from participating in any DOE proceeding in which his former employer is substantially, directly, or materially involved. In August 1989, the Claims Court held that the board chairman/Acting Director had violated 42 U.S.C. 7216 by participating in the procurement involving SAIC. In its decision, the court rejected DOE’s pre-hearing attempt to reverse its first instruction. It said, “[O]ne might reasonably have expected that [the chairman], out of an abundance of caution, would have recused himself in any matter in which SAIC was involved during the restricted period. Unfortunately, such did not occur. . . .” (TRW Envtl. Safety Sys., Inc. v. United States, 18 Cl. Ct. 33, 63 (1989)). TRW, therefore, was granted its motion for a permanent injunction. The court ruled that DOE could not award the contract to any original bidder other than TRW. DOE awarded the management contract to TRW in February 1991. Barbara C. Coles, Senior Attorney 
Pursuant to a congressional request, GAO investigated allegations of conflicts of interest at the Department of Energy's (DOE) Yucca Mountain Project, focusing on whether: (1) the DOE Office of Civilian Radioactive Waste Management (OCRWM) properly implemented and adequately enforced federal standards of ethical conduct and DOE ethics regulations; and (2) failure to implement DOE ethics standards may have contributed to contract award and management abuses. GAO found that: (1) the Principles of Ethical Conduct for federal employees contained in Executive Order 12674 and DOE's regulations for ethical conduct by its employees prohibit, among other things, any action that might result in or create the appearance of the loss of impartiality or independence; (2) however, GAO's investigation and DOE's own reviews revealed the appearance of the loss of impartiality by DOE officials at the Yucca Mountain Project; (3) for example, both the Manager of DOE's Yucca Mountain Project from 1987 to October 1993 and the Deputy Manager from October 1990 to January 1994 had long-term personal relationships with personnel of major project contractors, including the Science Applications International Corporation (SAIC); (4) moreover, by 1994, DOE had learned that 14 additional, or almost 18 percent of, DOE employees at the project were engaged in relationships that might have created problems concerning the lack of impartiality and independence; (5) DOE determined that four of these relationships represented potential ethical problems, requiring recusal or waiver; (6) although senior OCRWM officials in Washington, D.C., knew by 1991 that potential ethical problems existed at the Yucca Mountain Project, they did not act to resolve the situation until late 1993; (7) further, GAO's investigation disclosed that this Yucca Mountain Project Manager had engaged in other questionable actions; (8) evidence shows that he encouraged SAIC to hire a certain subcontractor largely because of the 
subcontractor's stated political connections that could be used to promote the Project Manager's, as well as SAIC's, priorities for the project rather than DOE's priorities; (9) SAIC awarded a small subcontract to the firm after soliciting bids from it and a second firm in which SAIC held a major interest; (10) within a few months, and after soliciting bids from the same two firms, SAIC received DOE's consent to award a second contract, much larger in cost and different in scope, to the same subcontractor; (11) the Project Manager also violated DOE policy by improperly participating in a meeting with congressional and contractor officials, where he lobbied for his own positions concerning the project without, as required, first notifying his superiors.
The term “e-cigarettes” refers to a wide range of products that share the same basic design and generally consist of three main parts: a power source (typically a battery), a heating element containing a wick (to deliver liquid to the heating element), and a cartridge or tank containing liquid solution. Cartridges and liquid are often sold separately from e-cigarette devices containing the battery and heating element. Liquid typically contains nicotine, a solvent (e.g., propylene glycol, glycerin, or both), and flavorings. E-cigarettes heat liquids to deliver aerosol that usually contains nicotine and other chemical substances to the user by inhalation. E-cigarettes come in two main forms: Closed systems that include disposable e-cigarettes or require users to buy e-cigarette components, including the cartridge with liquid, from the same manufacturer or brand. Open systems that enable users to purchase the heating element, battery, tank, and liquid separately and from different manufacturers or brands. Industry experts we interviewed estimated that the size of the U.S. e-cigarette market in 2014 was about $2.5 billion. Although there are no definitive data on the relative proportions of imported and domestically manufactured e-cigarettes, industry experts we interviewed told us that the majority of e-cigarettes sold in the United States are imported from China. The U.S. e-cigarette market has developed rapidly in the last decade. U.S. Customs and Border Protection issued a customs ruling for the classification of e-cigarette imports to the United States as early as 2006. USPTO issued a registration for a trademark applied to e-cigarettes as early as May 2008 and had recorded more than 1,600 U.S. trademark registrations for e-cigarette devices, parts, liquid, and services as of March 2015. Hundreds of e-cigarette companies participate in the U.S. e-cigarette market. Large tobacco companies began entering the U.S. 
e-cigarette market in 2012 and now manufacture some of the leading closed system e-cigarette brands, according to industry experts we interviewed. Some industry experts we spoke with predict that the U.S. e-cigarette market will continue to grow, although factors such as the extent of federal and state regulation create uncertainty about the rate of growth. E-cigarettes are sold in multiple types of outlets, including traditional retail stores, such as convenience stores and grocery stores, as well as at “vape stores” and over the Internet. According to industry experts, closed system e-cigarette products are mainly sold in traditional retail outlets, while open system e-cigarette products are often sold online and at vape stores. Private companies collect point-of-sale data on the quantities and prices of e-cigarettes sold at traditional retail stores, according to documentation from these companies; however, these data do not cover online sales or “vape store” sales. Financial analysts from one firm estimate that 40 to 60 percent of e-cigarettes are sold online or at vape stores. In 2014, CDC reported a statistically significant increase in the percentage of U.S. adults who had used e-cigarettes in the preceding 30 days, from 1 percent in 2010 to 2.6 percent in 2013. Past-month e-cigarette use was especially prominent among current adult cigarette smokers and grew in this population, at a statistically significant level, from 4.9 percent in 2010 and 2011 to 9.4 percent in 2012 and 2013. Past-month e-cigarette use by former adult cigarette smokers also rose, from 1 percent to 1.3 percent during the same period, although the increase was not statistically significant. The National Youth Tobacco Survey by CDC and FDA showed a statistically significant increase in high school students’ past-month e-cigarette use, from 1.5 percent in 2011 to 13.4 percent in 2014. 
In addition, the survey found that in 2014, high school students’ past-month e-cigarette use surpassed their use of cigarettes and other tobacco products at a statistically significant level (see fig. 2). The survey further found a statistically significant increase in past-month e-cigarette use among middle school students. In April 2014, FDA issued a proposed rule to deem e-cigarettes and other products meeting the Tobacco Control Act’s definition of “tobacco product” to be subject to the agency’s regulation. FDA received more than 135,000 comments about the proposed deeming rule during the public comment period, which ended in August 2014. In its spring 2015 semiannual regulatory agenda, FDA announced its intent to issue the final rule in June 2015. The final rule had not been issued as of August 2015. The Tobacco Control Act aimed to, among other things, promote cessation to decrease health risks and social costs associated with tobacco-related diseases. According to the act, FDA can, by regulation, require restrictions on the sale, distribution, advertising, and promotion of a tobacco product if the agency determines that the proposed regulation is appropriate for the protection of public health, based on a consideration of the risks and benefits to the population as a whole, including users and nonusers of tobacco products. In the act, Congress recognized that virtually all new users of tobacco products are under the age of 18. In the proposed deeming rule, FDA stated that it was researching the effect of e-cigarette use on public health. FDA noted that e-cigarettes could have a positive net impact if using them resulted in minimal initiation by children and adolescents and in significant numbers of smokers’ quitting. 
FDA also noted that e-cigarette use could have a negative net impact if it resulted in significant initiation by young people, minimal quitting, or significant dual use of combustible products, such as cigarettes, and noncombustible products, such as e-cigarettes. The IRC, which defines tobacco products subject to FET and sets rates of tax, does not specifically define or list a tax rate for e-cigarettes. However, two states—Minnesota and North Carolina—have imposed an excise tax on e-cigarettes or vapor products containing nicotine. The Minnesota Department of Revenue issued a notice in 2012 stating its position that e-cigarettes are subject to the tobacco products tax; the current tax rate is 95 percent of the wholesale price of the nicotine-containing liquid or, if the liquid cannot be sold separately, of the complete e-cigarette. North Carolina has taxed vapor products at 5 cents per milliliter of nicotine-containing liquid or other material since June 2015. In addition, at least 18 states and the District of Columbia have proposed legislation to tax e-cigarettes, vapor products, nicotine vapor products, or e-cigarette cartridges since 2013. For example, a bill in Maine proposed to include e-cigarettes in its definition of cigarettes and to apply the same tax rate to cigarettes and e-cigarettes, and a bill in Montana proposed a tax on vapor products, such as e-cigarettes, that would be partially based on the weight in milligrams of the nicotine present in the product. As of January 2015, three countries—Italy, Portugal, and South Korea—imposed national-level taxes on e-cigarettes that contain nicotine, and each of these countries applies its tax to nicotine-containing e-cigarette liquid, according to an industry expert. In addition, according to research by the Law Library of Congress, Serbia recently enacted legislation to introduce an excise tax on e-cigarette liquid, which went into effect in August 2015. 
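To make the contrast between the two state schemes concrete, the following minimal Python sketch computes the tax each scheme would levy on the same product; the 30-milliliter bottle size and $6.00 wholesale price are invented for the example.

```python
# Illustrative comparison of the two state e-cigarette tax schemes
# described above. The product figures are hypothetical.

def minnesota_tax(wholesale_price):
    # Minnesota: 95 percent of the wholesale price of the
    # nicotine-containing liquid (or of the complete e-cigarette
    # if the liquid cannot be sold separately).
    return 0.95 * wholesale_price

def north_carolina_tax(liquid_ml):
    # North Carolina: 5 cents per milliliter of
    # nicotine-containing liquid.
    return 0.05 * liquid_ml

# A hypothetical 30 mL bottle of liquid wholesaling at $6.00:
print(minnesota_tax(6.00))       # → 5.7
print(north_carolina_tax(30))    # → 1.5
```

As the example suggests, an ad valorem tax such as Minnesota's scales with price, while a volume-based tax such as North Carolina's depends only on the quantity of liquid.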
Our analysis of Treasury data on cigarette FET revenue found no current evidence that e-cigarette use has affected the historical decreasing trend in FET collections over the past 6 years. We used a time series regression to determine the change in cigarette FET revenue from April 2009, when the last increase in FET on cigarettes and other tobacco products became effective, through December 2014. Variables in the model control for (1) historical decreases in cigarette FET revenue over the last 6 years; (2) quantities of cigars, pipe tobacco, and roll-your-own tobacco removed from domestic factories or released from customs custody for distribution in the United States; and (3) monthly seasonality effects. Our model tests for the inclusion of e-cigarettes at different points in time and tests for any significant changes from the historical trend. We found no significant evidence that e-cigarettes have decreased the collection of FET revenue from cigarettes at a rate greater than the 6-year historical trend. Specifically, we found that, when other variables in the model are held constant, the 6-year historical trend of cigarette FET revenue decreased at a rate between $4.4 million and $5.5 million per month (see fig. 3). However, we found no significant evidence of a decrease in FET revenue from cigarettes at a rate greater than the 6-year historical trend during the time frame when e-cigarettes have been on the U.S. market. We estimate that cigarette FET revenue would need to decrease by an additional $2 million to $3 million per month to signal a significant effect from e-cigarettes. Several factors may explain why our analysis did not detect an effect of e-cigarette use on cigarette FET revenue. First, the e-cigarette market—estimated at $2.5 billion in sales in 2014—is relatively small compared with the cigarette market, which had $80 billion in sales in the same year. 
As a result, without a substantial increase in the e-cigarette market, any effect on the cigarette market would be too minor to significantly affect cigarette FET revenue. Second, comprehensive and reliable data on e-cigarette sales and prices—which would enable us to corroborate the size of the e-cigarette market and accurately identify when it became significant—are not available. Third, comprehensive and reliable data about the extent to which e-cigarettes are used as substitutes for cigarettes are also not available. Without such data and information, estimating the effect of e-cigarette use on cigarette FET revenue will be difficult, even if the e-cigarette market continues to grow. How consumers' use of e-cigarettes relates to their use of cigarettes—whether e-cigarettes are substitutes, complements, or unrelated—may determine any effect of e-cigarette use on cigarette FET revenue. The relationship between the use of e-cigarettes and cigarettes is currently unknown, according to public health officials. Table 1 describes these three possible relationships and summarizes their potential revenue effects. The most recent data from the National Youth Tobacco Survey by CDC and FDA, showing high school students' increasing use of e-cigarettes and decreasing use of cigarettes (see fig. 2), suggest that cigarette FET revenue could decline further if these trends continue. If the percentage of high school students using cigarettes continues to decline, and if other factors such as current levels of regulation remain constant, the number of cigarette smokers could dwindle further in the coming years as the current cohort of high school students ages. A continued decline in cigarette smoking among high school students—which could be due, in part, to increased use of e-cigarettes—would reduce cigarette FET revenue at a greater rate than the average historical trend. 
FDA and CDC are undertaking efforts that, over time, may enable them to better understand e-cigarettes’ relationship to cigarettes and other tobacco products, according to agency officials. For example, FDA and CDC are refining survey instruments that they use to measure adults’ and youths’ use of e-cigarettes, cigarettes, and other tobacco products, such as the National Health Interview Survey and the National Youth Tobacco Survey. In addition, FDA, in collaboration with the National Institutes of Health, is funding a longitudinal cohort study, the Population Assessment of Tobacco and Health, which asks detailed questions about adults’ and youths’ use of e-cigarettes, cigarettes, and other tobacco products. FDA officials said that they expect to receive the data from the first year of the study in the summer of 2015. Further, according to FDA and CDC officials, other national surveys, state-level surveys, results of National Institutes of Health and other studies currently under way, and, if available, e-cigarette quantity data could help researchers analyze trends and observe statistical relationships. Treasury and FDA do not collect data on quantities of e-cigarettes on the U.S. market, and we did not identify any other federal agencies that do so. However, Treasury collects data on quantities of domestically manufactured tobacco products that are subject to FET to ensure that the proper FET amount is paid. FDA collects data on quantities of tobacco products that it regulates under its tobacco product authorities to calculate user fees that fund FDA’s tobacco regulation activities. Treasury and FDA collect data on quantities for different sets of tobacco products because their authorities to regulate tobacco products stem from different statutes: Treasury’s authorities stem from the IRC. The IRC defines “tobacco products” as cigarettes, roll-your-own tobacco, smokeless tobacco, cigars, and pipe tobacco and sets FET rates for these products. 
The IRC defines each of these products as containing or consisting of tobacco. FDA's tobacco product authorities stem from the Federal Food, Drug, and Cosmetic Act as amended by the Tobacco Control Act. The Tobacco Control Act defines "tobacco product," in part, as any product made or derived from tobacco. The act granted FDA immediate authority over cigarettes, cigarette tobacco, roll-your-own tobacco, and smokeless tobacco. The act also gave FDA authority to deem by regulation any other product meeting the Tobacco Control Act's definition of tobacco product to be subject to FDA's tobacco product authorities. Under this authority, in April 2014 FDA proposed to deem additional products, including e-cigarettes, to be subject to its tobacco product regulation. Treasury collects data on quantities of cigarettes and other federally taxed tobacco products from domestic manufacturers of these products, but does not collect such data for e-cigarettes, because the IRC does not define or list a tax rate for e-cigarettes. According to Treasury officials, on the basis of definitions of the tobacco products enumerated in the IRC, Treasury's ability to tax e-cigarettes—and collect data for them—depends on whether e-cigarettes contain tobacco. Treasury officials said that for e-cigarettes that do not contain tobacco, Treasury could not assert federal taxation and any related data collection by regulation; instead, such authority would require an act of Congress. As of August 2015, Treasury had not collected any FET or data associated with e-cigarettes, according to Treasury officials. FDA does not collect data on quantities of e-cigarettes sold on the U.S. market. 
FDA’s preliminary economic impact analysis accompanying the proposed deeming rule states that when the deemed products become subject to FDA’s tobacco product authorities, the agency can begin collecting data to determine the number of regulated entities and to monitor the number and type of unique products sold to the public. At present, FDA collects data on quantities of four tobacco products (cigarettes, cigarette tobacco, roll-your-own tobacco, and smokeless tobacco) that it regulates under its tobacco product authorities to apply the legally mandated method for allocating user fees among the domestic manufacturers and importers of those products. In July 2014, FDA stated that if additional products are deemed subject to its tobacco regulation, the agency would conduct a new rulemaking to make appropriate changes to the user fee regulation. FDA also stated that it recognized that the issue of whether it had authority to assess user fees on some deemed products was controversial and that it intended to solicit public comment to further explore issues related to user fee assessments on tobacco products that may be deemed subject to FDA’s tobacco product authorities. According to FDA officials, if e-cigarettes become subject to user fees, FDA would likely need data on quantities of e-cigarettes sold on the U.S. market, comparable to data that the agency collects for the four products currently subject to user fees. Table 2 summarizes information about Treasury’s and FDA’s collection of data on quantities of cigarettes, other tobacco products, and e-cigarettes. The Department of Labor’s Bureau of Labor Statistics (BLS) began collecting limited e-cigarette price information in September 2014 as part of its ongoing data collection for the Consumer Price Index. The Consumer Price Index provides monthly data on changes in the prices paid by urban consumers for a representative “basket” of goods and services. 
The index is divided into more than 200 categories representing the goods and services that an urban consumer might typically buy. BLS collects e-cigarette price information, under the category “tobacco products other than cigarettes,” for disposable e-cigarettes, starter kits, liquid refills, and e-cigarette replacement cartridges. These items may or may not contain nicotine and may have any flavor. According to BLS officials, the number of observations on e-cigarette prices is too small to calculate a reliable national average price or reliable state-level prices. According to the officials, U.S. consumers’ e-cigarette expenditures, while increasing, represent a small share of total expenditures in the representative basket of goods and services. Additionally, BLS officials explained that the Consumer Price Index sample for “tobacco products other than cigarettes” is refreshed over a 4-year cycle; the length of time it takes to fully replace samples causes Consumer Price Index sample shares (the percentage of the sample composed of the prices of a given product) to lag real-world percentages for items for which consumers’ expenditures are changing rapidly. The Consumer Price Index sample included 10 e-cigarette price observations as of June 2015 and, according to the BLS officials, will increase to 14 e-cigarette price observations by October 2015. BLS would require more resources in order to collect substantially more data on e-cigarettes, according to BLS officials. Our analysis shows no current effect of the growing e-cigarette market on FET revenue from cigarettes. Given the limited information about the e-cigarette market, it is difficult to accurately estimate this market’s size or analyze its potential effect on FET revenue from cigarettes and other tobacco products. The increased regulation of tobacco products at the federal and state level, among other things, has contributed to a decline in cigarette use and FET revenue. 
Recent CDC studies show that e-cigarette use has significantly increased among high school students, while cigarette use has significantly declined. As the regulation of e-cigarettes unfolds and the market develops, e-cigarette use patterns may change. Federal agencies’ efforts to develop a better understanding of the relationship between e-cigarette and cigarette use will help analysts and government officials develop a more complete understanding of the e-cigarette market and its effect on cigarette FET revenue. We provided a draft of this report to DOL, HHS, and Treasury. We also provided relevant portions to U.S. Customs and Border Protection and USPTO. We received technical comments from DOL, HHS, and Treasury and incorporated the comments as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Health and Human Services, Labor, and the Treasury; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines the extent to which (1) use of electronic cigarettes (also known as e-cigarettes) affects federal excise tax (FET) revenue from cigarettes and (2) data on quantities and prices of e-cigarettes on the U.S. market are available from federal agencies. To address these objectives, we reviewed documents and interviewed officials from the Department of the Treasury’s (Treasury) Alcohol and Tobacco Tax and Trade Bureau and Treasury’s Office of Tax Analysis, the Food and Drug Administration (FDA), the Centers for Disease Control and Prevention (CDC), the U.S. Bureau of Labor Statistics (BLS), and the U.S. 
Patent and Trademark Office (USPTO) to obtain information and views about e-cigarette and tobacco sales and revenue trends and regulation. We determined the reliability of USPTO e-cigarette trademark registration data by interviewing cognizant USPTO officials. We also interviewed industry experts, including e-cigarette industry members, tobacco industry members, financial analysts, researchers, and representatives of public health organizations. We interviewed organizations and companies that represent a range of perspectives. We spoke with industry associations that represent small and midsized e-cigarette companies; we also spoke with representatives of leading companies that produce e-cigarettes, as measured by dollar share from available data, including an independent e-cigarette company and tobacco companies. The views expressed by these representatives are not generalizable and do not represent the views of the entire e-cigarette industry. We also attended an e-cigarette industry conference as well as three FDA public workshops featuring current research on e-cigarette product science and implications of e-cigarette use for individual health and population health. To determine whether e-cigarette use affects cigarette FET revenue, we examined cigarette FET revenue from April 2009 through December 2014. For this analysis, we used monthly data obtained from Treasury on FET revenue from cigarettes removed from domestic factories or released from customs custody for distribution in the United States. In addition, using these removals data and testimonial evidence, we constructed a multivariate model that estimates the effect of e-cigarette use on cigarette FET revenue. In particular, we regressed cigarette FET revenue on a number of variables, including other tobacco products, a trend, presence of e-cigarettes on the market, and seasonality. We assessed the reliability of the data by checking the data for inconsistency errors and for completeness. 
We determined that the cigarette removals data were sufficiently reliable for the purposes of this report. See appendix II for more explanation of our analysis. To examine the extent to which data on quantities and prices for e-cigarettes on the U.S. market are available from federal agencies, we interviewed cognizant officials from Treasury, FDA, CDC, BLS, and the Congressional Budget Office, as well as industry experts. To describe Treasury’s collection of data on quantities of federally taxed tobacco products, we reviewed documents and interviewed officials from Treasury’s Alcohol and Tobacco Tax and Trade Bureau. To describe FDA’s collection of data on quantities of tobacco products regulated by the agency, we examined FDA’s regulatory actions, including its April 2014 proposed rule to deem additional products, including e-cigarettes, to be subject to the agency’s tobacco product authorities, and the July 2014 final user fee rule, and we interviewed cognizant FDA officials. To describe BLS’s collection of data on e-cigarette prices, we reviewed documents and interviewed BLS officials. We conducted this performance audit from September 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We constructed a multivariate model to estimate the effect of electronic cigarette (e-cigarette) use on federal excise tax (FET) revenue from traditional cigarettes. 
The model uses monthly data obtained from the Department of the Treasury’s (Treasury) Alcohol and Tobacco Tax and Trade Bureau and controls for a 6-year historical trend in cigarette FET revenue, from April 2009 through December 2014; the presence of other tobacco products (cigars, roll-your-own tobacco, and pipe tobacco); and seasonality effects. The model also includes a time variable that tests for the presence of e-cigarettes. We used five different dates during the period January 2012 to October 2013 for the time variable, and we estimated regressions for each date. In particular, the model uses the following equation:

cig_rev_t = α + β(cigars_t) + γ(ryo_pipe_t) + δ(trend_t) + θ(ecig_sample_t) + λ(monthly_indicators_t) + μ_t

where

cig_rev_t = the amount of FET revenue, in nominal dollars, from domestic and imported cigarettes collected in period t;
α = an intercept;
cigars_t = the sum of small and large cigar removals in number of sticks in period t;
ryo_pipe_t = the sum of roll-your-own and pipe tobacco removals in pounds in period t;
trend_t = a monthly trend that controls for the historical changes in cigarette revenue at period t;
ecig_sample_t = a dummy variable that equals one for each month on or after the date that indicates the presence of e-cigarettes in the market (because there is no clear indicator of this presence, we selected five different dates to indicate the beginning of this presence);
monthly_indicators_t = a set of eleven dummy variables controlling for monthly seasonality, with June as the reference month; and
μ_t = an error term assumed to be heteroskedastic and possibly autocorrelated.

David Gootnick, (202) 512-3149 or [email protected]. In addition to the contact named above, Christine Broderick (Assistant Director), Christina Werth, Sada Aksartova, Pedro Almoguera, Grace Lui, and Srinidhi Vijaykumar made key contributions to this report. 
In addition, Tina Cheng and Reid Lowe provided technical assistance.
While use of traditional cigarettes in the United States continues to decline, use of e-cigarettes is increasing. Treasury collects FET on cigarettes and other tobacco products manufactured in the United States. The Internal Revenue Code of 1986, as amended, does not specifically define or list a tax rate for e-cigarettes. The decline in cigarette use has led to a decline in cigarette FET revenue, from $15.3 billion in fiscal year 2010 to $13.2 billion in fiscal year 2014. FDA currently regulates four tobacco products. In April 2014, FDA proposed to deem additional tobacco products, including e-cigarettes, subject to its tobacco product authorities. GAO was asked to examine issues related to the U.S. e-cigarette market. This report examines the extent to which (1) e-cigarette use affects cigarette FET revenue, and (2) data on quantities and prices of e-cigarettes on the U.S. market are available from federal agencies. GAO conducted a regression analysis to assess the effect of e-cigarette use on cigarette FET revenue from April 2009 through December 2014, using Treasury data on FET revenue from cigarettes. GAO also reviewed agency documents and interviewed agency officials and industry experts. GAO's analysis found no evidence that use of electronic cigarettes (e-cigarettes) has affected federal excise tax (FET) revenue from traditional cigarettes, which has been declining over time (see figure). Possible reasons for the lack of a detectable effect include the small size of the e-cigarette market (estimated at $2.5 billion in 2014) relative to the cigarette market ($80 billion in the same year); lack of comprehensive and reliable data on e-cigarette quantities and prices; and lack of comprehensive and reliable information about the extent to which e-cigarettes substitute for cigarettes. If users consume e-cigarettes instead of cigarettes, cigarette FET revenue would decline as fewer cigarettes are consumed. 
Data from a recent survey by the Centers for Disease Control and Prevention showing high school students' increasing use of e-cigarettes and decreasing use of cigarettes suggest that these students may substitute e-cigarettes for cigarettes to some extent. If the percentage of high school students using cigarettes continues to decline, cigarette FET revenue could also decrease at a greater rate than the average historic trend observed since April 2009, when FET on cigarettes and other tobacco products was last increased. Comprehensive data on e-cigarette quantities and prices are not available from federal agencies. The Department of the Treasury (Treasury) and Food and Drug Administration (FDA) do not collect data on e-cigarette quantities comparable to data that they collect for cigarettes and some other tobacco products. According to FDA officials, if e-cigarettes are deemed subject to FDA's tobacco product authorities as a result of a rule proposed in April 2014, the agency could start collecting some data on the types of e-cigarettes on the U.S. market but will not collect data on the quantities of e-cigarettes sold. The Bureau of Labor Statistics began collecting data on e-cigarette prices in September 2014 as part of its data collection for the Consumer Price Index, but these data are limited. GAO is not making recommendations in this report.
Our analysis of FDIC data showed that while the profitability of most minority banks with assets greater than $100 million nearly equaled the profitability of all similarly sized banks (peers), the profitability of smaller minority banks and African-American banks of all sizes did not. Profitability is commonly measured by return on assets (ROA), or the ratio of profits to assets, and ROAs are typically compared across peer groups to assess performance. Many small minority banks (those with less than $100 million in assets) had ROAs that were substantially lower than those of their peer groups in 2005 as well as in 1995 and 2000. Moreover, African-American banks of all sizes had ROAs that were significantly below those of their peers in 2005 as well as in 1995 and 2000 (African-American banks of all sizes and other small minority banks account for about half of all minority banks). Our analysis of FDIC data identified some possible explanations for the relatively low profitability of some small minority banks and African-American banks, such as relatively higher reserves for potential loan losses and administrative expenses and competition from larger banks. Nevertheless, the majority of officials from banks across all minority groups were positive about their banks’ financial outlook, and many saw their minority status as an advantage in serving their communities (for example, in providing services in the language predominantly used by the minority community). The bank regulators have adopted differing approaches to supporting minority banks, and, at the time of our review, no agency had assessed the effectiveness of its efforts through regular and comprehensive surveys of minority banks or outcome-oriented performance measures. FDIC—which supervises more than half of all minority banks—had the most comprehensive program to support minority banks and led an interagency group that coordinates such efforts. 
Among other things, FDIC has designated officials in the agency’s headquarters and regional offices to be responsible for minority bank efforts, held periodic conferences for minority banks, and established formal policies for annual outreach to the banks it regulates to make them aware of available technical assistance. OTS also designated staff to be responsible for the agency’s efforts to support minority banks, developed outreach procedures, and focused its efforts on providing technical assistance. OCC and the Federal Reserve, while not required to do so by Section 308 of FIRREA, undertook some efforts to support minority banks, such as holding occasional conferences for Native American banks, and were planning additional efforts. FDIC proactively sought to assess the effectiveness of its support efforts; for example, it surveyed minority banks. However, these surveys did not address key activities, such as the provision of technical assistance, and the agency had not established outcome-oriented performance measures for its support efforts. Furthermore, none of the other regulators comprehensively surveyed minority banks on the effectiveness of their support efforts or established outcome-oriented performance measures. Consequently, the regulators were not well positioned to assess the results of their support efforts or identify areas for improvement. Our survey of minority banks identified potential limitations in the regulators’ support efforts that likely would be of significance to agency managers and warrant follow-up analysis. About one-third of survey respondents rated their regulators’ efforts for minority banks as very good or good, while 26 percent rated the efforts as fair, 13 percent as poor or very poor, and 25 percent responded “do not know.” FDIC-regulated banks were more positive about their agency’s efforts than banks that other agencies regulated. 
However, only about half of the FDIC-regulated banks and about a quarter of the banks regulated by other agencies rated their agency’s efforts as very good or good. Although regulators may emphasize the provision of technical assistance to minority banks, less than 30 percent of such institutions said they had used such agency services within the last 3 years. Therefore, the banks may have been missing opportunities to address problems that limited their operations or financial performance. As we found in our 1993 report, some minority bank officials also said that examiners did not always understand the challenges that the banks may face in providing services in their communities or operating environments. Although the bank officials said they did not expect special treatment in the examination process, they suggested that examiners needed to undergo more training to improve their understanding of minority banks and the customer base they serve. To allow the regulators to better understand the effectiveness of their support efforts, our October 2006 report recommended that the regulators review such efforts and, in so doing, consider employing the following methods: (1) regularly surveying the minority banks under their supervision on all efforts and regulatory areas affecting these institutions; or (2) establishing outcome-oriented performance measures to evaluate the extent to which their efforts are achieving their objectives. Subsequent to the report’s issuance, the regulators have reported taking steps to better assess or enhance their minority bank support efforts. For example, all of the regulators have developed surveys or are in the process of consulting with minority banks to obtain feedback on their support efforts. I also note that some regulators plan to provide additional training to their examiners on minority bank issues. These initiatives are positive developments, but it is too soon to evaluate their effectiveness. 
We encourage agency officials to ensure that they collect and analyze relevant data and take steps to enhance their minority bank support efforts as may be warranted. Many minority banks are located in urban areas and seek to serve distressed communities and populations that financial institutions traditionally have underserved. For example, after the Civil War, banks were established to provide financial services to African-Americans. More recently, Asian-American and Hispanic-American banks have been established to serve the rapidly growing Asian and Hispanic communities in the United States. In our review of regulators’ lists of minority banks, we identified a total minority bank population of 195 for 2005 (see table 1). Table 2 shows that the distribution of minority banks by size is similar to the distribution of all banks by size. More than 40 percent of all minority banks had assets of less than $100 million. Each federally insured depository institution, including each minority bank, has a primary federal regulator. As shown in table 3, FDIC serves as the primary federal regulator for more than half of minority banks—109 of the 195 banks, or 56 percent—and the Federal Reserve regulates the fewest. The federal regulators primarily focus on ensuring the safety and soundness of banks and do so through on-site examinations and other means. Regulators may also close banks that are deemed insolvent and posing a risk to the Deposit Insurance Fund. FDIC is responsible for ensuring that the deposits in failed banks are protected up to established deposit insurance limits. While the regulators’ primary focus is bank safety and soundness, laws and regulations can identify additional goals and objectives. Recognizing the importance of minority banks, Section 308 of FIRREA outlined five broad goals toward which FDIC and OTS, in consultation with Treasury, are to work to preserve and promote minority banks. 
These goals are: preserving the present number of minority banks; preserving their minority character in cases involving mergers or acquisitions of minority banks; providing technical assistance to prevent insolvency of institutions that are not currently insolvent; promoting and encouraging the creation of new minority banks; and providing for training, technical assistance, and education programs. Technical assistance is typically defined as one-to-one assistance that a regulator may provide to a bank in response to a request. For example, a regulator may advise a bank on compliance with a particular statute or regulation. Regulators also may provide technical assistance to banks that is related to deficiencies identified in safety and soundness examinations. In contrast, education programs typically are open to all banks regulated by a particular agency or all banks located within a regulator’s regional office. For example, regulators may offer training for banks to review compliance with laws and regulations. As shown in figure 1, our 2006 report found that, according to FDIC data, most minority banks with assets exceeding $100 million had ROAs in 2005 that were close to those of their peer groups, while many smaller banks had ROAs that were significantly lower than those of their peers. Minority banks with more than $100 million in assets accounted for 58 percent of all minority banks, while those with less than $100 million accounted for 42 percent. Each size category of minority banks with more than $100 million in assets had a weighted average ROA that was slightly lower than that of its peers, but in each case their ROAs exceeded 1 percent. By historical banking industry standards, an ROA of 1 percent or more generally has been considered to indicate an adequate level of profitability. 
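The ROA measure used throughout this comparison is a simple ratio of profits to assets. The sketch below is purely illustrative, with two hypothetical banks whose figures mirror the report's benchmarks (about 1 percent for peer groups versus the 0.4 percent average for small minority banks).

```python
# Illustrative return-on-assets (ROA) calculation; both banks are hypothetical.

def return_on_assets(net_income, total_assets):
    """ROA: profits expressed as a percentage of total assets."""
    return 100.0 * net_income / total_assets

# Hypothetical peer bank: $1.0 million in net income on $100 million in assets
peer_roa = return_on_assets(1_000_000, 100_000_000)

# Hypothetical small minority bank: $0.4 million on the same asset base
small_roa = return_on_assets(400_000, 100_000_000)

print(f"peer ROA: {peer_roa:.2f}%  small-bank ROA: {small_roa:.2f}%")
```

By the historical 1 percent benchmark noted above, the first bank is adequately profitable and the second is not, even though both hold the same assets.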
We found that the profitability of the larger Hispanic-American, Asian-American, Native American, and women-owned banks was close to, and in some cases exceeded, the profitability of their peers in 2005. In contrast, small minority banks (those with assets of less than $100 million) had an average ROA of 0.4 percent, and their peers had an average ROA of 1 percent. Our analysis of FDIC data for 1995 and 2000 also indicated some similar patterns, with minority banks with assets greater than $100 million showing levels of profitability that generally were close to those of their peers, or ROAs of about 1 percent, and minority banks with assets of less than $100 million showing greater differences with their peers. The profitability of African-American banks generally has been below that of their peers in all size categories (see fig. 2). For example, African-American banks with less than $100 million in assets—which constitute 61 percent of all African-American banks—had an average ROA of 0.16 percent, while their peers averaged 1.0 percent. Our analysis of FDIC data for 2000 and 1995 also found that African-American banks of all sizes had lower ROAs than their peers. Our analysis of 2005 FDIC data also suggests some possible reasons for the differences in profitability between some minority banks and their peers. For example, our analysis of 2005 FDIC data showed that African-American banks with assets of less than $300 million—which constitute 87 percent of all African-American banks—had significantly higher loan loss reserves as a percentage of their total assets than the average for their peers (see fig. 3). Although having higher loan loss reserves may be necessary for the safe and sound operation of any particular bank, they lower bank profits because loan loss reserves are counted as expenses. We also found some evidence that higher operating expenses might affect the profitability of some minority banks. 
Operating expenses—expenditures for items such as administrative expenses and salaries—typically are compared to an institution’s total earning assets, such as loans and investments, to indicate the proportion of earning assets that banks spend on operating expenses. As figure 4 indicates, many minority banks with less than $100 million in assets had higher operating expenses than their peers in 2005. Academic studies we reviewed generally reached similar conclusions. Officials from several minority banks we contacted also described aspects of their operating environment, business practices, and customer service that could result in higher operating costs. In particular, the officials cited the costs associated with providing banking services in low-income urban areas or in communities with high immigrant populations. Bank officials also told us that they focus on fostering strong customer relationships, sometimes providing financial literacy services. Consequently, as part of their mission these banks spend more time and resources per customer transaction than other banks. Other minority bank officials said that their customers made relatively small deposits and preferred to do business in person at bank branch locations rather than through potentially lower-cost alternatives, such as over the phone or the Internet. Minority bank officials also cited other factors that may have limited their profitability. In particular, the officials said that, in response to Community Reinvestment Act (CRA) incentives, larger banks and other financial institutions were increasing competition for minority banks’ traditional customer base. The officials said that larger banks could offer loans and other financial services at more competitive prices because they could raise funds at lower rates and take advantage of operational efficiencies. 
In addition, officials from some African-American and Hispanic banks cited attracting and retaining quality staff as a challenge to their profitability. Despite these challenges, officials from banks across minority groups were optimistic about the financial outlook for their institutions. When asked in our survey to rate their financial outlook compared with that of the past 3 to 5 years, 65 percent said it would be much or slightly better; 21 percent thought it would be about the same; 11 percent thought it would be slightly or much worse; and 3 percent did not know. Officials from minority banks said that their institutions had advantages in serving minority communities. For example, officials from an Asian-American bank said that the staff’s ability to communicate in the customers’ primary language provided a competitive advantage. Our report found that FDIC—which supervises 109 of 195 minority banks—had developed the most extensive efforts to support minority banks among the banking regulators (see fig. 5). FDIC had also taken the lead in coordinating regulators’ efforts in support of minority banks, including leading a group of all the banking regulators that meets semiannually to discuss individual agency initiatives, training and outreach events, and each agency’s list of minority banks. OTS had developed a variety of support programs, including a minority bank policy statement and a staffing support structure. OCC had also taken steps to support minority banks, such as developing a policy statement. OCC and the Federal Reserve had also hosted events for some minority banks. The following highlights some key support activities discussed in our October 2006 report. Policy Statements. FDIC, OTS, and OCC all have policy statements that outline the agencies’ efforts for minority banks. 
They discuss how the regulators identify minority banks, participate in minority bank events, provide technical assistance, and work toward preserving the character of minority banks during the resolution process. OCC officials told us that they developed their policy statement in 2001 after an interagency meeting of the federal banking regulators on minority bank issues. Both FDIC and OTS issued policy statements in 2002. Staffing Structure. FDIC has a national coordinator in Washington, D.C., and coordinators in each regional office from its Division of Supervision and Consumer Protection to implement the agency’s minority bank program. Among other responsibilities, the national coordinator regularly contacts minority bank trade associations about participation in events and other issues, coordinates with other agencies, and compiles quarterly reports for the FDIC chairman based on regional coordinators’ reports on their minority bank activities. Similarly, OTS has a national coordinator in its headquarters and supervisory and community affairs staff in each region who maintain contact with the minority banks that OTS regulates. While OCC and the Federal Reserve did not have similar staffing structures, officials from these agencies included contact with minority banks among their responsibilities. Minority Bank Events and Training. FDIC has taken the lead role in sponsoring, hosting, and coordinating events in support of minority banks. For example, in August 2006 FDIC sponsored a national conference for minority banks in which representatives from OTS, OCC, and the Federal Reserve participated. FDIC also has sponsored the Minority Bankers Roundtable (MBR) series, which agency officials told us was designed to provide insight into the regulatory relationship between minority banks and FDIC and explore opportunities for partnerships between FDIC and these banks. In 2005, FDIC held six roundtables around the country for minority banks supervised by all of the regulators. 
To varying degrees, OTS, OCC, and the Federal Reserve also have held events to support minority banks, such as events for Native American institutions. Technical Assistance. All of the federal banking regulators told us that they provided their minority banks with technical assistance if requested, but only FDIC and OTS have specific procedures for offering this assistance. More specifically, FDIC and OTS officials told us that they proactively seek to make minority banks aware of such assistance through established outreach procedures outside of their customary examination and supervision processes. FDIC also has a policy that requires its regional coordinators to ensure that examination case managers contact minority banks 90 to 120 days after an examination to offer technical assistance in any problem areas that were identified during the examination. This policy is unique to minority banks. OCC and the Federal Reserve provide technical assistance to all of their banks, but had not established outreach procedures for all their minority banks outside of the customary examination and supervision processes. However, OCC officials told us that they were in the process of developing an outreach plan for all minority banks regulated by the agency. Federal Reserve officials told us that Federal Reserve districts conduct informal outreach to their minority banks and consult with other districts on minority bank issues as needed. Policies to Preserve the Minority Character of Troubled Banks. FDIC has developed policies for failing banks that are consistent with FIRREA’s requirement that the agency work to preserve the minority character of minority banks in cases of mergers and acquisitions. For example, FDIC maintains a list of qualified minority banks or minority investors that may be asked to bid on the assets of troubled minority banks that are expected to fail. 
However, FDIC is required to accept the bids on failing banks that pose the lowest expected cost to the Deposit Insurance Fund. As a result, all bidders, including minority bidders, are subject to competition. OTS and OCC have developed written policies that describe how the agencies will work with FDIC to identify qualified minority banks or investors to acquire minority banks that are failing. While the Federal Reserve does not have a similar written policy, agency officials say that they also work with FDIC to identify qualified minority banks or investors. All four agencies also said that they try to help troubled minority banks improve their financial condition before it deteriorates to the point that a resolution through FDIC becomes necessary. For example, agencies may provide technical assistance in such situations or try to identify other minority banks willing to acquire or merge with the troubled institutions. While FDIC was proactive in assessing its support efforts for minority banks, none of the regulators routinely and comprehensively surveyed their minority banks on all issues affecting the institutions, nor had the regulators established outcome-oriented performance measures. Evaluating the effectiveness of federal programs is vitally important to managing programs successfully and improving program results. To this end, in 1993 Congress enacted the Government Performance and Results Act, which instituted a governmentwide requirement that agencies report on their results in achieving their agency and program goals. As part of its assessment methods, FDIC conducted roundtables and surveyed minority banks on aspects of its minority bank efforts. For example, in 2005, FDIC requested feedback on its efforts from institutions that attended the agency’s six MBRs (which approximately one-third of minority banks attended). 
The agency also sent a survey letter to all minority banks to seek their feedback on several proposals to better serve such institutions, but only 24 minority banks responded. The proposals included holding another national minority bank conference, instituting a partnership program with universities, and developing a minority bank museum exhibition. FDIC officials said that they used the information gathered from the MBRs and the survey to develop recommendations for improving programs and developing new initiatives. While FDIC had taken steps to assess the effectiveness of its minority bank support efforts, we identified some limitations in its approach. For example, in FDIC’s surveys of minority banks, the agency did not solicit feedback on key aspects of its support efforts, such as the provision of technical assistance. Moreover, FDIC has not established outcome-oriented performance measures to gauge the effectiveness of its various support efforts. None of the other regulators had surveyed minority banks recently on support efforts or developed performance measures. Because the regulators had not taken such steps, we concluded that they were not well positioned to assess their support efforts or identify areas for improvement. Further, the regulators could not take corrective action as necessary to provide better support to minority banks. Minority bank officials we surveyed identified potential limitations in the regulators’ efforts to support them and related regulatory issues, such as examiners’ understanding of issues affecting minority banks, which would likely be of significance to agency managers and warrant follow-up analysis. Some 36 percent of survey respondents described their regulators’ efforts as very good or good, 26 percent described them as fair, and 13 percent described the efforts as poor or very poor (see fig. 6). A relatively large percentage—25 percent—responded “do not know” to this question. 
Banks’ responses varied by regulator, with 45 percent of banks regulated by FDIC giving very good or good responses, compared with about 25 percent of banks regulated by other agencies. However, more than half of FDIC-regulated banks and about three-quarters of the other minority banks responded that their regulator’s efforts were fair, poor, or very poor or responded with a “do not know.” In particular, banks regulated by OTS gave the highest percentage of poor or very poor marks, while banks regulated by the Federal Reserve most often provided fair marks. Nearly half of minority banks reported that they attended FDIC roundtables and conferences designed for minority banks, and about half of the 65 respondents that attended these events found them to be extremely or very useful (see fig. 7). Almost a third found them to be moderately useful, and 17 percent found them to be slightly or not at all useful. One participant commented that the information was useful, as was the opportunity to meet the regulators. Many banks also commented that the events provided a good opportunity to network and share ideas with other minority banks. While FDIC and OTS emphasized technical services as key components of their efforts to support minority banks, less than 30 percent of the institutions they regulate reported using such assistance within the last 3 years (see fig. 8). Minority banks regulated by OCC and the Federal Reserve reported similarly low usage of technical assistance services. However, of the few banks that used technical assistance—41—the majority rated the assistance provided as extremely or very useful. Further, although small minority banks and African-American banks of all sizes have consistently faced financial challenges and might benefit from certain types of assistance, the banks also reported low rates of usage of the agencies’ technical assistance. 
Our survey did not address the reasons that relatively few minority banks appear to use technical assistance, and banking regulators cannot compel the banks they supervise to make use of offered assistance. Nevertheless, many such institutions may be missing opportunities to learn how to correct problems that limit their operational and financial performance. More than 80 percent of the minority banks we surveyed responded that their regulators did a very good or good job of administering examinations, and almost 90 percent felt that they had very good or good relationships with their regulator. However, as in our 1993 report, some minority bank officials said in both survey responses and interviews that examiners did not always understand the challenges the banks faced in providing services in their particular communities. Twenty-one percent of survey respondents mentioned this issue when asked for suggestions about how regulators could improve their efforts to support minority banks, and several minority banks that we interviewed elaborated on this topic. The bank officials said that examiners tended to treat minority banks like any other bank when they conducted examinations and thought such comparisons were not appropriate. For example, some bank officials whose institutions serve immigrant communities said that their customers tended to do business in cash and carried a significant amount of cash because banking services were not widely available or trusted in the customers’ home countries. Bank officials said that examiners sometimes commented negatively on the practice of customers doing business in cash or placed the bank under increased scrutiny relative to the Bank Secrecy Act’s requirements for cash transactions. 
While the bank officials said that they did not expect preferential treatment in the examination process, several suggested that examiners undergo additional training so that they could better understand minority banks and the communities that these institutions served. FDIC has conducted such training for its examiners. In 2004, FDIC invited the president of a minority bank to speak to about 500 FDIC examiners on the uniqueness of minority banks and the examination process. FDIC officials later reported that the examiners found the discussion helpful. Many survey respondents also said that a CRA provision that was designed to assist their institutions was not effectively achieving this goal. The provision allows bank regulators conducting CRA examinations to give consideration to banks that assist minority banks through capital investment, loan participation, and other ventures that help meet the credit needs of local communities. Despite this provision, only 18 percent of survey respondents said that CRA had—to a very great or great extent—encouraged other institutions to invest in or form partnerships with their institutions, while more than half said that CRA encouraged such activities to some, little, or no extent (see fig. 9). Some minority bankers attributed the provision’s limited effectiveness, in part, to a lack of clarity in interagency guidance on the act’s implementation. They said that the interagency guidance should be clarified to assure banks that they will receive CRA consideration for making investments in minority banks. Our 2006 report recommended that the bank regulators regularly review the effectiveness of their minority bank support efforts and related regulatory activities and, as appropriate, make changes necessary to better serve such institutions. 
In conducting such reviews, we recommended that the regulators consider conducting periodic surveys of minority banks or developing outcome-oriented performance measures for their support efforts. We also suggested that the regulators focus on the overall views of minority banks about support efforts, the usage and effectiveness of technical assistance (particularly assistance provided to small minority and African-American banks), and the level of training provided to agency examiners on minority banks and their operating environments. Over the past year, bank regulatory officials we contacted identified several steps that they have initiated to assess the effectiveness of their minority bank support efforts or to enhance such efforts. They include the following actions: A Federal Reserve official told us that the agency has established a working group that is developing a pilot training program for minority banks and new banks. The official said that three training modules have been drafted for different phases of a bank’s life, including starting a bank, operating a bank during its first 5 years of existence, and bank expansion. The official said that the program will be piloted throughout the United States beginning in early November 2007. Federal Reserve officials said that, throughout the course of developing, drafting, and piloting the program, they have consulted, and will continue to consult, with minority bankers to obtain feedback on the effort. An OCC official said that the agency recently sent a survey to minority banks on its education, outreach, and technical assistance efforts that should be completed by the end of October. OCC also plans to follow up this survey with a series of focus groups. 
In addition, the official said OCC just completed an internal survey of certain officials involved in supervising minority institutions, and plans to review the results of the two surveys and focus groups to improve its minority bank support efforts. FDIC officials told us that the agency has developed a survey to obtain feedback on the agency’s minority bank support efforts. They estimate that the survey will be sent to all minority institutions (not just those minority banks FDIC supervises) in mid-December 2007. An OTS official told us that the agency will send a survey on its efforts to the minority banks it supervises in the next couple of weeks and that it has also conducted a series of roundtables with minority banks in the past year. The federal banking agencies have also taken some steps to address other issues raised in our report. For example, Federal Reserve and FDIC officials told us that the agencies will provide additional training on minority bank issues to their examiners. In addition, in July 2007 the federal banking agencies published a CRA Interagency Notice that requested comments on nine new “Questions and Answers” about community reinvestment. One question covers how majority banks may engage in and receive positive CRA consideration for activities conducted with minority institutions. An OCC official said that the comments on the proposed “Q and As” are under review. While the regulators’ recent efforts to assess and enhance their minority bank support efforts and other activities are encouraging, it is too soon to assess their effectiveness. For example, the Federal Reserve’s pilot training program for minority and new banks is not scheduled to begin until later this year. Further, the other regulators’ efforts to survey minority banks on support efforts generally also are at an early stage. 
We encourage agency officials to ensure that they collect and analyze relevant data and take steps to enhance their minority bank support efforts as warranted. Mr. Chairman, this concludes my prepared statement. I would be happy to address any questions that you or subcommittee members may have. For further information about this testimony, please contact George A. Scott on (202) 512-7215 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions include Wesley M. Phillips, Assistant Director; Allison Abrams; Kevin Averyt; and Barbara Roesmann. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Minority banks can play an important role in serving the financial needs of historically underserved communities and growing populations of minorities. For this reason, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) established goals that the Federal Deposit Insurance Corporation (FDIC) and the Office of Thrift Supervision (OTS) must work toward to preserve and promote such institutions (support efforts). While not required to do so by FIRREA, the Board of Governors of the Federal Reserve System (Federal Reserve) and the Office of the Comptroller of the Currency (OCC) have established some minority bank support efforts. This testimony, based on a 2006 Government Accountability Office (GAO) report, discusses the profitability of minority banks, regulators' support and assessment efforts, and the views of minority banks on the regulators' efforts as identified through responses from a survey of 149 such institutions. GAO reported in 2006 that the profitability of most large minority banks (assets greater than $100 million) was nearly equal to that of their peers (similarly sized banks) in 2005 and earlier years, according to FDIC data. However, many small minority banks and African-American banks of all sizes were less profitable than their peers. GAO's analysis and other studies identified some possible explanations for these differences, including relatively higher loan loss reserves and operating expenses and competition from larger banks. Bank regulators had adopted differing approaches to supporting minority banks, but no agency had regularly and comprehensively assessed the effectiveness of its efforts. FDIC--which supervises over half of all minority banks--had the most comprehensive support efforts and leads interagency efforts. OTS focused on providing technical assistance to minority banks. While not required to do so by FIRREA, OCC and the Federal Reserve had taken some steps to support minority banks. 
Although FDIC had recently sought to assess the effectiveness of its support efforts through various methods, none of the regulators comprehensively surveyed minority banks or had developed performance measures. Consequently, the regulators were not well positioned to assess their support efforts. GAO's survey of minority banks identified potential limitations in the regulators' support efforts that would likely be of significance to agency managers and warrant follow-up analysis. Only about one-third of survey respondents rated their regulators' efforts for minority banks as very good or good, while 26 percent rated the efforts as fair, 13 percent as poor or very poor, and 25 percent responded "don't know." Banks regulated by FDIC were more positive about their agency's efforts than banks regulated by other agencies. However, only about half of the FDIC-regulated banks and about a quarter of the banks regulated by other agencies rated their agency's efforts as very good or good. Although regulators have emphasized the provision of technical assistance to minority banks, less than 30 percent of such institutions had used such agency services within the previous 3 years and therefore may be missing opportunities to address problems that limit their operations or financial performance.
The ATA program was established in 1983 to provide assistance to foreign countries in enhancing the ability of their law enforcement personnel to deter terrorists and terrorist groups from engaging in international terrorist acts such as bombing, kidnapping, assassination, hostage taking, and hijacking. The stated purposes of the ATA program’s activities are to (1) enhance the antiterrorism skills of friendly countries by providing counterterrorism training and equipment; (2) strengthen bilateral ties with partner nations by offering assistance; and (3) increase respect for human rights by sharing modern, humane, and effective antiterrorism techniques with foreign civil authorities. Within State, management of the ATA program is undertaken as a partnership between the Bureau of Counterterrorism and Countering Violent Extremism (CT), which conducts policy formulation, strategic guidance, and oversight, and the Bureau of Diplomatic Security (DS), which administers and implements the program. In addition, ATA officials work with officials from State’s regional bureaus and Regional Security Officers at U.S. posts overseas to help ensure that appropriate ATA participants are selected to receive training. Regional Security Officers also help ensure that ATA activities target key focus areas, including the threat of terrorism, individual country-level operational needs, and the advancement of U.S. national security interests. ATA uses its own training experts as well as those from other U.S. federal, state, and local law enforcement agencies, police associations, and private security firms and consultants to deliver a blend of training, mentoring, equipment, advising, and consulting to partner nations. As shown in figure 1, in fiscal years 2012 through 2016, State allocated approximately $715 million to the ATA program for training, mentoring, equipment, and other services to help partner nations build or enhance their counterterrorism capabilities. 
As shown in table 1, State has obligated or disbursed about $543 million (76 percent) of the approximately $715 million allocated to ATA in fiscal years 2012 through 2016. Of the $172 million in unobligated funds, $136 million (79 percent) are fiscal year 2016 funds still available for obligation through the end of fiscal year 2017. The remaining $36 million of the $172 million in unobligated balances was allocated in fiscal years 2012 through 2015; because these funds were not obligated within their initial period of availability for new obligations, they have expired. The Joint Explanatory Statement to the Consolidated Appropriations Act, 2017, directs State to conduct a review of unobligated ATA balances from fiscal year 2016. State has reported that, since its inception in 1983, the ATA program has trained and assisted more than 84,000 foreign security and law enforcement officials from 154 countries. As shown in figure 2, in fiscal years 2012 through 2016, State provided bilateral ATA assistance to 34 partner nations. State implements ATA training through the GATA contract signed in December 2011 and in effect during fiscal years 2012 through 2016, according to State officials. ATA officials told us that they secured two prime contractors to implement this contract who, in turn, manage subcontracts with several training facilities. The majority of ATA training occurs at facilities located abroad, either at facilities in recipient nations or at regional facilities. For example, ATA has agreements in place with the government of Jordan to use multiple facilities there to deliver ATA training to participants from Jordan as well as from other U.S. partner nations. According to State officials, as of June 2017, State was also negotiating an agreement to use facilities in Kenya for regional ATA training. In addition to overseas locations, about 10 percent of the ATA courses in fiscal years 2012 through 2016 were delivered at training facilities in the United States. 
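The funding arithmetic in this discussion can be checked directly; the figures below are the rounded dollar amounts cited in this statement:

```python
# ATA funding, fiscal years 2012-2016, in $ millions (rounded report figures)
allocated = 715            # total allocated to ATA
obligated = 543            # obligated or disbursed
unobligated = allocated - obligated   # remainder not yet obligated
still_available = 136      # FY2016 funds still available for obligation
expired = unobligated - still_available  # older funds no longer available

print(unobligated)                                   # unobligated balance
print(round(100 * obligated / allocated))            # percent obligated
print(round(100 * still_available / unobligated))    # percent still available
print(expired)                                       # expired balance
```

The computed values (172, 76 percent, 79 percent, and 36) match the figures reported above; the rounding in the source amounts means the percentages are approximate.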
State officials told us that training ATA participants at domestic facilities also offers senior U.S. government officials an opportunity to interact with partner nation officials, both of whom benefit from the direct diplomatic interaction. While classroom training is conducted in various localities across the United States, according to ATA officials, two facilities have been subcontracted to deliver tactical training: The O’Gara Group (O’Gara), located in Montross, VA, and Academi, a Constellis company (Academi), located in Moyock, NC. In addition, State officials told us about two relevant modifications to ATA program management that they are making or plan to make in relation to the contract used to secure services for the delivery of ATA training and the locations to be used for ATA training activities. First, in March 2017, State issued a request for proposals (RFP) for a new GATA contract that makes some technical modifications to contract language that we identified during our engagement and that ATA officials determined was unclear. Second, according to State officials, in fiscal year 2017, State finalized a shift of nearly all training delivered at facilities in the United States to locations in partner nations or regional training centers outside the United States. According to State officials, this approach is expected to generate savings on costs such as international travel and accommodations. Further, the RFP for the new training contract states a preference to use State’s planned Foreign Affairs Security Training Center, when it becomes available, for any tactical training that is delivered in the United States. 
Section 620M of the Foreign Assistance Act of 1961 (also known as the State Leahy law) prohibits the United States from providing assistance under the Foreign Assistance Act or the Arms Export Control Act to any unit of the security forces of a foreign country if the Secretary of State has credible information that such unit has committed a gross violation of human rights. In response to the State Leahy law, State has established a human rights vetting process to determine whether there is credible information of a gross violation of human rights for any potential recipient of assistance, such as ATA training. In accordance with State guidance, State may conduct individual or unit-level vetting, depending on the circumstances. This process generally consists of vetting by personnel representing selected agencies and State offices at U.S. embassies and at State headquarters in Washington, D.C.; State’s Bureau of Democracy, Human Rights, and Labor (DRL); and the relevant geographic bureau. These personnel are to screen prospective recipients nominated to receive assistance by searching relevant files, databases, and other sources of information for credible information about gross violations of human rights. Each embassy determines which agencies and State offices should participate in the embassy’s vetting process and, according to ATA officials, each individual’s unit affiliation if conducting unit-level vetting. Among other duties, DRL is responsible for overseeing the vetting process and for developing human rights vetting policies, in coordination with the regional and relevant functional bureaus. State processes, documents, and tracks human rights vetting requests and results through its INVEST system, a web-based database. 
ATA is to receive a list of vetted individuals from DRL, through INVEST, and requires the GATA contractors to cross-check that list with the participants who attend the first day of training to ensure that each has been vetted before any course information is presented. Conducting ATA training in the United States rather than at locations abroad requires additional logistical procedures that State and DHS must undertake, including issuing visas and granting admission to participants traveling to the United States, respectively. Prior to training in the United States, ATA participants must apply for a visa at a U.S. embassy or consulate abroad or with State’s Bureau of Consular Affairs. State’s consular officers evaluate visa applications and issue nonimmigrant A-2 visas—those for foreign government officials and employees traveling to the United States to engage solely in official duties or activities on behalf of their national government—to eligible travelers coming to the United States for ATA training. When foreign nationals arrive at a U.S. port of entry for admission to the United States to attend domestic ATA training, DHS officials determine whether to admit them into the United States. DHS officials grant ATA participants admission for what the agency refers to as duration of status. According to State officials, ATA participants’ status is generally tied to their participation in the associated ATA course and, therefore, they will generally only be recognized as entitled to A-2 status during participation in the ATA training and reasonable travel to and from the United States. While ATA participants are in the United States, they may be permitted to apply to DHS for certain immigration benefits and changes in immigration status, such as for asylum. 
According to DHS officials, ATA participants also are not subject to travel restrictions and can depart training facilities for purposes such as tourism and visiting family living in the United States, so long as they also maintain their status as participants in their ATA courses, for example by not being absent from training. According to State and DHS officials, if participants miss ATA course activities without authorization, and do not attain an alternative immigration status, they may become subject to removal procedures. ATA officials told us that, upon arrival at domestic training facilities, ATA participants receive a briefing from officials or contractors to ensure that they understand that they should not depart the facility without authorization and that any unauthorized departure will be reported to ATA for further action. State and DHS officials told us that, because ATA participants are admitted “for the duration of the period for which the alien continues to be recognized by the Secretary of State as being entitled to that status,” it is State’s responsibility to determine whether participants are entitled to A-2 status upon request by DHS. According to these officials, DHS cannot take any related enforcement action until State has confirmed that participants are no longer entitled to A-2 status. Once State has done so, DHS officials search U.S. Citizenship and Immigration Services databases to determine whether the participants in question have filed for a change in status or other benefits. According to DHS officials, if participants have not applied for or have been denied changes in status or other benefits, DHS may seek to remove them from the country on the grounds that they have violated the terms of their admission. State’s steps to oversee the security of the tactical training facilities used for domestic ATA training are predicated on the GATA contract. 
This contract has general requirements for the secure storage of equipment, including weapons and explosives, and some more specific requirements related to obtaining licenses and for controlling access to explosives ranges and armories. The contract requires, among other provisions, that tactical training facilities have secure storage for all explosives, ammunition, and equipment. The contract also states that the armory shall be secured and alarmed and have climate-controlled weapons storage and a maintenance shop. Finally, contractors are required to have the necessary federal, state, and local permits for the storage of weapons, ammunition, and explosives. The contract stipulates that explosives storage areas and facilities shall meet all federal, state, and local criteria for safe and secure storage. For example, the federal Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has promulgated a regulatory framework for explosives storage, possession, and use, including licensing criteria specifying that ATF may verify by inspection that applicants for permits and licenses have places to store explosive materials that meet certain safety and security requirements. The regulations also dictate the type of material from which the storage containers are to be made, depending on the type of explosives to be stored, and the types of locks that should be used to secure the containers. State reports receiving copies of the facilities’ federal licenses for storing, transporting, and handling explosives for the relevant training facilities. In addition, State oversees the GATA contract, including facility security provisions, through visits to the subcontracted training facilities and frequent interactions with the contractors. Facility visits. State officials visit the training facilities used by ATA to review security and other aspects of training delivery. 
For example, following the award of the GATA contract, ATA subject matter experts conducted a survey of O’Gara’s training facility, which included an examination of whether the facility had secure storage and firing ranges. Officials said that they did not conduct a similar survey of the Academi training facility at the time the GATA contract was awarded because it had been previously certified under a prior ATA training contract. State and contractor officials said that ATA program managers who visit training facilities during course delivery also review the sites to ensure that they are in compliance with contract requirements. Frequent interactions. State and prime contractor officials told us that they meet weekly to discuss operational and planning issues. Officials noted that there is no set agenda for these meetings because the topics are driven by events, and all issues are open for discussion. Contractors that manage ATA’s domestic training have taken a variety of required and voluntary steps to ensure security at the tactical training facilities. State officials said that it is the responsibility of the prime contractors to ensure that the training facility subcontractors have the necessary federal, state, and local permits for the storage of weapons, ammunition, and explosives. Both facilities we visited—O’Gara and Academi—had relevant, unexpired licenses. For example, they had ATF licenses for transporting, storing, and possessing explosives. They also had state or local licenses such as the Commonwealth of Virginia Explosives Usage Permit, the Virginia Fire Marshal’s Office Certified Blaster Certification, and North Carolina county special use permits for firing ranges and training facilities. Moreover, the prime contractor that implements the majority of ATA training performs an annual audit of both tactical training facilities to assess compliance with the GATA contract, including its facility security provisions. 
The prime contractor found the facilities in compliance with those provisions of the GATA contract. As shown in figure 3, we also observed during our November 2016 site visits that both the O’Gara and Academi training facilities used locked explosives storage containers, as required by ATF. In addition, we observed that both training facilities had locked and alarmed armories, as required by the GATA contract, with the alarms monitored by private security companies. In addition to taking steps to meet the GATA contract requirements, both training facilities we visited have taken voluntary actions related to facility security. ATF suggests security measures including installing fences, security cameras, and locked gates to increase security; however, the measures are not ATF licensing requirements. During investigative surveillance operations and escorted facility site visits in September 2016 and November 2016, respectively, we observed that both domestic tactical training facilities included some of these suggested security measures such as fences and natural barriers to deter and prevent unauthorized access to the facilities, warning signs, secured gates, security patrols, and surveillance cameras. These security measures align with the ATF’s suggestions for storing and safeguarding explosive materials. For example, the Academi facility has one main entrance with a gate, warning signs, and 24-hour armed security guards. The facility’s natural barriers include woods and farmland, and officials said that bears and snakes also deter unauthorized access. The O’Gara facility is located next to a highway and has a main entrance with a gate, a warning sign, and an unarmed security guard during business hours. The facility’s other entrance is restricted by an access code-controlled electronic gate. The O’Gara facility’s natural barriers also include woods and farmland. 
Furthermore, as shown in figure 3 above, both facilities had fences surrounding the explosives storage containers, a practice suggested by ATF, and contractors told us that the fences are locked when the containers are not being used for training. In response to the December 2015 media reports mentioned earlier that alleged that its facility had potential security vulnerabilities, O’Gara made several changes to the physical security of its training facility. For example, officials said that in August 2016, the company constructed a wood fence to block public observation of one of the areas of the facility used during ATA training. The first photo in figure 4 shows that during our September 2016 investigative surveillance operation, we observed this wood fence and a lift barrier gate deterring vehicular access to the training grounds. The second photo in figure 4 shows a locked chain link fence that O’Gara officials told us they installed in October 2016, which we observed during our November 2016 site visit. O’Gara officials told us that in November 2016, they added slats to the newly installed chain link fence to further reduce public observation, as shown in the third photo in figure 4. Using available ATA participant data, we confirmed that all ATA participants in a generalizable sample of 98 participants had been vetted at the individual or unit level or were not members of a security force with police powers and, therefore, did not require Leahy vetting, according to State officials and guidance. Of the 98 ATA participants in our sample, we determined that State had vetted 96 of those participants and that 2 participants were non-security forces without police powers and, therefore, did not require vetting. U.S. law prohibits assistance from being provided to any unit of the security forces of a foreign country if the Secretary of State has credible information that such unit has committed a gross violation of human rights. 
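The sample cross-check summarized above can be sketched in a few lines of code. The following Python sketch is purely illustrative: the names, dates, and record layout are hypothetical assumptions, since the actual formats of ATA participant records and INVEST vetting exports are not public.

```python
# Hypothetical sketch of the vetting cross-check: match sampled participant
# records against vetting completion records and classify each participant
# as confirmed (vetted before training began) or needing follow-up.
# All names, dates, and fields below are illustrative, not actual State data.
from datetime import date

# Sampled participant records: (name, training start date)
participants = [
    ("A. Example", date(2014, 3, 10)),
    ("B. Sample", date(2015, 6, 1)),
    ("C. Placeholder", date(2013, 9, 15)),
]

# Vetting records: name -> date on which vetting was completed
vetting_records = {
    "A. Example": date(2014, 2, 20),
    "B. Sample": date(2015, 5, 28),
    # "C. Placeholder" has no readily matching record; such cases would
    # require follow-up with DRL and ATA officials, as the report describes.
}

def check_vetting(participants, vetting_records):
    """Classify participants as vetted before training or needing follow-up."""
    confirmed, follow_up = [], []
    for name, start in participants:
        vetted_on = vetting_records.get(name)
        if vetted_on is not None and vetted_on <= start:
            confirmed.append(name)
        else:
            follow_up.append(name)
    return confirmed, follow_up

confirmed, follow_up = check_vetting(participants, vetting_records)
```

In practice, as the report notes, records absent from an initial export (for example, those filed under a broader funding category) would move a participant into the follow-up group even when vetting had in fact occurred.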
State has developed policies to prevent U.S. assistance from being used to provide training for units or individuals who have committed gross violations of human rights. We selected a generalizable random sample of 98 names from 2,271 available electronic records of ATA participants who had received training in the United States in fiscal years 2012 through 2016 and for whom ATA officials confirmed that vetting was required by State guidance. We cross-checked these names and associated training dates with human rights vetting data from State’s INVEST system—used to process and document human rights vetting—to determine if they were vetted before receiving training. For any participants for whom we could not readily confirm vetting, we worked with DRL and ATA officials to identify additional supporting evidence to confirm that participants had been vetted before training was provided. For example, DRL provided us with records from INVEST based on the use of “ATA” in INVEST’s funding source field. However, in some instances, vetting officials had used the broader category of funding of which ATA funds are a subset; as a result, those INVEST records were not included in the original data provided to us. In addition to prohibitions related to human rights violations, U.S. law prohibits assistance from being provided to any country if the Secretary of State has determined that the government of that country has repeatedly provided support for acts of international terrorism. From fiscal years 2012 through 2016, those countries were Cuba, Iran, Sudan, and Syria, none of which were ATA recipients during that time period. Beyond country-level prohibitions on support for state sponsors of terrorism, there is no formal requirement to screen individuals for terrorist activities, according to State officials. 
However, State includes criminal and terrorism screenings as part of its process at both the embassy and headquarters levels for checking the names of potential ATA participants before nominating them. For example, State described the process by which U.S. embassy officials conduct name checks through access to a variety of law enforcement databases, including the Terrorist Screening Center’s Terrorist Screening Database. The Terrorist Screening Database contains information about individuals known or suspected to be or to have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism and terrorist activities. In addition, State officials said that regional bureau personnel conduct terrorist activity screening of ATA participants through a national counterterrorism database. Further, State officials said that all visa applicants, including ATA participants, are subject to a standard suite of screening tools. ATA program data on the courses that ATA delivered in fiscal years 2012 through 2016, and the participants of those courses, are incomplete and inaccurate. ATA collects and maintains electronic information about delivered ATA courses and the participants of those courses in two separate systems: Snapshot, for course data, and the Student Training and Reporting Systems, for participant data. In response to our request for data from these systems, State initially provided data from the participant data system that included about 16,000 participants, rather than the more than 56,000 participants ATA reported training in fiscal years 2012 through 2016. In response to our questions about the completeness of these data, State undertook an effort to review available e-mail-based and other participant data that had not been systematically added to its participant data system and provided us with a revised response that included about 8,600 additional records. 
Even with these additions, the revised electronic participant data, which included about 25,000 participant records, remained incomplete, missing records for more than half of the reported 56,000 participants. In addition, the participant and course records that were included in the revised data were not always accurate. Course data. ATA course data are incomplete in that the data do not include all delivered courses. For the 4 fiscal years 2012 through 2015, ATA reported that 1,987 courses were delivered. The course data ATA provided to us included only 1,633 courses, or about 82 percent, of the courses ATA reported to have delivered in those 4 years. Our analysis of ATA participant data similarly indicates that the course data are incomplete, as some courses listed in the participant data were not included in the course data. For example, we identified 25 participant records that were associated with a Senior Crisis Management course that was not included in the course data. In addition to being incomplete, ATA course data elements are not always accurate. For example, the number of “participants” included in the course records ATA provided to us was not always accurate. ATA officials told us that while some course records may have initially included the maximum number of participants a course could accommodate, it was intended that records would be updated with the number of actual participants following the conclusion of training. In reviewing fiscal years 2012 through 2016 course records, ATA officials noted that some records may not have been updated and, therefore, they could not tell us if the participant numbers included in the course data represented the maximum capacity of a course or the number of participants who ultimately attended each course. Notwithstanding these weaknesses, ATA officials told us that the number of participants included in the course data system from which data were provided to us is used to report the official number of ATA students trained. 
However, the aggregate number of participants in the course data provided to us was about 41,000, or about 75 percent of the 56,000 participants ATA reported to have trained in fiscal years 2012 through 2016. We were not able to determine which total participant number was more reliable. ATA officials told us that the difference between the two figures might be explained by “in-house” training—such as sustainment training and mentorship—that was delivered to participants in Afghanistan and that was not captured in electronic data systems. ATA officials said that they plan to begin capturing such information in fiscal year 2017. Participant data. Data on individual participants that the ATA program collects and maintains in its electronic participant data system are also incomplete. As noted above, data in ATA’s participant data system account for only about 25,000, or less than half, of the 56,000 participants ATA reported to have trained in fiscal years 2012 through 2016. In addition, some individual electronic participant records do not contain complete information for all elements that the system is designed to capture. For instance, while ATA policy instructs officials to collect participant unit affiliations, we found that 15 percent of the approximately 25,000 participant records that ATA provided to us did not include information on each participant’s current assigned unit. In addition to these completeness concerns, we found that elements of the participant records included in the electronic data were not always accurate. For example, some participant records included course dates that did not align either with course dates identified in ATA’s course data or provided to us directly by the contractors who delivered the training. Moreover, the recipient partner nation included in some of ATA’s participant records was incorrect. 
For example, we identified 27 participant records with the partner nation incorrectly entered as Jordan, which was the location of the regional training facility where the training occurred, rather than the home country of the participants who had received the training. Further, hundreds of records noted a government agency, such as Ministry of the Interior, or a broad job type, such as police, in the “unit” data field, rather than a unit name. In addition, some participant records included a job title, such as security officer, in that data field. ATA officials acknowledged weaknesses in their processes to capture course and participant data and have taken some steps to improve the completeness of their participant data since the initiation of our review. First, as previously described, in response to our requests for data, ATA officials undertook an effort to add participant records that previously existed only in e-mails to the electronic participant data system. With this effort, ATA officials identified about 8,600 records that they added to their electronic participant data system and that we included in the data we used for our analysis. Second, officials noted that, partly in response to our ongoing review, ATA revised the standard operating procedures for data collection in November 2016 to more clearly guide staff who enter data into and use the course and participant data systems. For example, the revised procedures clarify the information that officials should capture in the “participant” field of the course data system, noting that when entering the numbers under the participant field, officials should enter the number of participants who actually participated in the course and not the maximum number of participants the course can accommodate. 
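The kinds of record-quality problems described above lend themselves to simple automated checks. The following Python sketch is an illustration of such a validation pass, not a depiction of any State system; the field names and the list of flagged terms are assumptions for the example.

```python
# Illustrative validation pass flagging the record problems the report
# describes: missing unit affiliations, and "unit" fields that actually
# contain an agency name or a job title rather than a unit name.
# Field names and flagged terms are hypothetical assumptions.
NON_UNIT_TERMS = {"ministry of the interior", "police", "security officer"}

def validate_record(record):
    """Return a list of data quality issues found in one participant record."""
    issues = []
    unit = (record.get("unit") or "").strip()
    if not unit:
        issues.append("missing unit affiliation")
    elif unit.lower() in NON_UNIT_TERMS:
        issues.append("unit field contains agency or job title, not a unit name")
    if not record.get("partner_nation"):
        issues.append("missing partner nation")
    return issues

records = [
    {"unit": "Special Operations Unit 3", "partner_nation": "Country A"},
    {"unit": "", "partner_nation": "Country B"},
    {"unit": "Police", "partner_nation": ""},
]
# Keep only records that fail at least one check, keyed by their index.
flagged = {i: validate_record(r) for i, r in enumerate(records) if validate_record(r)}
```

A check like the one sketched here could also catch the partner-nation errors the report identifies, for example by flagging records whose partner nation matches the training location rather than a participant's home country.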
In addition, the revised procedures outline steps that officials should take to help ensure the quality of information in the participant data system and the alignment of that participant data with information in the separate course data system. State’s Foreign Affairs Manual notes the importance of producing and maintaining adequate documentation of agency activities. In addition, ATA policy instructs officials to collect student names and unit affiliations, among other things, and State’s fiscal year 2014 Full Performance Plan Report identifies the “number of individuals in the security sector trained in counterterrorism knowledge and skills” as a performance indicator for the ATA program for fiscal years 2014 through 2017. Further, the Standards for Internal Control in the Federal Government state that agencies should clearly document transactions and all significant events. This could include records of courses delivered and participants trained. Federal internal control standards also state that management should periodically review procedures and related control activities to determine that those activities are implemented appropriately. Although ATA has revised its data collection procedures with the intent to improve data completeness and accuracy, ATA officials told us that prior standard operating procedures to capture electronic data have not always been followed. For example, they explained that a series of personnel changes involving staff responsible for data entry led to inconsistent implementation of the data collection procedures in place during fiscal years 2012 through 2016. Without management efforts to ensure the implementation of ATA’s revised procedures, ATA will lack reasonable assurance that its data collection efforts will improve data completeness and accuracy, and officials may not be able to accurately report the number of participants trained, in line with program performance indicators. 
State and DHS have acted on 10 documented unauthorized departures by participants from ATA training activities in the United States since fiscal year 2012. Of the 10, 3 departed from their training facility during overnight hours in 2013; 6 fled during escorted class excursions, such as shopping trips, in 2014; and 1 absconded in 2016 during escorted transit from the airport to the training facility. After making their unauthorized departures, these 10 participants have pursued various courses of action. According to DHS data, 2 of the 10 departed the United States for countries other than their own home country, and 6 remain in the United States, having applied to DHS for asylum and been granted work authorization while their asylum applications are adjudicated by DHS. According to DHS officials, none of these 8 former ATA participants is currently in violation of the terms of their admission to the United States, as each has departed or has a pending application for an alternative immigration status. The ninth ATA participant, who made an unauthorized departure from an October 2014 training event in the United States, is believed by DHS to be in the United States without having applied for an alternative immigration status. ATA officials explained that, after discovering the participant’s absence, ATA notified DHS that a participant was missing. Officials told us that when DHS learns about this type of incident, DHS officials request notification from State that the participant in question is no longer entitled to A-2 status, which was predicated on their participation in State’s ATA training, as described previously. In this case, once DHS made this request and State determined that the ATA participant was no longer entitled to A-2 status, the participant became subject to potential removal from the United States. As of June 2017, according to DHS, the former participant remains the subject of an open investigation. 
DHS officials told us that they are taking proactive steps to locate the individual, who was not known to pose a threat to national security. As of September 15, 2017, we had not received requested information regarding the status of the tenth individual. When each of these 10 participants made unauthorized departures, the ATA program had standard operating procedures in place to direct officials’ actions in cases where a participant makes an unauthorized departure from training or during transit between the airport and training facility before and after training. However, ATA officials noted that the procedures were not always followed. Further, the procedures in place through 2014 did not specifically include sharing information with DHS. Our analysis of information related to the 9 documented unauthorized departures during fiscal years 2013 and 2014 indicates that in 3 cases, more than a year passed before relevant information was provided to the DHS unit responsible for investigating nonimmigrant visa holders who violate their immigration status. In January 2015, ATA revised these standard operating procedures to clarify the steps to be taken if a participant makes an unauthorized departure. For example, the revised procedures note that if a participant attending ATA training has been missing for 24 hours, ATA should contact the U.S. Regional Security Officer for the participant’s partner nation and notify DHS. ATA officials provided information to DHS on the same day that the aforementioned 2016 unauthorized departure occurred. Both training facilities we visited also had procedures providing guidance to their employees specifying how to respond to the unauthorized departure of an ATA participant. For example, the facilities’ procedures acknowledge that facility staff are not to restrain participants from departing facilities, because the terms of their admission to the United States do not restrict them from doing so. 
In addition, the Academi facility guidelines for delivering ATA training instruct employees to “contact any Academi ATA staff immediately” if any participants are missing during an outing. The O’Gara facility’s procedures for hosting international students note that “although O’Gara and our prime contractors work hard to ensure 100 percent accountability of all international students, they may still decide to prematurely depart training without notice or permission. When this occurs, O’Gara is required to immediately notify the respective prime contractor, and in turn, the associated […]. O’Gara’s role is an investigatory support role whereas we provide witness statements, lead instructor statements, copies of associated closed-circuit television camera footage and other information as required.” According to ATA officials, ATA’s oversight process for domestic training participants does not include confirming that participants return to their home countries to use their new skills, and the departure of some participants who completed their training is unconfirmed. ATA officials and staff at the training facilities we visited described their responsibilities for overseeing ATA participant departures to include escorting ATA participants to the airport, helping them check in for their flights, and escorting them to airport security. We spoke with Regional Security Officers who help oversee ATA activities in three partner nations, all of whom described informal follow-up processes with ATA participants, including those trained abroad, but none of whom used a systematic process to confirm the return of all participants trained in locations outside their home countries. ATA’s standard operating procedure for unauthorized departures does not cover this portion of a participant’s travel home. Prior to our review, ATA officials had not reviewed data to determine if any participants who completed training failed to leave the United States and return to their home country. 
In response to our inquiry, during fiscal year 2017, ATA identified 20 former ATA participants for whom DHS records do not indicate departures from the United States following the completion of their ATA training in fiscal years 2012 through 2016, as seen in figure 5. Following the initiation of our engagement, ATA officials requested from DHS all arrival and departure records for foreign nationals admitted to the United States in fiscal years 2012 through 2016 using A-2 visas, including ATA participants. ATA officials reported that they reviewed departure information for more than 69,000 A-2 visa holders recorded in ADIS to manually identify departure information for 2,773 ATA participants trained in the United States during that time period and included in electronic participant data. ATA’s analysis identified 20 participants for whom DHS data did not include departure records and who, therefore, might still be in the United States. ATA officials told us that they had asked the U.S. Regional Security Officer for the partner nation of 1 of these participants for any related information and that the officer had been unaware that the participant may not have returned from training. DHS information we requested for each of the 20 participants in question indicated that 1 had applied for an alternative immigration status, but DHS found no records of applications for immigration status changes for the remaining 19. Eleven of these 19 had been participants in the same fiscal year 2013 course. ATA officials noted that during their review of ADIS information, each of the 20 appeared to be “in legal status,” which DHS officials explained to us would remain the case for all nonimmigrants with A-2 status until DHS received a determination from State that any individuals in question were no longer entitled to A-2 status. 
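The matching exercise ATA officials described, comparing trained participants against arrival and departure records to identify those with no recorded departure, can be sketched as follows. This is a hedged illustration only: the identifiers and the record layout are hypothetical, and actual ADIS exports are far larger and more complex than this example.

```python
# Hypothetical sketch of the departure-record matching described above:
# for each trained participant, look for a recorded departure; those with
# none may still be in the United States and may warrant follow-up.
# Identifiers and the log layout are illustrative assumptions.
participants = ["P1", "P2", "P3", "P4"]

# departure_log: participant id -> list of recorded departure dates
# (an empty list, or a missing entry, means no departure on record).
departure_log = {
    "P1": ["2014-05-02"],
    "P2": [],
    "P3": ["2015-11-20"],
    # "P4" is absent from the log entirely.
}

def find_no_departure(participants, departure_log):
    """Return participants with no recorded departure from the United States."""
    return [p for p in participants if not departure_log.get(p)]

no_departure = find_no_departure(participants, departure_log)
```

As the report notes, a match failure here is only a lead, not a conclusion: an individual without a departure record may have applied for an alternative immigration status, so results would need to be reconciled with U.S. Citizenship and Immigration Services records before any enforcement referral.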
As a result, a draft of this report provided to State in July 2017 included a recommendation that State provide information to DHS about former participants who may have remained in the United States following the completion of ATA training. After reviewing the draft report and recommendation, ATA formally notified DHS about such former participants in August 2017. State and DHS officials stated that A-2 status complicates the ability of DHS officials to independently identify individuals who remain in the United States and may warrant removal. DHS uses ADIS to maintain, among other things, entry and departure data for tracking immigrants and nonimmigrants and to facilitate the investigation of individuals who may have violated their immigration status by remaining in the United States beyond their authorized stay. DHS officials explained that for visitors admitted with an “admit until date,” DHS systems can alert officials that an individual who should have departed may not have complied with the terms of their admission. However, according to DHS officials, because all A-2 visa holders, including ATA participants, are admitted to the United States for duration of status without a specific admit until date, as previously described, there is no similar indicator in ADIS that an individual may have remained in the United States beyond their authorized stay. Instead, DHS would need other means to identify individuals with A-2 status, including ATA participants, who may warrant follow-up. State and DHS officials suggested that such identification could happen if DHS officials encounter the individual in the course of other activity or if someone tells DHS that the individual may no longer be eligible for A-2 status. For example, DHS officials told us that for some training programs sponsored by the Department of Defense whose participants are also admitted to the United States on A-2 visas, the agency asks U.S. 
military attachés stationed in partner nations to confirm participants’ return so that the department can notify DHS of any who do not. As noted previously, regardless of how DHS learns of such A-2 status individuals, State must issue an official determination that the individuals are no longer entitled to their A-2 status before DHS can begin removal proceedings. State officials involved in making these determinations noted that they would similarly not be aware of such individuals unless (a) someone familiar with the situation told them; or (b) DHS, having otherwise learned about such individuals, requested an official determination regarding their eligibility for A-2 status. ATA officials at headquarters and Regional Security Officers posted in two partner nations told us that while some domestic ATA participants engage in personal travel following the conclusion of training, they typically depart the United States immediately. Using a subset of 443 participants trained in the United States during fiscal years 2012 through 2016 for whom we could reliably determine departure dates, we found that 386 participants, or 87 percent, left the United States within 2 days following the completion of training, as shown in figure 6. The remaining 57 participants, or 13 percent, remained in the United States for 3 to 21 days before departing. The ATA program has no process for confirming that participants return to their home countries after completing training—either immediately or following personal travel—because there is no legal requirement that they do so, according to ATA officials. However, the Standards for Internal Control in the Federal Government state that agencies should design control activities such as policies, procedures, and mechanisms to achieve objectives and enforce management directives. In addition, a stated purpose of the ATA program is to enhance the antiterrorism skills of friendly countries by providing counterterrorism training and equipment. 
Without a process to confirm and document that ATA participants return to their home countries, ATA may not be able to assess the extent to which participants are making use of training to help detect, deter, and prevent acts of terrorism, in line with program goals. In addition, without some way to identify ATA participants who do not return home and, therefore, may have remained in the United States following the completion of ATA training, ATA may not be able to provide information to DHS about participants whose failure to depart may warrant enforcement action. Building partner capacity is a central focus of U.S. counterterrorism strategy, and the ATA program, for which State allocated more than $700 million in fiscal years 2012 through 2016, is among State’s primary mechanisms for accomplishing that goal. ATA has demonstrated a commitment to making improvements to the program with recent efforts such as correcting errors and omissions in historical participant data. However, we identified weaknesses in program data and participant oversight that may limit the effectiveness of program management. First, we found significant weaknesses in ATA program data. Officials told us that procedures for the collection of course and participant data have been inconsistently implemented. Although State revised these procedures in 2016, in light of the limited implementation of prior procedures, management review of related control activities could help ensure that revised procedures are properly implemented. Without data quality improvements, program managers may not have comprehensive or accurate information with which to confirm compliance with human rights vetting requirements, ensure participant compliance with the terms of their admission to the United States, and report on and assess the achievement of program goals. 
Second, ATA does not confirm that all participants trained in the United States or at regional training centers return to their home countries after training because it lacks a process to do so. ATA’s analysis of the available electronic participant data indicated that the vast majority of participants who received ATA training in the United States during fiscal years 2012 through 2016 departed following the completion of training. However, its analysis of those limited data also indicated that ATA had been unaware of at least 20 participants who may have remained in the United States. Without knowing whether all participants trained in the United States or at regional training centers return to their home countries to implement the skills they learned during ATA training, it may be difficult to accurately assess the effectiveness of program activities. In addition, without this information for those trained in the United States, it will be difficult for ATA to identify and provide information to DHS about participants whose unconfirmed departures may warrant enforcement action. We are making the following two recommendations to the Department of State: The Assistant Secretary of State for Diplomatic Security should take steps to ensure the implementation of revised standard operating procedures for collecting electronic ATA course and participant data. (Recommendation 1) The Assistant Secretary of State for Diplomatic Security should develop and implement a process to confirm and document whether future ATA participants return to their home countries following the completion of ATA training and, for any participants trained in the United States who do not, share relevant information with the Department of Homeland Security. (Recommendation 2) We provided a draft of this report, which included three recommendations, to the Departments of State and Homeland Security for comment. 
State provided written comments, which we have reprinted in appendix II, concurring with all of our recommendations. In response to the first recommendation, State noted that ATA had revised its standard operating procedures for collecting data and shared the document with us. We will follow up with ATA regarding steps taken to ensure the implementation of those procedures. In response to the second recommendation, State stated that, by the end of the year, it will implement a process to ensure that participants sent to ATA training in the United States return to their home countries. We will follow up with ATA regarding the implementation of such a process for participants sent to ATA training in the United States or other locations outside of their home countries. Lastly, State noted that it had already implemented the third recommendation. Having received evidence that State had provided the relevant information to DHS, we removed this recommendation from the final report. The Department of Homeland Security provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and to the Departments of State and Homeland Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6991 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
To determine what steps the Department of State (State) has taken to ensure that facilities used for Antiterrorism Assistance (ATA) training in the United States align with applicable facility and equipment security requirements, we analyzed the security requirements in the Global Antiterrorism Training (GATA) contract, which is used to secure third- party services to manage and deliver ATA training activities. We compared those requirements to documents, such as federal, state and local licenses, obtained from the contractors who implement GATA as well as to observations we made during investigative surveillance operations and escorted site visits to two domestic tactical training facilities used by ATA. We selected this nongeneralizable sample of two facilities because a significant proportion of ATA domestic students were trained there, and they were locations with courses that had equipment that needed to be secured on-site. We also reviewed Bureau of Alcohol, Tobacco, Firearms and Explosives regulations for explosives licenses and storage and suggestions for facilities that store and use explosives. Furthermore, we reviewed additional State and contractor documents related to facility and equipment security such as a survey and audits of the training facilities. We interviewed State program and contracting officials about their oversight of the GATA contract requirements. We also interviewed contractor officials to obtain information on how they comply with the GATA contract; federal, state and local licensing requirements; and other facility and equipment security measures they employ. To assess the extent to which State has vetted domestic ATA participants for human rights concerns, we reviewed Section 620M of the Foreign Assistance Act of 1961 (also known as the State Leahy law) and analyzed State documents establishing its policies and procedures for complying with that law and conducting human rights vetting. 
For example, we analyzed policies and procedures documented in State’s 2012 and 2017 Leahy vetting guides and State’s 2010 International Vetting and Security Tracking (INVEST) system user guide. Using the fiscal years 2012 through 2016 ATA participant data, we developed a generalizable random sample of 100 names from a population of 2,271 ATA participants who were trained in the United States and received foreign assistance funding for which vetting is required, in accordance with State guidance. We then cross-checked the names in our sample with human rights vetting data from the INVEST system to verify that the ATA participants were vetted before receiving the training. For any participants for whom we could not readily confirm vetting, we worked with State’s Bureau of Democracy, Human Rights, and Labor (DRL) and ATA officials to identify additional supporting evidence to confirm that participants had been vetted before training was provided. After selecting our sample, through the process of following up with DRL and ATA officials, we discovered that our sample included one interpreter and one participant in the Special Program for Embassy Augmentation and Response (SPEAR) training—for neither of whom would human rights vetting have been required by State guidance, according to officials. ATA officials said that the interpreter should not have been included in the data because interpreters are not participants and that ATA would remove all interpreters from the participant data system. Officials also said that the misidentification of the SPEAR participant was the result of a data entry error in their system. Excluding these individuals reduced our sample size from 100 to 98. As discussed in this report, ATA’s participant data were incomplete and, therefore, we could only draw our sample from those participants trained in the United States for whom ATA had electronic records in its data system. 
We determined that the data available were sufficiently reliable (1) to identify participants who had taken courses in the United States and (2) to assess whether the participants for whom there were records in ATA’s participant data system had been appropriately vetted. However, we could not generalize our findings about vetting from this group for which ATA had records to those participants who were not recorded in its system. When our finding that all 98 participants in our sample had been vetted is generalized to the full population of 2,271 recorded participants who were trained in the United States and for whom vetting was required, the resulting confidence interval is between 97 and 100 percent, at a 95-percent confidence level. To gather additional information on human rights vetting, we interviewed officials from ATA and DRL, which oversees human rights vetting in coordination with the regional and relevant functional bureaus. To describe how State screens participants for terrorist activity, we reviewed U.S. law that prohibits assistance from being provided to any country if the Secretary of State has determined that the government of that country has repeatedly provided support for acts of international terrorism. Those states for which the Secretary has made this determination are referred to as state sponsors of terrorism. We compared the countries on that list of state sponsors of terrorism to the countries for which ATA allocated funding in fiscal years 2012 through 2016 as well as the list of potential ATA partner nations as of fiscal year 2013. In addition, we interviewed State officials about their processes and embassy data systems used for screening potential ATA participants for terrorist activity. To examine the extent to which State has implemented data collection and program policies to promote oversight of ATA participants, we analyzed ATA participant and course data and Department of Homeland Security (DHS) arrival and departure data. 
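The 97-to-100 percent interval for the all-successes sample described above can be reproduced with an exact (Clopper-Pearson) lower confidence bound; this report does not specify the estimation method used, so the closed form below is an assumption, and it ignores any finite-population correction.

```python
def exact_lower_bound(successes: int, n: int, confidence: float = 0.95) -> float:
    """One-sided exact (Clopper-Pearson) lower confidence bound for a
    proportion when every sampled unit is a success: alpha ** (1 / n).
    Assumed method; the report does not state how its interval was computed."""
    if successes != n:
        raise ValueError("this closed form applies only when all units are successes")
    alpha = 1.0 - confidence
    return alpha ** (1.0 / n)

# 98 of 98 sampled participants confirmed vetted -> lower bound of
# roughly 0.97, consistent with a 97-to-100 percent interval.
lb = exact_lower_bound(98, 98)
```

The intuition is that if the true vetting rate were below the bound, observing 98 successes in 98 draws would be less than 5 percent likely.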
We reviewed State, DHS, and contractor documents, including State’s report on its analysis of immigration exit records for ATA participants trained at U.S. facilities, and interviewed cognizant agency officials and contractors. With respect to reporting on the ATA participant and course data, our review of State’s response to our initial data request generated questions about the quality and completeness of the information provided. In response to our questions, State undertook an effort to review program participant data and provided us with a revised response. Using the revised data, we analyzed the extent to which the data included records for all participants ATA reported to have trained in fiscal years 2012 through 2016 as well as the extent to which data fields were populated. We also compared information in data fields that appeared in both ATA participant and course data systems to determine data accuracy and consistency. Further, we compared ATA data to data provided directly by the contractor that implemented the majority of ATA training during fiscal years 2012 through 2016 as an independent source of information with which to assess the accuracy and completeness of ATA’s participant and course data, particularly course dates used in other analyses. We reviewed information about the systems used to house the data and spoke with knowledgeable agency officials in Washington, D.C., and Dunn Loring, Virginia, responsible for the databases about agency processes for collecting the data and for ensuring data quality. While the data provided were sufficiently reliable for the purposes of documenting the extent to which State has implemented data collection processes to promote oversight of ATA participants, the data in the participant and course data systems are not comparable, and neither system contains complete and accurate records, as discussed in this report. 
In addition to reporting on these problems, we augmented a subset of records with date of birth information that allowed us to use DHS data to analyze domestic participant departures, as described below, but noted that the results for this subset are not generalizable to the universe of all ATA participants. In addition, we reviewed guidance included in ATA’s standard operating procedures for collection of participant data. The Standards for Internal Control in the Federal Government also state that agencies should clearly document transactions and all significant events, such as records of courses delivered and participants trained. Federal internal control standards also state that management should periodically review procedures and related control activities to determine that those activities are implemented appropriately. Furthermore, State’s Foreign Affairs Manual notes the importance of producing and maintaining adequate documentation of agency activities. State’s fiscal year 2014 Full Performance Plan Report identifies the “number of individuals in the security sector trained in counterterrorism knowledge and skills” as a performance indicator for the ATA program for fiscal years 2014 through 2017. With respect to reporting on ATA’s policies regarding unauthorized departures from training activities in the United States, we reviewed ATA documents regarding the 10 documented unauthorized departures and discussed these events with ATA officials and contractor staff at the facilities that hosted some of the participants who departed. We also discussed such events via teleconferences with U.S. embassy officials in three ATA partner nations—Bangladesh, Indonesia, and Jordan— selected based on criteria such as number of ATA participants trained and in light of countries included in recently completed or ongoing GAO and State Inspector General reviews of the ATA program. 
We also obtained and analyzed information from DHS regarding the departure and immigration status for 9 of these 10 participants. We analyzed ATA and contractor documents outlining procedures to be used if an ATA participant makes an unauthorized departure from training activities in the United States. Regional training facilities are outside the scope of this review. State officials told us they were unaware of any instances of unauthorized departure from regional training centers. With respect to reporting on ATA’s processes regarding participants who fail to return to their home country following training at domestic facilities, we discussed existing related policies and procedures with knowledgeable ATA officials. The Standards for Internal Control in the Federal Government state that agencies should design control activities such as policies, procedures, and mechanisms to achieve objectives and enforce management directives. A stated purpose of the ATA program is to enhance the antiterrorism skills of friendly countries by providing counterterrorism training and equipment. To conduct an analysis regarding the extent to which ATA participants trained at domestic facilities depart immediately following the completion of training, we identified 2,712 unique participant records among the 24,885 records ATA provided that were associated with fiscal years 2012 through 2016 training at domestic facilities. For 535 of these 2,712 participants, we were able to obtain dates of birth for DHS and GAO to use for data reliability purposes in identifying and analyzing related departure data, respectively. To identify birth dates, we used manual and automated processes to augment ATA participant records with date of birth information from other State systems to serve as a unique identifier for data reliability purposes. Of these 535, we determined the departure date for 443 using data from the DHS Arrival and Departure Information System (ADIS). 
To do so, we manually matched participant names in ATA data with names in DHS departure data using dates of birth to help ensure that ATA participant records and DHS departure data pertained to the same individual. As noted above, the results for this subset of 443 participants, for whom we could obtain birth dates and departure data, are sufficiently reliable to report on the length of stay in the United States after these participants completed training but cannot be generalized to the other 92 participants for whom we found birth dates but not departure records, or to the nearly 2,200 for whom we did not find birth dates, or to participants who were not included in ATA’s participant data system. Therefore, we cannot infer that all participants who were trained in the United States and subsequently departed did so following the patterns we report for this subset. For each of the 443 participant records for which we identified departure data, we used the date identified in the ATA participant data system as the final day of training and the departure date from ADIS to calculate the number of days that each participant remained in the United States following the conclusion of ATA training. In addition to our analysis of length of stay following the conclusion of training, we asked DHS to identify if any of the 535 participants for whom we identified dates of birth appeared in DHS systems used to manage applications for changes in immigration status and investigations of individuals who violated the terms of their admission to the United States. DHS did not find any exact matches for these 535 ATA participant names and dates of birth in related systems. We and DHS acknowledge that the searches for exact matches are limited for several reasons, including potential differences in the spellings of names translated from foreign languages. Regional training facilities are outside the scope of this review. 
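The length-of-stay calculation described above is simple date arithmetic between the final day of training and the ADIS departure date. A minimal sketch, using hypothetical dates:

```python
from datetime import date

def days_after_training(last_training_day: date, departure: date) -> int:
    """Days a participant remained in the United States after the final
    day of training; 0 means departure on the last training day."""
    delta = (departure - last_training_day).days
    if delta < 0:
        raise ValueError("departure precedes end of training; record needs review")
    return delta

# Hypothetical dates: training ends June 6, departure June 8 -> 2 days,
# which would fall in the within-2-days group reported for 87 percent
# of the 443 analyzed participants.
stay = days_after_training(date(2014, 6, 6), date(2014, 6, 8))
```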
State officials told us they were unaware of participants who did not return to their home countries following training at regional centers. With respect to our reporting on ATA’s analysis of the departure status of ATA participants trained at domestic facilities in fiscal years 2012 through 2016, we reviewed State’s report on the results of its analysis and discussed the analysis with knowledgeable State and DHS officials. We asked DHS to provide documentation confirming the status of the participants whom ATA identified who may have remained in the United States following the conclusion of ATA training and analyzed the information provided in response. We used ATA’s analysis and DHS’s additional information to provide insights into participants for whom there were no departure records. We noted that ATA’s analysis and results included only participants included in ATA’s electronic participant data, which we determined to be incomplete. We also provided information in the background section of this report about funds allocated to ATA activities. To do so, we assessed funding data, including allocations, obligations, and disbursements for fiscal years 2012 through 2016 from Nonproliferation, Anti-terrorism, Demining, and Related Programs (NADR) funding for ATA, NADR/ATA Overseas Contingency Operations, Global Security Contingency Fund, and International Narcotics Control and Law Enforcement accounts. State provided data on allocations, amounts reallocated, unobligated balances, unliquidated obligations, and disbursements of funds for program activities. We analyzed these data to determine the extent to which allocated funds had been disbursed. We also discussed the status of these funds, including the extent to which any had expired and were no longer available for obligation, with State officials. We assessed the reliability of these data by interviewing cognizant agency officials and comparing the data with previously published data. 
We determined that the data were sufficiently reliable for our purposes. We conducted this performance audit from May 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative evaluation work—site surveillance—in accordance with investigation standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. In addition to the contact named above, Jason Bair, Kathryn Bolduc (Analyst-in-Charge), Ashley Alley, Kathryn Bernet, Debbie Chung, Martin de Alteriis, April Gamble, Rebecca Gambler, Rachel Girshick, K. Ryan Lester, Wayne McElrath, Ramon Rodriguez, Alex Welsh, Helina Wong, and Bill Woods made key contributions to this report. Neil Doherty also provided technical assistance.
State's ATA program aims to enhance foreign partners' capabilities to prevent acts of terrorism, address terrorism incidents when they do occur, and apprehend and prosecute those involved in such acts. In fiscal years 2012 through 2016, State allocated about $715 million to the ATA program, which it reports to have used to train about 56,000 security force officials from more than 34 partner nations. At least 2,700 of those participants were trained at facilities in the United States. GAO was asked to review ATA program management. This report examines, among other objectives, (1) State's ability to oversee ATA participants trained in the United States and (2) the steps State has taken to ensure that facilities used for domestic ATA training align with applicable security requirements. GAO conducted fieldwork at two domestic training facilities selected because they provide tactical training; analyzed State and DHS data and documentation related to fiscal year 2012 through 2016 domestic training participants; and interviewed State and DHS officials, including those who oversee ATA training for three partner nations receiving significant ATA training. GAO also interviewed contractors who help implement the ATA program and analyzed related documents. Weaknesses exist in Department of State (State) Antiterrorism Assistance (ATA) program data and oversight of participants, including those trained in the United States. ATA course and participant data are incomplete and sometimes inaccurate, despite ATA's procedures for the collection of those data. ATA officials told GAO that procedures were not always followed. Without ensuring the implementation of procedures to collect complete and accurate program data, officials may not be able to accurately report the number of participants trained, in line with program performance indicators. 
Among participants trained in the United States since 2012, ATA has documented 10 unauthorized departures by participants from ATA activities and provided related information to the Department of Homeland Security (DHS) for follow-up. In addition to these 10, ATA recently identified 20 ATA participants trained in fiscal years 2012 through 2016 for whom departure from the United States following the completion of training is unconfirmed. ATA officials told GAO there is no formal process to confirm participants' return to their home countries following the completion of training (see fig.). Without such a process, ATA may not be able to assess the extent to which participants are making use of training in line with program goals. Further, State may not be able to provide information to DHS about participants whose failure to depart may warrant enforcement action. State and the contractors who implement ATA training have taken steps to ensure that facilities used for domestic training align with applicable security requirements. State's ATA training contract requires the secure storage of weapons and explosives and that the contractors have the relevant federal, state, and local permits. State reports overseeing the contractors through the receipt of copies of relevant licenses such as those required for possessing explosives; visits to the training facilities, including surveys examining storage security; and frequent meetings. Both of the domestic tactical training facilities that GAO visited had relevant licenses and, during site visits, GAO observed some suggested security measures, including fences, secured gates, and security patrols. State should ensure implementation of its data collection procedures and establish a process to confirm and document participants' return to their home countries. State agreed with both recommendations.
As the Army transitions away from major wartime operations, it faces fiscal constraints and a complex and growing array of security challenges. The Army will be smaller, and senior leaders recognize that the core of a smaller yet still highly capable force is a capable tactical information network. Over the last decade, the Army focused most of its decisions to field network improvements on supporting operations in Iraq and Afghanistan, an effort that was both expensive and time-consuming. The Army did not synchronize the development and fielding efforts for network technologies. Funding and time lines for network-related programs were rarely, if ever, aligned. The Army fielded capabilities in a piecemeal fashion, and the user in the field was largely responsible for integrating them with existing technology. In December 2011, Army leaders finalized the Network-enabled Mission Command Initial Capabilities Document, a central document that describes the essential network capabilities required by the Army as well as scores of capability gaps. These capabilities support an Army mission command capability defined by a network of command posts, aerial and ground platforms, manned and unmanned sensors, and dismounted soldiers linked by an integrated suite of mission command systems. A robust transport layer capable of delivering voice, data, imagery, and video to the tactical edge (i.e., the forward battle lines) connects these systems. To achieve the objectives of its network modernization strategy, the Army is changing the way it develops, evaluates, tests, and delivers networked capability to its operating forces, using an approach called capability set management. A capability set is a suite of network components, associated equipment, and software that provides an integrated network capability. A requirement is an established need justifying the allocation of resources to achieve a capability to accomplish military objectives. 
Instead of developing an ultimate capability and buying enough to cover the entire force, the Army plans to buy only what is currently available, feasible, and needed for units preparing to deploy. Every year, the Army will integrate another capability set that reflects changes or advances in technology since the previous set. To support this approach, the Army is implementing a new agile process that identifies capability gaps and solicits solutions from industry and government to evaluate during the NIEs. NIEs are a significant investment. Since 2011, the Army has conducted five of them and has projected the cumulative cost of the events at $791 million. The Army conducts NIEs twice a year. Each NIE typically involves around 3,800 soldiers, 1,000 vehicles, and up to 12,000 square kilometers of territory and lasts approximately 6 weeks. The two categories of key participating systems during the NIEs are Systems under Test (SUT) and Systems under Evaluation (SUE), and each is subject to differing levels of scrutiny. SUTs come from an ongoing acquisition program (sometimes referred to as a program of record) and are formally determined to be ready for operational testing in order to inform an acquisition decision. This operational testing is subject to review and is conducted with the production or production-like system in realistic operational environments, with users that are representative of those expected to operate, maintain, and support the system when fielded or deployed. SUEs are provided by either industry or the government. They are either (1) developing capabilities with sufficient technology, integration, and manufacturing maturity levels to warrant NIE participation; or (2) emerging capabilities that are seen as next generation war-fighting technologies that have the potential to fill a known gap or improve current capabilities. SUEs are not subject to formal test readiness reviews, nor to the same level of testing as the SUTs. 
SUEs are operationally demonstrated and receive a qualitative user evaluation, but are not operationally tested and are not the subject of a formal test report (as SUTs are). Aside from their role in the agile process, NIEs also provide the Army with opportunities for integration, training, and evaluation that leads to doctrine, organization, training, materiel, leadership and education, personnel, and facilities recommendations; and the refinement of tactics, techniques, and procedures related to the systems tested. The Army believes that traditional test and evaluation processes frequently result in fielding outdated technologies and expects to improve on those processes through the NIEs. The Army’s test community members, including the Brigade Modernization Command (BMC) and the Army Test and Evaluation Command (ATEC), conduct the testing during the NIEs. The BMC is a headquarters organization within the Training and Doctrine Command. It has an attached operational 3,800-soldier brigade combat team dedicated to testing during the NIEs. BMC soldiers use systems during the NIE in simulated combat scenarios for testing and evaluation purposes, resulting in qualitative evaluations based on their observations. The BMC also recommends whether to field, continue developing, or stop developing each solution and to improve the integration of capabilities into deploying brigades. ATEC has overall responsibility for the planning, conduct, and evaluation of all Army developmental and operational testing. ATEC also produces a qualitative assessment of the overall performance of the current capability set of network equipment. Two test offices within the Office of the Secretary of Defense that help inform Defense Acquisition Executive decisions also provide oversight on testing related to major defense acquisition programs. The Director, Operational Test and Evaluation (DOT&E) provides oversight of operational testing and evaluation for SUTs. 
The Deputy Assistant Secretary of Defense for Developmental Test and Evaluation (DT&E) provides oversight of developmental testing that precedes operational testing of SUTs. DOT&E and DT&E roles are limited to the SUTs selected for operational testing during the NIEs. Test and evaluation is a fundamental aspect of defense acquisition. DOD, under its Defense Acquisition System, requires the integration of test and evaluation throughout the defense acquisition process to provide essential information to decision makers; assess attainment of technical performance parameters; and determine whether systems are operationally effective, suitable, survivable, and safe for intended use. Testers generally characterize test and evaluation activities as either developmental or operational. Developmental testing is a generic term encompassing modeling and simulation and engineering-type tests that are used to verify that design risks are minimized, that safety of the system is certified, that achievement of system technical performance is substantiated, and that readiness for operational test and evaluation is certified. The intent of developmental testing is to demonstrate the maturity of a design and to discover and fix design and performance problems before a system enters production. Operational testing is a field test of a system or item under realistic operational conditions with users who represent those expected to operate and maintain the system when it is fielded or deployed. Specific operational tests include limited user tests, initial operational tests, and customer tests. Before operational tests occur for major acquisition programs, DT&E completes an independent Assessment of Operational Test Readiness. Each Assessment of Operational Test Readiness considers the risks associated with the system’s ability to meet operational suitability and effectiveness goals. This assessment is based on capabilities demonstrated in developmental testing. 
The Defense or Component Acquisition Executive considers the results of the Assessment of Operational Test Readiness, among other inputs, in making decisions on a major acquisition program proceeding to operational testing. The Army has made steady improvements in the NIE process since its inception, and the evaluations continue to give the Army useful information and helpful insights into current and emerging networking capabilities. However, some resulting Army decisions are at odds with knowledge produced during the NIEs. Most importantly, despite poor operational test results for a number of SUTs during the NIEs, the Army has sought approval to buy additional quantities and field several major networking systems. While many of the SUEs received favorable reviews, the Army lacked a strategy to address a number of procurement barriers—such as funding availability and requirements—when it began the NIE process, which precluded rapid procurement of successful SUEs. Additionally, as we reported previously, the Army has not yet tapped into the potential to use the NIE to gain insight into the effectiveness and performance of the overall tactical network. To date, the Army has conducted five NIEs, costing an average of $158 million to plan and execute. Through those five NIEs, the Army has operationally tested 19 SUTs and evaluated over 120 SUEs. NIEs have helped the Army in a number of ways. 
The NIEs allowed the Army to formulate a network architecture baseline that will serve as the foundation upon which the Army plans to add networking capabilities in the future; evaluate industry-developed systems that may help address Army-identified capability gaps in the future; integrate the new capability sets into operational units and create new tactics, techniques, and procedures for using the new systems in operations; and provide soldiers with an opportunity both to provide input into the designs of networking systems and to integrate the systems before the Army fields them to operational brigades. According to Army officials, testing during each NIE generates a large volume of potentially useful information. There are detailed operational test and evaluation reports for each of the SUTs, user evaluations for each of the SUEs, an integrated network assessment of the current capability set, and general observations on the NIE event itself. DOT&E has reported observations of the NIEs in its fiscal years 2011 and 2012 annual reports, including an overall assessment, operational scenarios and test design, threat information operations, and logistics. According to DOT&E, the intended NIE objective to test and evaluate network components together in a combined event is sound, as is the opportunity to reduce overall test and evaluation costs by combining test events. NIEs also offer the opportunity for a more comprehensive evaluation of a mission command network instead of piecemeal evaluation of individual network components. In addition, DOT&E generally reported overall improvements in the execution of the NIEs, realistic and well-designed operational scenarios, and improvements in threat information operations. ATEC, in addition to preparing operational test reports for specific systems, also prepares an integrated network assessment after each NIE. 
The reports attempt to characterize how well the current capability set performed with respect to several essential capabilities the Army needs for improved mission command. Based on the performance characterizations presented in the available reports for all NIEs, it appears the Army is making progress in improving its networking capabilities. For instance, the integrated network assessments for NIEs 12.2 and 13.1 cited improvements in an essential capability called network operations. These reports also showed improvements in the common operating picture, which is a capability that enables the receipt and dissemination of essential information to higher echelon command posts. As the Army has modified the reports to improve how they present both capability set performance and essential capabilities, the reports have become more useful tools for decision makers. Four SUTs that the Army plans to buy and field as part of capability set 13—Warfighter Information Network-Tactical (WIN-T) Increment 2, Joint Tactical Radio System Manpack Radio, Joint Tactical Radio System Rifleman Radio, and Nett Warrior—have demonstrated continued poor performance, poor reliability, or both in developmental tests before the NIEs and in operational tests during the NIEs. According to DOT&E, system development best practices dictate that a system should not proceed to operational testing until it has completed developmental testing and corrected any identified problems. To address these problems, the Army has taken steps to implement design changes and schedule additional testing to verify performance after it has implemented those changes. However, in doing so, the Army faces the risk of making system design changes during the production phase or fielding systems with less than required performance or reliability. Two of these SUTs performed poorly during developmental testing. 
Developmental testers, through their Assessment of Operational Test Readiness reports, recommended that the Manpack Radio and the Rifleman Radio not proceed into operational testing. Despite these recommendations, the Army proceeded with initial operational testing for these systems during NIEs while reclassifying the participation of other systems as either limited user tests or customer tests. The outcomes were predictably poor, according to DOT&E. See table 1 for operational test results from ATEC and DOT&E reports. In its 2012 annual report, DOT&E pointed out that proceeding to operational testing only confirmed the deficiencies identified in developmental testing. For example, the WIN-T Increment 2 system’s reliability was troublesome enough in a limited user test to warrant a reduction in the reliability requirement. However, WIN-T Increment 2 was unable to meet the reduced requirement. The Rifleman Radio also demonstrated poor reliability during developmental testing in 2011 and even worse reliability in operational testing due to the enhanced stress of an operational environment. The DOT&E stated in its 2012 annual report that, according to system development best practices, the Army should not proceed to an Initial Operational Test and Evaluation with a system until it has completed developmental testing and the program has corrected any identified problems. Otherwise, the Army may conduct costly operational tests that simply confirm developmental testing conclusions about poor system performance and reliability rather than taking action to fix system shortfalls. Further, DOT&E’s 2012 annual report was critical of the Army’s NIE schedule-driven approach, which elevates meeting a schedule above adequately preparing a system to achieve success in operational testing. An event-driven approach, conversely, would allow systems to participate in a test event after the systems have satisfied certain criteria. 
Under the Army’s schedule-driven approach, the NIEs are held twice a year and SUTs must align their operational testing to coincide with the next available NIE. An event-driven approach, rather than a schedule-driven approach, is the preferred method of test scheduling. Using a schedule-driven approach can result in fielding systems that do not provide adequate utility for soldiers and require costly and time-consuming modification in theater. In light of poor operational test results during previous NIEs, the Army now must pay for and conduct additional, unanticipated tests to improve system performance and reliability. The extent to which the additional tests corrected all of the identified problems is unknown at this time, as the Army awaits the results of the operational testing conducted at the most recent NIE. Ideally, the Army would demonstrate greater levels of operational effectiveness and suitability prior to making production and fielding decisions. Both GAO and DOT&E have acknowledged the risks of proceeding through testing, and to procurement, with systems that perform poorly. Such systems often require design changes after they are already in production, which can be more costly and technically challenging. Table 2 summarizes the additional activities required of selected systems. In addition to the unplanned testing summarized in table 2, several systems have operational test and evaluation events scheduled. See table 3. Despite the poor test results and unplanned activities intended to improve SUT performance, the Army has begun fielding SUTs for capability set 13, including WIN-T Increment 2, Joint Tactical Radio System (JTRS) Manpack radio, Rifleman Radio, and Nett Warrior. Without disputing the test findings and their implications, Army leadership indicates that this equipment addresses critical capability shortfalls and operational needs by providing some level of capability that is otherwise unavailable. 
For example, most deployed units previously had no or very limited capabilities other than voice communications. Consequently, the Army believes it is urgent to modernize deploying units as quickly as possible with the equipment in capability set 13. The Army’s approach carries risk. DOT&E has indicated that the principal way of operating a less reliable system is to invest more in recurring maintenance, which will enable the system to function but will add to the program’s life-cycle costs and increase its logistical support needs. As a result, the Army will likely have to work with a system that is less reliable than originally envisioned and develop a new life-cycle cost estimate that reflects the added costs associated with the increased contractor support needed to keep this less reliable system operating. In addition, ATEC officials state that the negative impact of an individual system falling short of its reliability target is magnified in the capability set. This approach can result in fielded systems that do not provide adequate utility for soldiers and require costly and time-consuming modification in theater as well as additional testing. Our past work, as well as reports from DOT&E and DT&E, has found benefits from adequate developmental testing to prove system performance prior to fielding. Since the first NIE in 2011, the Army has evaluated more than 120 SUEs from both industry and government, many of which have received positive reviews and recommendations for fielding from the soldiers. However, the Army has been unable to buy many of these systems because it did not have a strategy in place to rapidly buy promising technologies. Army officials explained that existing DOD acquisition processes would not allow the Army to quickly acquire SUEs that could immediately address networking capability gaps. Even so, Army officials did not develop alternative acquisition approaches before they began the NIE process. 
It is unclear how long industry will continue to participate in the NIEs if the Army is unable to begin buying systems. As discussed later in this report, the Army has now developed new approaches to address barriers to its ability to quickly buy and field SUEs that have successful demonstrations during the NIEs. Many SUEs have received positive reviews from soldiers at the NIEs—about five out of every six SUEs received a recommendation to field, to field and continue development, or for a potential follow-on assessment. Table 4 shows the range of soldiers’ recommendations. To date, the Army has decided to buy only three SUEs: a company command post, which is a collection of capabilities that enhances a company commander’s ability to plan, execute, and monitor operations; a touch screen-based mission command planning tool; and an antenna mast. The Army will field only one of these systems in capability set 13: the company command post. While Army officials tell us they would like to buy more systems, a number of factors—such as available funds, deployment schedules, system maturity, and requirements—determine which systems they can buy and when they can buy them. Because it did not have a strategy during the NIEs to address these factors, the Army has been limited in its ability to buy successfully demonstrated SUEs. The Army expects industry participants to fully fund their own involvement and initial participation in the process and the NIEs, which can be a costly endeavor. Army officials have said it can cost up to $250,000 for an interested contractor to provide a whitepaper for consideration. These whitepapers, which interested contractors submit to the Army in response to a sources sought notice, are the industry contractor’s first opportunity to explain both their system and how it addresses a particular capability gap. 
The Army releases a sources sought notice to industry to solicit candidate commercial solutions for network/non-network capability gaps, and the notice informs potential responders of evaluation criteria and subsequent NIE participation criteria. Participation in later phases of the agile process, and ultimately in an NIE, can cost a contractor an estimated $1 million, depending on the system the Army is evaluating. Because of the limited number of successfully demonstrated SUEs that the Army has purchased to date, and the cost associated with industry participation, there is concern that industry may lose interest. This could be especially problematic for the Army’s agile process, which, according to the Army, is heavily dependent on industry participation for success. Army officials remain confident in the continued support of industry, but the depth and longevity of this support is unclear at this time. While the NIEs are a good source of knowledge for the tactical network as a whole, the Army has not yet tapped into that potential. In January 2013, we reported that the Army had not yet set up testing and associated metrics to determine how network performance has improved over time, which limited the evaluation of the cost-effectiveness of its network investments. After completing each NIE, ATEC has provided an integrated network assessment of how well the current capability set enables the execution of the mission command essential capabilities. This qualitative assessment includes only the impact of the current capability set—and not the entire network—on the essential capabilities and does not attempt to evaluate the cost-effectiveness of the current capability set. The Army and DOD consider the fielding of capability set 13 as the initial output from the Army’s network modernization portfolio, but the Army has yet to fully define outcome-based performance measures to evaluate the actual contributions of the capability set. 
Establishing outcome-based performance measures will allow the Army and DOD to assess the progress of network development and fielding and be in a position to determine the cost-effectiveness of their investments in capability set 13. We recommended that, among other things, the Secretary of Defense direct the Secretary of the Army to define an appropriate set of quantifiable outcome-based performance measures to evaluate the actual contributions of capability set 13 and future components under the network portfolio. As discussed later in this report, DOD has started to develop metrics in response to our earlier recommendation. The Army is taking action to correct inefficiencies and other issues based on lessons learned from previous NIEs. The Army is also planning to address potential barriers to rapid procurement of successful SUEs, and DOD has started the process to implement our earlier recommendations on network metrics. Many of the initiatives are in the early stages of implementation, so outcomes are not certain. The Army also has an opportunity to work more closely with the test community to further improve NIE execution and results. The Army has identified inefficiencies or less-than-optimal results in its network modernization and the NIE process and has begun implementing corrective actions to mitigate some of them. Table 5 shows some of the major issues identified by the Army and the corrective actions, which are in early stages of implementation. The Army’s lab-based risk reduction, currently under way, seeks to address concerns that too many immature SUEs were sent to past NIEs. Through this initiative, the Army performs technology evaluations, assessments, and integration of vendor systems. Officials test systems individually and as part of an integrated network so that problems can be identified before proceeding to an NIE. 
In some cases, Army officials identify changes for these systems to increase the likelihood of their success during an NIE, while the Army drops others when they do not perform well enough in lab testing. Since this effort began, the Army has reduced the number of systems it evaluates during the NIEs, indicating the Army may be making soldiers’ NIE workloads more manageable. While Army officials acknowledge that lab-based risk reduction does not eliminate all risks, this early evaluation of new systems seems to address some concerns. It may reduce the number of immature systems in the NIE, which could help the Army train soldiers for the new systems. Sending only mature SUEs that have gone through integration testing to NIEs could also help avoid certain test costs. Additionally, to reduce costs, improve the results of NIEs, and better support rapid fielding of new network capabilities, the test community has reported on several issues requiring corrective action by the Army. The testers have also taken actions to help reduce redundancies in test data collection processes, among other things. Implementation of these corrective actions, which testers identified during earlier NIEs, could help prevent negative impacts to NIE testing and modernization. Table 6 describes a number of major issues identified by the test community and the related corrective actions. Most of these corrective actions are in early stages of implementation. Below are additional details about the status of a few of the key initiatives. Army test officials anticipate avoiding $86 million in NIE costs due to implementation of a dozen different efficiency initiatives, including making NIEs more efficient by eliminating duplicative surveys, consolidating data systems, refining SUE test data delivery processes, reducing reliance on contractor data collectors by using military personnel more, and automating data collection. 
Additionally, BMC officials indicated they intend to incorporate additional testing and reduce the number of soldiers involved in future NIEs to help reduce testing costs. Over time, as the Army conducts NIEs more efficiently, it plans to reduce the number of test personnel, realize commensurate salary savings, and reduce engineering expenses. Training and guidance for soldiers using new systems during the NIE is another area receiving attention from the test community and the Army. Army test officials reported that there were gaps in soldier training for the SUEs to be evaluated in NIE 13.1. The training issues, in turn, affected the usefulness of the subsequent system evaluations. DT&E officials also expressed concerns about soldier training and said problems exist in the rehearsal phase of the NIE process. Brigade Combat Team officials said they have also experienced a lack of training resources as they prepare to deploy overseas. According to Army officials, a lack of complete training information, tactics, techniques, and procedures is hampering soldier training on new network systems. That experience was somewhat mitigated, however, by help from soldiers who had used these systems during earlier NIE events. It will be important for the Army to resolve training issues before operational testers can qualify systems as fully suitable for combat use. Given that operations and support can often comprise about two-thirds of life-cycle costs, a good understanding of these requirements and costs will be necessary for the Army to make well-informed investment decisions for new equipment. Assessing and using lessons learned from experience can help in planning and implementing future activities. 
The Army’s efforts to reduce costs and implement corrective actions may take several years; therefore, a continued focus on making NIE processes more efficient and effective, as well as documentation of the results of corrective actions, would better support the Army’s business case for conducting future NIEs. The Army is developing a two-pronged approach to address barriers to its ability to quickly buy and field SUEs that have successful demonstrations during the NIEs. According to Army officials, these barriers included a lack of well-defined requirements for the network system (instead of the more general capability gaps); a lack of funding; and lengthy time frames needed to complete the competitive procurement process. The Army found that the processes for translating capability gaps into requirements, identifying specific funding, and completing a competitive procurement can be very time-consuming and challenging. The Army is now developing a strategy to address these barriers. After the NIE, if the Army decides to buy and field a SUE, the Army plans to align that capability with a suitable existing requirement within an ongoing program of record. The selected program manager would then identify buying options for the capability, including the feasibility of using an existing contract, and would determine whether (1) funding is available, (2) the Army should identify the capability as an unfunded requirement, or (3) the Army needs an above-threshold reprogramming action. The program manager would also determine whether the Army can buy and field the capability in the capability set or identify what capability is achievable. Army officials plan to implement this new strategy in the coming months. In cases where the Army cannot align the successful SUE with an existing program of record, it could develop a new requirement for the system. Army officials have indicated that in a small number of cases, the Army could utilize a directed requirement. 
The Army generally develops and approves directed requirements to fill urgent needs that the Army believes should be fielded as soon as possible. This allows the Army to essentially bypass the regular requirements processes, which require additional time to complete. In addition to this strategy, the Army has developed a new NIE acquisition plan that features an alternative means to rapidly buy successful SUEs. Under this new plan, the Army is using a combined sources sought notice and request for proposals approach to better shape requirements and allow for buying SUEs in less time than under normal acquisition processes. With two NIEs per year, the Army will continue to use a sources sought notice to solicit government and industry solutions to broadly defined capability gaps and will assess those solutions during an NIE. Then, the Army will use lessons learned and soldier feedback from the first NIE to validate and refine the requirement and issue a request for proposals for participation in a future NIE. Using a request for proposals differs from using a sources sought notice because the request for proposals approach culminates in the award of indefinite-delivery, indefinite-quantity contracts for industry SUEs to participate in a future NIE. Using an indefinite-delivery, indefinite-quantity contract allows the Army to place production orders for industry SUEs following the NIE. The Army released the first request for proposals supporting an NIE on December 20, 2012, to solicit vehicle tactical routers for NIE 14.1. Vehicle tactical routers would allow users and systems located nearby to access networks securely. For SUEs that already have a defined requirement, the Army plans to issue a request for proposals for participation in one NIE, without using a sources sought notice first. However, Army officials concede that a defined requirement is not usually available prior to the NIE. 
In those cases, the Army plans to continue issuing sources sought notices for industry-proposed solutions that the Army will evaluate during an NIE, as a precursor to issuance of a request for proposals in the future. The Army expects to comply with current DOD acquisition policy when it decides to buy systems that proceed through the agile process. However, the Army may propose changes to existing policy and processes that inhibit realization of the full benefits of the agile process. As the Army implements this strategy over the coming months, it will be important to gather information on how well the strategy works and how rapidly the Army can procure and field a SUE after its successful demonstration during an NIE. At the same time, the Army will be in a better position to determine how much of its constrained budget it can devote to the procurement of SUEs. As recommended under internal control standards, it will be important for the Army to establish specific measures and indicators to monitor its performance and validate the propriety and integrity of those performance measures and indicators. This type of information—on how many SUEs the Army can buy and how rapidly—would be helpful for industry as it makes decisions on its future participation in the NIE process. In our initial report on the Army’s tactical network, we concluded that it will also be important for the Army to assess the cost-effectiveness of individual initiatives before and during implementation. Moreover, to facilitate oversight, we concluded that it is important for the Army and DOD to develop metrics to assess the actual contributions of the initial capability set the Army will field in fiscal year 2013 and use the results to inform future investments. 
According to a key DOD oversight official reporting on Army networks to the Under Secretary of Defense for Acquisition, Technology, and Logistics, DOD has started work to define quantifiable outcome-based performance measures for the Army tactical network. In addition, both DOD and Army officials indicated they are planning to develop a preferred end-to-end performance projection for the Army tactical communications network and intend to quantify the performance needed in terms of voice, data, and so forth, and by network tier, sector, and subnet. Officials plan to define levels of performance for benign and conflict environments and the waveforms and radios soldiers will need for each tier as well as their specific performance characteristics. Although this effort is in its early stages, this DOD oversight official stated that the NIE is expected to generate data on performance of the network as a whole, which could then be compared to the expected performance demand. Separately, the Army is beginning to prepare qualitative assessments of the progress it is making in filling capability gaps related to mission command essential capabilities. For example, ATEC has prepared an integrated network assessment after each NIE, which characterizes the level of capability achieved against the mission command essential capabilities. In addition, the Army has prepared a limited assessment of how capability set 13 will meet mission command essential capabilities. Once the performance measures are in place and the Army evaluates the delivered capabilities against those measures, the Army will have the tools to evaluate the progress it is making and make any necessary adjustments to its investment strategy. The Army’s network strategy features a variety of different approaches to testing and evaluation to accommodate the rapid pace of technology change and to reduce the cost and time of acquisition. 
The Army has worked closely with the test community to plan, conduct, and evaluate the NIEs. Also, as mentioned earlier, the test community has taken a number of actions to reduce the costs of planning and executing the NIEs. At the same time, the test community has been meeting its responsibility to objectively report on the tests and the results. However, test results for several network systems at the NIEs that did not meet operational and other requirements will result in added time and expense to address identified issues. An inherent value of testing is pointing out key performance, reliability, and other issues that need to be addressed as soon and as economically as possible, not after fielding. DOT&E has stated that the schedule-driven nature of the NIEs contributes to systems moving to testing before they have met certain criteria. Tension between the acquisition and testing communities has been long-standing. In that regard, the Defense Acquisition Executive recently chartered an independent team to assess concerns that the test community’s approach to testing drives undue requirements, excessive cost, and added schedule into programs and results in a state of tension between program offices and the testing community. One area the Defense Acquisition Executive assessment identified for improvement was the relationship and interaction among the testing, requirements, and program management communities. In that regard, the memorandum reporting the results called attention to four specific issues those communities need to address: the need for closer coordination and cooperation among the requirements, acquisition, and testing communities; the need for well-defined, testable requirements; the alignment of acquisition strategies and test plans; and the need to manage the tension between the communities. Concurrently, a systematic review of recent programs by DOT&E and DT&E examined the extent to which testing increases costs and delays programs. 
The results of both efforts indicated that testing and test requirements by themselves do not generally cause major program delays or increase costs. In addition, the Defense Acquisition Executive found no significant evidence that the testing community typically drives unplanned requirements. Further, according to the DOT&E fiscal year 2012 annual report, three specific areas exist where increased test community interactions could result in improved test outcomes, enabling systems with needed and useful combat capability to be delivered to our forces more quickly. These include developing mission-oriented metrics to evaluate each system within the context in which it will operate; leveraging test and evaluation knowledge in setting requirements; and evaluating the multiple conditions in which the system is likely to be operated. Additional opportunities exist for leadership of the Army and the test community to work together to further improve NIE execution and results. A good starting point would be for the Army to consider addressing the test community observations and recommendations from previous NIEs. Those included the schedule-driven nature of NIEs, the lack of well-defined network requirements, and the lack of realistic battlefield maintenance and logistical support operations for SUTs during the NIEs. The Army is not required to and has not directly responded to the test community about its NIE observations and recommendations. Nevertheless, per internal control standards, managers are to, among other things, promptly evaluate findings from audits and other reviews, including those showing deficiencies and recommendations reported by auditors and others who evaluate agencies’ operations. In doing so, the Army may not only improve NIE execution and results but also reduce the tensions with the test community. 
With a sizable investment of an estimated $3 billion per year to modernize its tactical network, the Army is investing over $150 million per NIE to help ensure that those planned development and procurement investments result in the expeditious delivery of increased capabilities to the warfighter. The main product of the NIEs is knowledge. The Army has not consistently recognized, accepted, and acted upon the knowledge gained from the NIEs. On the one hand, the Army’s fielding decisions to date seem driven by a pre-determined schedule rather than operational test results. Fielding individual systems that have done poorly during operational tests carries a risk of less-than-optimal performance, with the potential of costly fixes after fielding and increased operating and sustainment costs. Moreover, performance and reliability issues of individual systems could be magnified when these systems become part of an integrated network. On the other hand, even with a new strategy for procurement of emerging capabilities to fill capability gaps, the Army may still face an expectation gap with industry. The current constrained budget environment and the level of funding already allocated to ongoing network acquisition programs may leave little funding to procure new networking technologies. Until it has clearly demonstrated the means to rapidly buy and field emerging capabilities and provided this information to industry, the Army may need to manage industry expectations of how many new networking systems it can buy and how rapidly. The Army has implemented some lessons learned from planning and executing the NIEs. However, as part of a knowledge-based approach to its broader network modernization strategy, the Army should also be open to considering observations from all sources to improve process efficiency and achieve improved outcomes. 
We believe that the Army can and should collaborate more extensively with the test community on a variety of issues that could improve NIE outcomes. For example, as part of its responsibility to objectively conduct tests and report on their results, the test community has provided reports, observations, and recommendations before and following NIEs. To date, the Army has not directly responded to the test community’s observations and recommendations on the NIEs. To improve outcomes for its entire network modernization strategy, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following four actions: Require that network systems from major defense acquisition programs obtain a positive Assessment of Operational Test Readiness (now called a Developmental Test and Evaluation Assessment) recommendation before being scheduled for operational testing during the NIE; Correct network system performance and reliability issues identified during the NIEs before moving to buy and field these systems; Provide results to industry on the Army’s actual experience in buying and fielding successfully demonstrated systems under evaluation and the length of time it has taken to date; and Collaborate with all network stakeholder organizations to identify and correct issues that may result in improved network outcomes, including addressing the observations and recommendations of the test community related to the NIEs. DOD’s written response to this draft is reprinted in appendix II. DOD also provided technical comments that were incorporated as appropriate. 
DOD partially concurred with our recommendations that the Army (1) require network systems obtain a positive Assessment of Operational Test Readiness (now called a Developmental Test and Evaluation Assessment) recommendation before being scheduled for operational testing during the NIE and (2) correct network system performance and reliability issues identified during the NIEs before moving to buy and field these systems. In both cases, DOD states that processes are already in place to address these issues and that the recommendations as written take flexibility away from the Department. We disagree. Our findings indicate that DOD is not using its current processes effectively to evaluate a system’s readiness to begin operational testing. While there may be instances where the Army uses operational testing to obtain feedback on system performance, DOD’s system development best practices dictate that a system should not proceed to operational testing until it has completed developmental testing and corrected any identified problems. The NIEs are a good forum for the Army to generate knowledge on its tactical network. However, NIEs are a large investment and DOD and the Army should strive to optimize their return on that investment. Approving network systems for operational testing at the NIEs after having poor developmental test results may not be the best use of NIE resources because of the strong correlation between poor developmental test results and poor operational test results. Moreover, it is much more cost effective to address performance and reliability issues as early as possible in the system development cycle and well in advance of the production and fielding phases. As we note in the report, DOD and the Army have been pursuing a schedule-based strategy for network modernization rather than the preferred event-based strategy where participation in a test event occurs after a system has satisfied certain criteria. 
DOD concurred with our recommendation that the Army provide results to industry on how many successfully demonstrated systems under evaluation have been procured to date and how long it has taken for the procurements. However, DOD did not offer specific steps it would take to provide this information or a proposed timeframe. Because of the importance of continued industry participation in the development of the Army network, we think that it is important for industry to have a clear picture of the Army’s success in rapidly buying and fielding emerging technologies. Finally, DOD concurred with our recommendation that the Army collaborate with all network stakeholder organizations to identify and correct issues that may result in improved network outcomes, including addressing the observations and recommendations of the test community related to the NIEs. DOD states that a collaborative environment with all stakeholders will assist in identifying and correcting issues and that the forum for doing so is the semiannual Network Synchronization Working Group. We agree that a collaborative environment is important in responding to previous test community observations and recommendations and would expect the Working Group to address these issues. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Army, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Belva Martin at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Our objectives were to evaluate (1) the results of the Network Integration Evaluations (NIE) conducted to date and identify the extent to which the Army has procured and fielded proposed network solutions; and (2) Army actions and additional opportunities to enhance the NIE process. To address these objectives, we interviewed officials from the Army’s System of Systems Integration Directorate; the Deputy Chiefs of Staff, G-3/5/7 and G-8; the Army Brigade Modernization Command; and the Army Test and Evaluation Command. We met with representatives of Army Brigade Combat Teams preparing for deployment. We also interviewed officials from the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation; the Director, Operational Test and Evaluation; and the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. We visited the Lab Based Risk Reduction facility at Aberdeen Proving Ground, Maryland, and the NIE test site at White Sands Missile Range, New Mexico, to meet with soldiers and civilian officials conducting testing. To examine the results of NIEs conducted to date, we attended Network Integration Evaluations and reviewed test reports from the Brigade Modernization Command, U.S. Army Test and Evaluation Command, the Director of Operational Test and Evaluation, and the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation. We reviewed briefing presentations for Army leadership that discuss test results and recommendations, and we toured lab facilities to understand how the Army is validating and selecting technologies for network evaluations. We reviewed Army programmatic and budget documentation to understand cost projections for testing and procuring network equipment under the new approach, and we reviewed Army plans for resourcing this approach. 
To identify actions and opportunities to enhance the NIE process, we interviewed Army officials to identify other networking challenges the Army is addressing concurrent with implementation of the agile process. We reviewed test results from both the Army and Department of Defense. We reviewed Army documentation identifying cost avoidance opportunities. We reviewed briefing information regarding lessons learned from activities related to the NIE, such as the screening and lab testing of candidate systems and soldier training. We spoke with officials at both the Army and the Department of Defense who are knowledgeable about lessons learned from the testing and fielding of new network capabilities. We conducted this performance audit from September 2012 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, William R. Graveline, Assistant Director; William C. Allbritton; Marcus C. Ferguson; Kristine Hassinger; Sean Seales; Robert S. Swierczek; and Paul Williams made key contributions to this report.
In 2011, the Army began a major undertaking to modernize its tactical network to improve communication and provide needed information to soldiers on the battlefield. The Army has identified the network as its number one modernization priority, requiring approximately $3 billion per year indefinitely. NIEs provide semi-annual assessments of newly developed systems. Given the importance of the network, GAO was asked to examine elements of the process the Army is using to acquire network capabilities. This report examines (1) the results of the NIEs conducted to date and the extent to which the Army has procured and fielded network solutions, and (2) Army actions to enhance the NIE process. To conduct this work, GAO analyzed key documents, observed testing activities, and interviewed acquisition and testing officials. Since 2011, the Army has conducted five Network Integration Evaluations (NIE), which have provided extensive information and insights into current network capabilities and potential solutions to fill network capability gaps. According to senior Department of Defense (DOD) test officials, the NIE objective to test and evaluate network components together in a combined event is sound, as is the opportunity to reduce overall test and evaluation costs by combining test events. Further, the NIEs offer the opportunity for a more comprehensive evaluation of the broader network instead of piecemeal evaluation of individual network components. However, the Army is not taking full advantage of the potential knowledge that could be gained from the NIEs, and some resulting Army decisions are at odds with knowledge accumulated during the NIEs. For example, despite poor results in developmental testing, the Army moved several systems forward to operational testing during the NIEs, where they demonstrated similarly poor results. Yet the Army plans to buy and field several of these systems. 
Doing so increases the risk of poor performance in the field and the need to correct and modify deployed equipment. On the other hand, the Army has evaluated many emerging network capabilities--with generally favorable results--but has bought very few of them, in large part because it did not have a strategy to buy these promising technologies. Army officials have stated that the success of network modernization depends heavily on industry involvement but, with few purchases, it is unclear whether industry will remain interested. Finally, the Army has not yet developed metrics to determine how network performance has improved over time, as GAO recommended in an earlier report. The Army has several actions under way or planned to enhance the NIE process and has further opportunities to collaborate with the test community. The Army has identified issues in the NIE process and its network modernization strategy that were causing inefficiencies or less-than-optimal results and has begun implementing actions to mitigate some of those issues. For example, the Army has begun performing technology evaluations and integrating vendor systems in a lab environment to weed out immature systems before they get to the NIE. The Army has also developed a strategy and has an acquisition plan to address requirements, funding, and competition issues that will help enable it to buy emerging capabilities rapidly. However, the Army will need to validate the new strategy and plan and provide results to industry, which could help to manage industry expectations about how many of these capabilities it can buy and how quickly. DOD has started to identify and evaluate network metrics and to re-focus NIEs to gather additional data and insights. Taking these actions will ultimately allow the periodic review and evaluation of the actual effectiveness of network capabilities and the likely effectiveness of proposed investments. 
The test community has worked closely with the Army on the NIEs but has also voiced various concerns about the NIEs, including that they are schedule-driven events. Tension between the acquisition and test communities has been long-standing. Additional opportunities exist for Army leadership and the test community to work together to further improve NIE execution and results and to reduce tensions between the two communities. A good starting point for the Army would be to take a fresh look at the test community observations and recommendations from previous NIEs. To improve outcomes for the Army’s network modernization strategy, GAO recommends that the Secretary of Defense direct the Army to (1) require successful developmental testing before moving to operational testing at an NIE, (2) correct issues identified during testing at NIEs prior to buying and fielding systems, (3) provide results to industry on Army’s efforts to rapidly acquire emerging capabilities, and (4) pursue additional opportunities for collaboration with the test community on the NIEs. DOD agreed with the recommendations to varying degrees, but generally did not offer specific actions to address them. GAO believes all recommendations remain valid.
Human trafficking—the worldwide criminal exploitation of men, women, and children for others’ financial gain—is a violation of human rights. Victims are often lured or abducted and forced to work in involuntary servitude. Although the crime of human trafficking can take different forms in different regions and countries around the world, most human trafficking cases follow a similar pattern. Traffickers use acquaintances or false advertisements to recruit men, women, and children in or near their homes, and then transfer them to and exploit them in another city, region, or country. The U.S. government defines severe forms of trafficking in persons to include the recruitment, harboring, transportation, provision, or obtaining of a person for labor or services, through the use of force, fraud, or coercion for the purpose of subjection to involuntary servitude, peonage, debt bondage, or slavery. International organizations have also defined trafficking in persons and developed a list of indicators of trafficking for labor exploitation. Appendix II describes these efforts in more detail. Congress and others have highlighted the role that deceptive recruitment practices can play in contributing to trafficking in persons. Workers who pay for their jobs are at an increased risk for human trafficking and other labor abuses. State’s Inspector General has reported that such recruitment fees, which can amount to many months’ salary, are a possible indicator of coercive recruitment and may indicate an increased risk of debt bondage, as some workers borrow large sums of money to pay the recruiter. A 2011 ILO survey of workers in Kuwait and the United Arab Emirates found that the recruitment fees and interest on loans may limit workers’ ability to negotiate the terms of their work contracts. This debt burden can result in involuntary servitude through excessive work hours or virtually no pay for months to recover the advance payments of fees and interest. 
Since 2007, the Federal Acquisition Regulation (FAR) has required all U.S. government contracts to include a clause citing the U.S. government’s zero tolerance policy regarding TIP. This clause prohibits contractors from engaging in severe forms of trafficking, procuring commercial sex acts, or using forced labor during the period of performance of the contract. In addition, this clause establishes several contractor requirements to implement this policy, such as notifying the contracting officer of any information that alleges a contractor employee, subcontractor, or subcontractor employee has engaged in conduct that violates this policy and adding this clause in all subcontracts. In 2012, Congress and the President took further steps to reduce the risk of trafficking on U.S. government contracts. The TVPA, as amended, and an executive order both address acts related to TIP, such as denying foreign workers access to their identity documents and failing to pay for return travel for foreign workers. In September 2013, amendments to the FAR were proposed to implement the requirements of the 2013 amendments to the TVPA and the executive order related to strengthening protections against trafficking in persons. As of October 2014, these proposed amendments to the FAR are still under review. Agencies also have developed their own acquisition policies and guidance, augmenting the FAR, that aim to protect foreign workers on specific contracts. Many of these policies and much of this guidance include requirements related to recruitment and other labor practices, including housing, wages, and access to identity documents. DOD policy is intended to deter activities of a variety of actors, including contractor personnel, that would facilitate or support TIP. A region-specific DOD acquisition policy that addresses combating TIP has evolved in recent years and has applied to different places of performance at different times. 
Currently, this policy requires the insertion of a clause to combat TIP into certain service and construction contracts that require performance in Iraq or Afghanistan. In 2011 and 2012, State issued acquisition guidance, applicable to all domestic and overseas contracting activities, on how to monitor contracts for TIP compliance and to provide a clause and procedures to reduce the risk of abusive labor practices that contribute to the potential for TIP. Among other things, this guidance requires contracting officers to require offerors to include information related to the recruitment and housing of foreign workers in their proposals for certain contracts. In 2012, USAID issued guidance reminding contracting officials of their responsibilities to implement TIP requirements and requiring officials to, among other things, discuss issues such as access to certain documents and understanding local labor laws with contractors following contract award. Subcontracting is an acquisition practice in which the vendor with the direct responsibility to perform a contract, known as the prime contractor, enters into direct contracts with other vendors, known as subcontractors, to furnish supplies or services for the performance of the contract. This practice can help contractors to consider core competencies and supplier capabilities to achieve efficiencies from the marketplace. In some cases, prime contractors use subcontractors to supply labor on government contracts, and these subcontractors may use second-tier subcontractors or recruitment agencies to identify prospective employees. Our prior work has shown that government visibility into subcontracts is generally limited. Government agencies have a direct relationship only with the prime contractor, and generally “privity of contract” limits the government’s authority to direct subcontractors to perform tasks under the contract. 
As a result, agencies generally do not monitor subcontractors directly, as they expect the prime contractor to monitor its subcontractors. Further, the FAR notes the prime contractor’s responsibility in managing its subcontractors, and officials have underscored the limited role of the government in selecting and managing subcontracts. Contractors performing U.S. government contracts overseas operate under local conditions and in accordance with local labor practices. In Gulf countries, contractors employ large numbers of foreign workers, who make up a significant portion of the local labor force. These workers typically come from countries such as India, Bangladesh, and the Philippines for economic reasons. According to State and the ILO, several common and restrictive labor practices in Gulf countries stem from these countries’ sponsorship system, which limits workers’ freedom of movement. Appendix III provides further detail on the prevalence of foreign workers in Gulf countries, as well as efforts to regulate this migration of workers in home and destination countries. State’s 2014 Trafficking in Persons Report found that certain labor practices in the Middle East, including Kuwait, Qatar, and Bahrain, can render foreign workers susceptible to severe forms of trafficking in persons. In addition, U.S. government contractors in Iraq and Afghanistan often employ foreign workers for cost and security reasons. As of July 2014, DOD reported nearly 17,000 foreign workers on contracts in Afghanistan, approximately one-third of the department’s total contractor workforce in that country. Although DOD reports that it no longer has foreign workers in Iraq, it reported more than 40,000 foreign workers on DOD contracts in Iraq—nearly 60 percent of its total contractor workforce in the country—as of January 2011. State contractors currently employ foreign workers in Iraq for security and operations and maintenance services. 
GAO and others have reported that operating in insecure environments can hinder agencies’ ability to monitor contracts, including efforts to combat TIP, because of the general absence of security, among other factors. Table 2 shows the prevalence of migrants in Gulf countries, Afghanistan, and Iraq, as well as State’s Trafficking in Persons Report tier placement for 2014, which illustrates areas where the risk of TIP is high. Agency policy and guidance on combating trafficking in persons have attempted to address the payment of recruitment fees by foreign workers on certain U.S. government contracts. However, current policy and guidance do not specifically define the components or amount of permissible fees related to recruitment. Agency officials and contractors said that without an explicit definition of what constitutes a recruitment fee, they may not be able to effectively implement existing policy and guidance on this issue. Despite efforts to prohibit or restrict the payment of recruitment fees, we found that some foreign workers on U.S. government contracts have reported paying for their jobs. Prime contractor reliance on subcontractors for recruitment of foreign workers further limits visibility into recruitment fees. The FAR provides broad prohibitions against contractors engaging in trafficking but does not explicitly address the payment of recruitment fees. The FAR prohibits contractors from engaging in severe forms of trafficking, including recruitment of a person for labor or services through the use of force, fraud, or coercion for the purposes of subjection to debt bondage, but it does not address more specific issues related to how contractors recruit foreign workers, such as recruitment fees. Some agencies have developed policy and guidance that address certain recruitment issues more specifically. 
Although DOD’s department-wide guidance on combating TIP does not explicitly address recruitment, its current region-specific policy requires certain services and construction contracts in Afghanistan to include a clause requiring contractors to avoid using unlicensed recruitment firms or firms that charge illegal recruitment fees. However, this policy does not define “illegal” recruitment fees. State’s 2012 guidance required certain contracts to include a clause requiring contractors to submit, as part of their proposals, recruitment plans that must state that employees will not be charged any recruitment or similar fees and that contractors and subcontractors will use only bona fide licensed recruitment companies. USAID’s 2012 guidance on combating trafficking in persons reminds officials of the FAR requirements, but it provides no further guidance on the recruitment of foreign workers for work on USAID contracts. Table 3 illustrates how the FAR and agency policy and guidance address recruitment fees with varying levels of specificity. We found that some agency officials, both on contracts in our sample and on others, and contractors in our sample did not have a common understanding of what constitutes a permissible fee related to recruitment—in terms of components or amount—or whether contractors or subcontractors at any level were permitted to charge such fees to recruited employees. According to GAO’s standards for internal control, information should be recorded and communicated to management and others in a form that enables them to carry out their internal control and other responsibilities. Currently, agency contracting officials lack policy or guidance that specifies what components are considered to be recruitment fees and may not be able to determine which fees are permissible, hindering their ability to carry out their responsibilities. 
For example, neither the FAR nor agency policy or guidance specifies what components are considered to constitute recruitment fees, but these fees could include air tickets, lodging, passport and visa fees, or medical screening, among other expenses. One subcontractor who hires foreign workers in Dubai for work in Afghanistan said that the definition of recruitment fees is imprecise and varies widely within the contracting community. He added that he believed every foreign worker hired in Dubai for this contract had paid someone some type of fee for his or her job, but the fee could have included airfare from the home country to Dubai, housing and food in Dubai, or a commission for the recruiter, any of which may be legal. In addition, the Qatari Under Secretary of Labor noted that Qatari law prohibits recruitment fees, but he and State and DOD officials in Qatar acknowledged that most foreign workers are initially recruited in their home countries, where such fees may or may not be allowed. Without explicit definitions of what, if anything, constitutes permissible fees related to recruitment, contractors said that they could not ensure that they were in compliance with contractual requirements. DOD contracting officials in Kuwait said that a definition of recruitment fees would improve their ability to implement the government’s antitrafficking policy. The President and Congress have both directed that the FAR be amended to address several issues related to trafficking, including the payment of recruitment fees. The 2012 executive order directed amendments to the FAR that would expressly prohibit federal contractors from charging employees any recruitment fees, while amendments to the TVPA in 2013 allow the government to terminate a contract if contractors, subcontractors, labor brokers, or other agents charge unreasonable placement or recruitment fees. 
Public comments on the proposed FAR rule have noted that the FAR Council will have to reconcile these prohibitions, deciding whether to prohibit all recruitment fees or only unreasonable ones and defining what is considered unreasonable. The TVPA states that unreasonable placement or recruitment fees include fees equal to or greater than the employee’s monthly salary, or recruitment fees that violate the laws of the country from which an employee is recruited. The Department of Labor and foreign governments have also defined permissible fees paid by workers to their employers. A Department of Labor regulation relating to assurances that employers must provide in seeking to employ certain temporary foreign workers in the United States allows for reimbursements from workers for costs that are the responsibility and primarily for the benefit of the worker, such as government-required passport fees. The government of India permits recruiting agents to recover service charges from workers of up to the equivalent of 45 days’ wages, subject to a maximum of 20,000 rupees (currently about $325), according to India’s Ministry of Overseas Indian Affairs. The government of the Philippines permits recruiters to charge its hired workers a placement fee in an amount equivalent to 1 month’s salary, excluding documentation costs such as expenses for passports, birth certificates, and medical examinations, according to the Philippine Overseas Employment Administration. The International Organization for Migration, with others, has established the International Recruitment Integrity System (IRIS), which is a voluntary consortium of stakeholders including recruitment and employment agents. Members of IRIS are prohibited from charging any recruitment fees to job seekers. Senior acquisition officials expressed conflicting views regarding the feasibility of prohibiting recruitment fees. 
Senior officials in State’s Office of the Procurement Executive stated that they would prefer that all recruitment fees be prohibited, in line with the executive order, to eliminate any uncertainty or ambiguity among contracting officials. They said that they had consulted State’s Office to Monitor and Combat Trafficking in Persons, which took the same position as the Office of the Procurement Executive, given that such fees create vulnerability among workers and often are the precursor to debt bondage. These officials further stated that the term “reasonable recruitment fees” was difficult to define and apply in practice. In public comments on the proposed FAR rule, one nongovernmental organization noted that “the definition of reasonableness is amorphous and is unduly burdensome on private industry to enforce.” Other officials, including DOD contracting officials in Kuwait, stated that it may be reasonable for employees to pay a fee for a job in some cases, echoing the recent amendment to the TVPA. They noted that eliminating these fees would be nearly impossible and that recruiters would pass these fees on to workers in some other form if recruitment fees were explicitly banned. DOD Joint Staff officials added that eliminating recruitment fees would simply lead contractors to relabel them as travel or per diem fees covering air travel, housing, and food; thus, a precise definition of these fees that specifically addresses travel costs is needed. A subcontractor supplying foreign workers on the largest contract in our sample said that its recruitment agencies likely charge fees to recruits for services such as air tickets, housing, and food, which the subcontractor deemed reasonable. 
However, according to senior State, DOD, and contractor officials, regardless of whether recruitment fees are banned in their entirety or only when unreasonable, the ability of contracting officers and contractors to implement this restriction will be limited until recruitment fees, including what is considered permissible, are defined in regulation, guidance, or policy. On at least two contracts in our sample, including the one employing the largest number of foreign workers, contractors reported that workers have paid for their jobs. We found that, on the largest contract in our sample, employing nearly 10,000 foreign workers in Afghanistan, recruitment agencies have likely charged fees to some foreign workers. On this contract, the prime contractor uses several subcontractors to supply labor, including one that hires workers through more than 10 recruitment agencies in Dubai. We found that from September 2012 through April 2014, more than 1,900 subcontractor employees reported to the prime contractor that they had paid fees for their jobs, including to recruitment agencies with which the subcontractor had a recruitment agreement. For 2012 and 2013, recruitment agencies used by this subcontractor signed statements acknowledging that they would not charge any recruitment fees to candidates facilitated as part of their agreements with the subcontractor. In April 2014, the last month for which data were available, 82 workers reported having paid an average of approximately $3,000 to get their jobs. The fees that these workers reported paying averaged approximately 5 months’ salary and, in one case, amounted to more than 1 year’s salary. According to the subcontractor who employed all 82 of these workers, these fees were likely paid to an agent who assisted foreign workers with transportation and housing before the workers were hired for work on the U.S. government contract. 
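As a purely illustrative sketch (not an official compliance test, and all function and variable names here are hypothetical), the TVPA threshold and the country-specific caps described earlier can be applied to reported figures like those above:

```python
# Illustrative only: encodes the fee rules described in the report.
# - TVPA: placement/recruitment fees equal to or greater than one month's
#   salary are "unreasonable."
# - India: agents may recover up to 45 days' wages, capped at 20,000 rupees.
# - Philippines: placement fees up to 1 month's salary (documentation costs
#   such as passports are excluded and not modeled here).
# All names are hypothetical.

def tvpa_unreasonable(fee, monthly_salary):
    """True if a fee meets the TVPA's 'unreasonable' threshold."""
    return fee >= monthly_salary

def india_cap(monthly_salary_rupees, rupee_ceiling=20_000):
    """Maximum service charge under the Indian rule (assumes 30-day months)."""
    return min(45 * (monthly_salary_rupees / 30), rupee_ceiling)

def philippines_cap(monthly_salary):
    """Maximum placement fee under the Philippine rule."""
    return monthly_salary

# The April 2014 figures: fees averaging about $3,000, roughly 5 months'
# salary, imply a monthly salary near $600 -- well past the TVPA threshold.
monthly_salary = 3_000 / 5
print(tvpa_unreasonable(3_000, monthly_salary))  # True
```

Any real check would also need the TVPA’s second prong (fees that violate the laws of the country from which an employee is recruited), which this sketch does not model.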
Although the prime contractor provided DOD information about reported fees, neither DOD nor the prime contractor took further action because the allegations did not involve the prime contractor or its subcontractor. On another DOD services contract in Afghanistan, we found that the contractor modified its subcontract with a recruiter in January 2014 to clarify that the contractor would pay recruitment fees previously charged to foreign workers—$670 per worker. The contractor reported that this modification to the subcontract was a result of its interpretation of the FAR clause prohibiting TIP. According to the contractor, the contracting agency had not directed it to make this modification; it made this change on its own initiative to prevent potential TIP abuses in the performance of the contract. These practices may be long-standing and widespread in Gulf countries. In January 2011, the State Inspector General reported in an evaluation of efforts to combat TIP on contracts in four Gulf countries that a substantial portion of the workers they interviewed had obtained their jobs by paying a recruitment agency in their country of origin. Some of these workers reported paying more than 1 year’s salary in such fees. Nearly all prime contractors in our sample reported that they generally used subcontractors or recruitment agencies to recruit foreign workers, and some reported that their knowledge about the payment of recruitment fees is limited. These subcontractors generally recruited foreign workers either from their home countries or in host countries where they may have lived and worked for an extended period of time. In other cases, subcontractors recruited workers in a third country, neither their home country nor the host country, and then transported them to the contract location. Figure 1 illustrates a variety of potential paths a foreign worker may take to be recruited for work on a U.S. government contract overseas. 
For foreign workers recruited in their home countries, prime contractors in our sample reported that they often used subcontractors who relied on recruitment agencies to identify workers. Since prime contractors do not have a direct relationship with these recruitment agencies, their visibility into these agencies’ practices, including whether the agencies charged workers recruitment fees, was limited. For foreign workers recruited in host countries, prime contractors in our sample reported that they typically also used subcontractors to identify workers. When foreign workers are already living in the host country, prime contractors may not know whether these workers paid recruitment fees when they first came to the host country. For example, one subcontractor that employs more than 2,500 foreign workers in Afghanistan said that it hired foreign workers from the previous contractor and did not know whether these workers had paid recruitment fees previously. The following examples illustrate how contractors use subcontractors to recruit foreign workers and their limited knowledge about recruitment fees: On a DOD services contract in Kuwait, a local subcontractor recruited, employed, and housed foreign workers supporting the prime contract. According to the subcontractor, some employees were recruited from an existing pool of foreign workers living in Kuwait. The prime contractor reported that it did not monitor the subcontractor’s recruitment practices, including whether recruitment fees had been paid by foreign workers in this existing pool (see scenario 1, fig. 1). On a DOD services contract in Afghanistan, the prime contractor used a subcontractor to supply labor. This subcontractor recruited workers in Dubai using several recruitment agencies. In some instances, these agencies identified workers in countries such as India and Nepal and transported them to Dubai. 
As noted above, many workers on this contract reported having paid for their jobs, but the prime contractor did not investigate these reports because they did not involve the prime contractor or its subcontractors (see scenario 2, fig. 1). On a DOD construction contract in Qatar, the prime contractor used a subcontractor that maintained a pool of foreign workers in Qatar who were originally identified by recruitment agencies in source countries such as Sri Lanka, Nepal, India, and Jordan. The prime contractor reported that it had no way of knowing how these workers had been initially recruited, including whether they had paid any recruitment fees (see scenario 3, fig. 1). On a State services contract in Iraq, the prime contractor said that it did not use subcontractors. Instead, it transferred workers from another contract it was supporting in Djibouti and also employed several foreign workers from the contractor that had previously performed this work for DOD. Consequently, most workers had already been recruited before contract award. On a State security contract in Iraq, the prime contractor used subcontractors in Kenya and Uganda to recruit foreign workers. According to the prime contractor, subcontractors were paid a fixed fee for each worker they hired, and the prime contractor did not believe workers paid a separate fee for these services. The FAR and DOD, State, and USAID guidance outline requirements for monitoring contractor labor practices, and DOD and State had processes for monitoring these practices and efforts to combat TIP on some contracts in our sample. However, we found that DOD, State, and USAID did not specifically monitor these practices on other contracts, hindering their ability to detect potential TIP abuses and implement the U.S. government’s zero tolerance policy. 
For example, we found that agencies did not specifically monitor labor practices on some contracts, but rather focused on contractor-provided goods and services, such as building construction. The FAR and agency policy and guidance require the use of contract clauses that outline contractor responsibilities related to labor practices in areas such as wages and hours, housing, access to identity documents, and return travel, which have been linked to TIP abuses by the U.S. government and international organizations. All contractors in our sample reported to us that their practices reflect efforts to combat TIP. The FAR and agency policy and guidance outline requirements for DOD, State, and USAID to monitor contractor labor practices. For some contracts in our sample, DOD and State had specific processes to monitor efforts to combat TIP. On other contracts, however, neither DOD, nor State, nor USAID had such specific processes, and they focused their monitoring on contractor-provided goods and services. In addition, some agency contracting officials indicated that they were unaware of their monitoring responsibilities to combat TIP. Federal acquisition regulations and agency guidance provide instructions for agencies to monitor contractor labor practices. The FAR requires that agencies conduct contract quality assurance activities as necessary to determine that supplies or services conform to contract requirements, which would include requirements related to efforts to combat TIP. In addition, DOD guidance states that quality assurance surveillance plans should describe how the government will monitor a contractor’s performance regarding trafficking in persons. State’s guidance requires contracting officials to document a monitoring plan to combat TIP, obtain information on employer-furnished housing and periodically visit it to ensure its adequacy, and verify that the contractor does not hold employee passports or visas. 
Finally, USAID guidance requires contracting officials to monitor all awards to ensure compliance with TIP requirements. Specifically, the guidance states that officials should conduct appropriate site visits and employee interviews to verify that the contractor does not hold employee passports, among other things. DOD and State have developed specific processes for monitoring contractor labor practices, including efforts to combat TIP, for some contracts in our sample. Specifically, DOD developed a process for monitoring efforts to combat TIP for five of the seven DOD contracts included in our sample, all of which were awarded by the Army Contracting Command, and State had a TIP-specific process for two of the three State contracts in our sample. On the other four contracts in our sample, DOD, State, and USAID did not monitor specifically for TIP because of a focus on contractor-provided goods and services, as discussed in the next section. For DOD, the Defense Contract Management Agency (DCMA) administered four contracts in Afghanistan, Kuwait, and Qatar and used a checklist to help it conduct systematic audits of contractor compliance with certain requirements related to labor practices and efforts to combat TIP. (See app. V for a sample checklist used by DCMA in Afghanistan.) DCMA’s checklist included questions about foreign worker housing, employment contracts, and policies and procedures for reporting potential TIP abuses. DCMA also interviewed a sample of foreign workers to further validate contractor compliance with requirements related to combating TIP. For example, in Kuwait, DCMA inspectors asked workers about wages, hours, overtime, identity documents, and return travel. The inspectors asked workers how the contractor paid their wages, whether the contractor held their passports, and whether the contractor paid for their return travel. 
DCMA documented contractor noncompliance with contract requirements related to labor practices and efforts to combat TIP through corrective action requests. DCMA issued such requests related to contractor labor practices—including housing, wage, and TIP issues—on three of the four contracts it was responsible for administering in our sample. According to agency officials, none of these requests was issued in response to a serious or unacceptable contract violation. For example, on a facilities support contract in Afghanistan, the contractor was issued a corrective action request in January 2012 for not providing foreign workers an employment contract in their native language. This request was closed in December 2012 after the contractor provided DCMA a corrective action plan. DCMA officials stated that the agency is transitioning its contract administration responsibilities in Iraq and Afghanistan, including its process for monitoring efforts to combat TIP, to the military services. In 2009, DOD directed selected contract administration service tasks to be transferred from DCMA to the military services. Although DCMA continued to administer selected contracts until after the beginning of fiscal year 2014, DCMA officials said that the agency is currently developing a plan to transition contract administration responsibilities to the military services, including the Army’s contracts in Kuwait, Qatar, and Afghanistan. State monitored contract requirements related to labor practices and efforts to combat TIP for two contracts in our sample in Iraq. For a security contract, State used an inspection checklist to help it monitor contractor compliance with these requirements. This checklist included verification of wages and access to identity documents, as well as other labor practices. 
State officials responsible for monitoring this contract conducted monthly foreign worker housing inspections and verified that workers (1) willingly accepted their living and working conditions, (2) were paid in accordance with the terms of their employment contracts, (3) had access to their passports, (4) had access to their employment contracts and fully understood them, and (5) were free to end their employment contracts at any time, acknowledging that certain penalties may apply. On another services contract, State’s Contract Management Office—a regional office established in August 2013 to improve management and oversight of contract performance of major contracts in Iraq—used in-country interviews of foreign workers to monitor contractor labor practices, including access to identity documents and return travel. For 4 of the 11 contracts in our sample, agency officials stated that they did not specifically monitor contractor labor practices or efforts to combat TIP, as their monitoring processes were primarily focused on contractor-provided goods and services. First, on a construction contract in Qatar, DOD officials reported that they focused their monitoring on areas such as building design and quality of materials and that they did not specifically monitor for potential TIP abuses. As a result, according to an agency official, they have no ability to monitor the treatment of foreign workers once they step off the work site and thus might not be able to detect potential abuses. Second, on a DOD food services contract in Kuwait, agency officials said that they did not specifically monitor recruitment practices or have an audit program or checklist designed to combat TIP. Third, on a State security contract in Qatar, an official responsible for contract monitoring reported that monitoring efforts were focused on technical issues such as personnel qualifications and performance of duties, not on contractor labor practices or efforts to combat TIP. 
Finally, on a USAID construction contract in Afghanistan, an agency official stated that the agency monitored only for quality assurance and technical specifications and did not monitor specifically for TIP abuses. In addition, the contractor on this contract said it did not monitor subcontractors’ labor practices. Agency officials reported that they had not documented any concerns regarding recruitment or labor practices on any of these 4 contracts. However, without efforts to specifically monitor labor practices or efforts to combat TIP, agencies’ ability to detect such concerns is limited, and they cannot ensure that foreign workers are being treated in accordance with the U.S. government’s zero tolerance policy regarding trafficking in persons. Some DOD and State contracting officials were unaware of relevant acquisitions policy and guidance for combating TIP and did not clearly understand their monitoring responsibilities. For example, DOD officials responsible for monitoring a construction contract in Qatar expressed uncertainty about their authorities for combating TIP. As a result, these officials indicated that they conducted little monitoring of labor practices or other efforts to combat potential TIP abuses. Furthermore, a State official in Qatar responsible for contract monitoring stated that he had only recently become aware of State’s 2012 acquisition guidance on combating TIP and his monitoring activities did not specifically include efforts to combat TIP. In addition, a State official in Afghanistan with monitoring responsibilities for a services contract said that he was not aware of State’s current guidance on combating TIP in contracts. This official noted that some State officials responsible for contract monitoring may need refresher training because their initial training occurred prior to the issuance of this guidance. Finally, State’s Inspector General found, in a recent inspection of the U.S. 
embassy in Afghanistan, that embassy officials involved in contract administration were unaware of their responsibilities for monitoring grants and contracts for TIP violations. Agencies have developed training to help contracting officials become more aware of their monitoring responsibilities. DOD has developed new training for contracting officials that, according to a senior official, will help ensure that these officials are knowledgeable, qualified, and authorized to complete their TIP monitoring responsibilities. In October 2014, DOD made this training mandatory for all DOD personnel with job responsibilities that require daily contact with DOD contractors, foreign national personnel, or both. State officials noted that required training for officials responsible for contract monitoring includes a module on combating TIP, and State recently conducted a series of web-based seminars for acquisition personnel on how to monitor contracts for TIP abuses. The FAR and agency policy and guidance recognize that contractors should follow various labor practices. For example, for certain contracts, State requires the inclusion of a clause that prohibits contractors from denying employees access to their passports, which helps to ensure that workers have freedom of movement. In addition, both DOD and State generally require the inclusion of clauses in certain contracts that specify a minimum of 50 square feet of space per employee in contractor-provided housing. Contractors in our sample reported to us that their practices related to wages and hours, housing, access to identity documents, and the provision of return travel reflected efforts to combat TIP. Appendix IV provides more detailed information about requirements in the FAR and agency policy and guidance related to these practices. 
In general, the FAR and DOD’s FAR supplement require the inclusion of clauses in certain contracts that require contractors to comply with the labor laws of the host country, which, according to officials, govern practices related to wages, hours, leave, and overtime for the contracts in our sample. State and USAID guidance directs contracting officials to discuss the observance of local labor laws with contractors after awarding the contract. According to agency and contractor officials, local labor laws in Kuwait, Qatar, Bahrain, and Iraq governed wages, hours, leave, and overtime for foreign workers on U.S. government contracts in these countries. All eight of the contractors in our sample operating in Gulf countries or Iraq reported that their practices for foreign workers’ hours, wages, leave, and overtime were generally established in accordance with the labor laws of the country in which they were performing their U.S. contract. According to agency officials and contractors, foreign workers employed on six contracts in our sample in these countries worked 8 to 12 hours per day and 6 days per week. On the contracts for which overtime was permitted, contractors reported that workers often earned overtime wage rates of 125 to 150 percent of their base pay for any hours worked in excess of their regularly scheduled shifts. Furthermore, employers generally provided these workers with 1 day off per week and 3 to 4 weeks of leave per year. Both DOD and State generally require the inclusion of clauses in certain contracts that specify a minimum of 50 square feet of space per employee in contractor-provided housing. Contractors in our sample reported to us that housing practices for contracts in our sample generally fell into the following categories, which reflect efforts to combat TIP: Foreign workers were provided housing by the U.S. government on U.S. installations. 
All of the DOD and State contractors in our sample that were operating in Iraq or Afghanistan reported that their foreign workers lived on-site at U.S. installations in U.S. government-provided housing. Foreign workers lived in contractor-provided housing facilities that included at least 50 square feet of living space per person, according to the contractors. For example, on a DOD services contract in Qatar, the contractor reported that its subcontractors provided housing for foreign workers. See figure 2 for examples of subcontractor-provided housing on this contract. Foreign workers were provided a housing stipend by the contractor, which workers used to secure their own housing. For instance, on the same DOD services contract in Qatar, the contractor reported that foreign workers who did not live in subcontractor-provided housing were given a housing stipend by the subcontractor and found their own housing. According to a DOD official associated with this contract, foreign workers who had families living in Qatar often chose to take the stipend and find housing that would accommodate their families. Foreign workers received no housing support from the contractor and had to secure and pay for their housing themselves. For example, on a DOD services contract in Bahrain, most foreign workers secured their own housing at their own expense, according to the contractor. DOD and State require the inclusion of clauses in certain contracts that require contractors to provide workers with access to their identity documents. DOD’s contract clause generally allows contractors to hold employee passports only for the shortest time reasonable for administrative processing, and State’s contract clause prohibits contractors from destroying, concealing, confiscating, or otherwise denying employees’ access to identity documents or passports. 
Contractors reported that foreign workers on the 11 contracts in our sample generally had access to their identity documents, such as passports. In general, workers either maintained personal possession of their documents or were guaranteed access to documents that they voluntarily submitted to the contractor for safekeeping. For the majority of contracts in our sample, the contractor reported that workers maintained possession of their identity documents and therefore had access to them. A DOD services contractor in Kuwait, for instance, reported that all of its foreign workers kept possession of their identity documents, including passports, work permits, driver’s licenses, and insurance cards. For other contracts in our sample, contractors offered foreign workers the option to voluntarily submit their identity documents to the contractor for safekeeping and gave them access to their documents upon request. For example, on a DOD services contract in Bahrain, the contractor reported that it would hold foreign workers’ identity documents for safekeeping if requested, but required them to sign a waiver that stated they had voluntarily submitted the documents. Contractors included in our sample did not report any instances of withholding or restricting employees’ access to identity documents; however, agency officials at two State posts we visited said that some contractors performing smaller-scale contracts for the U.S. government restricted access to identity documents. According to State’s 2014 Trafficking in Persons Report, withholding employees’ passports is a common practice in these countries. State officials in Kuwait, for instance, said that a Kuwaiti company providing janitorial services for the embassy was found to have withheld employee passports against the employees’ will. These officials said that they removed the employees’ supervisor from the contract when they learned of these allegations. 
State officials we spoke to in Jordan reported similar allegations of passport withholding against the embassy’s janitorial services contractor. These officials reported that they took corrective action against the contractor that partially addressed this concern. DOD policy and State guidance require the inclusion of clauses in certain contracts obligating contractors to provide workers with return travel upon completion of their employment contracts. Certain DOD contracts in Afghanistan are to include a clause generally requiring contractors to return their employees to their point of origin or home country within 30 days after the end of the contract’s period of performance. State’s contract clause notes that contractors are generally responsible for repatriation of workers imported for contract performance. Contractors reported that they provide transportation to foreign workers to their home countries at the conclusion of their employment on all 11 contracts in our sample. For example, on a USAID construction contract in Afghanistan, the contractor provided return travel for all of the foreign workers on its contract by purchasing one-way plane tickets for the workers back to their home countries. In Bahrain, a DOD services contractor provided each foreign worker with a range of return travel options, including transportation to their home country, to a new employment location, or to any other desired location, or allowed them to stay in Bahrain and seek new employment arrangements. On 6 of the contracts in our sample, officials stated that foreign worker repatriation was explicitly addressed in the terms of the contract. For example, State officials reported that a security contract in Iraq required the contractor to provide return travel for its foreign workers and that this requirement was discussed with the contractor at the end of the contract. 
On 5 of these 6 contracts, officials reported that the contract also included provisions for repatriation expenses incurred by the contractor to be reimbursable by the U.S. government. For instance, State officials associated with a services contract in Iraq explained that State reimbursed the contractor for repatriation expenses because Iraqi law required all foreign workers to leave the country immediately following the conclusion of their employment, and State wanted to ensure that these workers returned to their home countries. Human trafficking victimizes hundreds of thousands of men, women, and children worldwide, including workers who move from their home countries to seek employment overseas and improve their own and their families’ well-being. The United States conducts diplomatic, defense, and development activities throughout the world, including in countries with restrictive labor practices and poor records related to trafficking in persons. The United States has an obligation to prevent entities working on its behalf from engaging in trafficking in persons, and when it uses contractors to support its activities in such countries, it bears an even greater responsibility to protect workers given the increased risk of abuse. Accordingly, it has taken several steps to eliminate trafficking in persons from government contracts and strengthened these efforts in 2007 with amendments to a contract clause required on all contracts that prohibits contractors from engaging in a variety of trafficking-related activities. Recognizing the need for further guidance on how to implement these regulations, agencies developed their own guidance and policy to augment worker protections and to clarify agency and contractor responsibilities. The President and Congress both signaled the need for further clarity in reducing the risk of TIP on government contracts by directing amendments to existing regulations in 2012. 
While improving the government’s ability to oversee its contractors’ labor practices is a step in the right direction, ambiguity regarding the components and amounts of permissible fees related to recruitment can limit the effectiveness of these efforts. Many contractors acknowledge that their employees may have been charged a fee for their jobs, a common practice in many countries, but they do not know whether these fees are acceptable, given the existing guidance and policy. Some fees may appear reasonable, but others could be exploitative or lead to debt bondage and other conditions that contribute to trafficking. Without a more precise definition of what constitutes a recruitment fee, agencies are hindered in determining which fees are allowed and, therefore, in developing effective practices in this area. Some agencies have established systematic processes for monitoring efforts to combat TIP on some contracts but do not monitor other contracts for TIP, focusing rather on contractor-provided goods and services. The lack of monitoring could inhibit agencies’ ability to detect potential abuses of foreign workers and reflects the limited utility of existing guidance on monitoring. Further, without consistent monitoring of contractors’ labor practices, the U.S. government is unable to send a clear signal to contractors, subcontractors, and foreign workers that it will follow through forcefully on its zero tolerance human trafficking policy. To help ensure that agencies can more fully implement their monitoring policy and guidance related to recruitment of foreign workers, the Secretaries of Defense and State and the Administrator of the U.S. Agency for International Development should each develop, as part of their agency policy and guidance, a more precise definition of recruitment fees, including permissible components and amounts. To help improve agencies’ abilities to detect potential TIP abuses and implement the U.S. 
government’s zero tolerance policy, the Secretaries of Defense and State and the Administrator of the U.S. Agency for International Development should each take actions to better ensure that contracting officials specifically include TIP in monitoring plans and processes, especially in areas where the risk of trafficking is high. Such actions could include developing a process for auditing efforts to combat TIP or ensuring that officials responsible for contract monitoring are aware of all relevant acquisition policy and guidance on combating TIP. We provided a draft of this product to DOD, State, and USAID for comment. These agencies provided written comments, which are reproduced in appendices VI through VIII. Regarding our recommendation for agencies to develop a more precise definition of recruitment fees as part of their policy and guidance, DOD concurred, while State and USAID neither agreed nor disagreed. DOD indicated that it would define recruitment fees during the next review of its policy related to combating TIP and incorporate this requirement in agency acquisition regulations as necessary. Such actions, if implemented effectively, should address the intent of our recommendation to DOD. State commented that it prohibits charging any recruitment fees to foreign workers, and both State and USAID noted that the proposed FAR rule on combating TIP would prohibit charging employees any recruitment fees. However, even if the final FAR rule prohibits all recruitment fees, it remains unclear whether the term “recruitment fees” includes components such as air tickets, lodging, passport and visa fees, and other fees that recruited individuals may be charged before being hired. Contracting officers and agency officials with monitoring responsibilities currently rely on policy and guidance that do not define recruitment fees, resulting in ambiguity over what constitutes such a fee. 
Without an explicit definition of what components constitute recruitment fees, prohibited fees may be renamed and passed on to foreign workers, increasing the risk of debt bondage and other conditions that contribute to trafficking. State also commented that, to ensure consistent treatment of recruitment fees across government, we should recommend that the Office of Management and Budget draft a FAR definition of recruitment fees. However, we believe that each agency should have the flexibility to determine, within its implementing regulations or policy, which components it considers to be included in the term “recruitment fees” to address each agency’s contracting practices. Thus we continue to believe our recommendation is valid and should be fully implemented by State and USAID. DOD and State concurred with our recommendation to better ensure that contracting officials specifically include TIP in monitoring plans and processes in areas where the risk of trafficking is high. DOD said that it would update its FAR supplement following the publication of the final FAR rule on combating TIP to improve the government’s oversight of contractor compliance with TIP requirements. State noted that it would add a requirement to the process that contracting officer’s representatives use to certify that they are familiar with requirements for TIP monitoring and include verification of TIP monitoring in reviews of contracting operations. USAID stated that all USAID staff would be required to take TIP training, which will be released by the end of 2014. Further, USAID said that it will develop training for contracting officer’s representatives on how to include combating TIP in monitoring plans, as well as training for contracting officers to verify that efforts to combat TIP have been appropriately included in the monitoring plans. 
These actions, if fully implemented, may address the intent of our recommendation, but we continue to believe that DOD, State, and USAID should ensure that contracting officials specifically include TIP in monitoring plans and processes, especially in areas where the risk of trafficking is high. In addition, USAID stated that it would be useful to obtain further guidance to help it and other agencies consistently determine what areas are considered high risk for TIP. One useful source of guidance is State’s annual Trafficking in Persons Report, which places countries into tiers based on the extent of their governments’ efforts to comply with the TVPA’s minimum standards for the elimination of human trafficking. In addition, in its written comments, State noted that it is developing a tool for procurement and contracting officers and federal contractors to assess the risk of TIP, expected to be completed in spring 2015. We also received technical comments from DOD and State, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Defense, the Secretary of the Department of State, the Administrator of the U.S. Agency for International Development, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. This report responds to a requirement included in the Violence Against Women Reauthorization Act of 2013 for GAO to report on the use of foreign workers both overseas and domestically, including those employed on U.S. government contracts. 
Our objectives were to examine (1) policies and guidance governing the recruitment of foreign workers and the fees these workers may pay to secure work on U.S. government contracts overseas and (2) agencies’ monitoring of contractor efforts to combat TIP. This report focuses on Department of Defense (DOD), Department of State (State), and U.S. Agency for International Development (USAID) contracts with performance in countries with large percentages of migrants as compared with local nationals, and in Afghanistan and Iraq, where U.S. government contractors employ significant numbers of foreign workers. We selected a nongeneralizable sample of 11 contracts based on the contracts’ place of performance, value, type of service provided, and the number of foreign workers employed. Specifically, we obtained a list of all contracts in the Federal Procurement Data System based on the following criteria: The contract was awarded by DOD, State, or USAID. The contract’s completion date was on or after October 1, 2013. The contract’s place of performance was in a country with a large portion of migrants, according to data from the United Nations, or was in Afghanistan or Iraq. The contract’s product or service code indicated that the contract was for services, construction, or security—areas that were likely to include low-wage, low-skilled labor, because these types of jobs may be associated with a higher risk of TIP. We narrowed this list to reflect a range of agencies, countries, and services. We compared this list with data provided by DOD, State, and USAID through the Synchronized Pre-deployment and Operational Tracker (SPOT) database at the end of fiscal year 2013 to identify contracts that employed large numbers of foreign workers. On the basis of this comparison, we selected 11 contracts that represent nearly one-third of all reported foreign workers employed on contracts awarded by these three agencies as reported in SPOT. 
Our previous work has described several data limitations related to SPOT, but we determined that these data were sufficiently reliable for the purposes of identifying and selecting contracts employing large numbers of foreign workers for in-depth review. Table 4 provides basic information on the selected contracts. To examine the recruitment of foreign workers and the fees they might pay to secure work on U.S. government contracts overseas, we conducted structured interviews with agency officials and contractors responsible for the contracts in our sample to identify, among other things, the relevant laws, regulations in the Federal Acquisition Regulation (FAR), agency acquisition policies, and agency acquisition guidance related to contractor recruitment practices. We reviewed these laws, regulations, policies, and guidance, as well as the Trafficking Victims Protection Act and Executive Order 13627—Strengthening Protections Against Trafficking in Persons in Federal Contracts—which include additional requirements related to the recruitment of workers on U.S. government contracts. We also obtained detailed information from contractors performing the contracts in our sample about their recruitment practices through structured interviews, and, in some cases, we interviewed subcontractors who recruited and employed foreign workers regarding their practices. In addition, we reviewed DOD and State Inspector General reports to identify instances where foreign workers have reported paying recruitment fees. We conducted site visits in Afghanistan, Kuwait, and Qatar to interview DOD, State, and USAID officials, including personnel responsible for contract monitoring, contractors, subcontractors, host government officials, and nongovernmental organizations about the recruitment of foreign workers in these countries. We chose these countries based on the range of U.S. 
government activities in these countries, the prevalence of foreign workers, and State’s assessment of the host government’s efforts to combat TIP as indicated by State’s annual Trafficking in Persons Report tier placement. We also spoke with State and USAID officials in Jordan during preliminary fieldwork to inform our research design and methodology. On the contract employing the largest number of foreign workers in our sample, we obtained data from the contractor detailing cases of workers reporting that they had paid for their job. These data, collected from September 2012 through April 2014, included 2,534 reports of workers having paid for their jobs, as well as to whom, when, and where these reported payments were made. We analyzed these data to determine the number of unique individuals who had reported paying for their jobs. We then compared the data on who received these payments with a list of recruitment agencies provided by the subcontractor to determine if workers reported paying fees to recruitment agencies with which the subcontractor had agreements. We also obtained data from the contractor listing the monthly salaries in April 2014 of workers who had reported, in April 2014, having paid for their job at some point in the past. We then compared these salaries with the amounts the workers reported having paid to determine the range and average number of months of salary required to earn the reported fee. We analyzed these data to calculate the mean, median, and mode of the reported fees for April 2014. We assessed the reliability of these data by interviewing knowledgeable officials, including the contractor and subcontractor, and analyzing the data for outliers and duplicate records. 
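The fee statistics and months-of-salary comparison described above can be sketched as follows. The figures are hypothetical stand-ins, since the underlying contractor data are not reproduced in this report.

```python
from statistics import mean, median, mode

# Hypothetical records of (reported fee, monthly salary), both in U.S. dollars;
# the actual contractor and subcontractor data are not reproduced here.
reports = [(3000, 500), (2500, 500), (4000, 800), (2500, 625), (1500, 500)]

fees = [fee for fee, _ in reports]
months_to_earn_fee = [fee / salary for fee, salary in reports]

# Mean, median, and mode of the reported fees.
print(f"mean fee:   ${mean(fees):,.0f}")    # $2,700
print(f"median fee: ${median(fees):,.0f}")  # $2,500
print(f"mode fee:   ${mode(fees):,.0f}")    # $2,500

# Range and average number of months of salary required to earn the fee.
print(f"months of salary to cover the fee: min {min(months_to_earn_fee):.1f}, "
      f"max {max(months_to_earn_fee):.1f}, average {mean(months_to_earn_fee):.1f}")
```

With real data, duplicate reports from the same individual would first be removed, as the methodology above notes.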
We found these data sufficiently reliable to show that workers reported having paid for their jobs and that these payments were made to several recruitment agencies that supplied workers for this contract, and to calculate the number of months of reported salary required to pay the reported fee. To assess agencies’ monitoring of contractor labor practices affecting foreign workers, we obtained information through our structured interviews regarding housing, wages and hours, access to identity documents, and return travel for foreign workers on contracts in our sample. We selected these practices because they were mentioned explicitly in an amendment to the Trafficking Victims Protection Act contained in the National Defense Authorization Act for Fiscal Year 2013, in Executive Order 13627—Strengthening Protections Against Trafficking in Persons in Federal Contracts—or in both, and were included in a list of potential indicators of trafficking in persons (TIP) by the International Labour Organization (ILO). We analyzed information from DOD and State officials regarding their monitoring processes for these practices, including monitoring checklists and audit procedures for combating TIP provided by the Defense Contract Management Agency. We conducted site visits to contractor-provided housing for foreign workers on 2 contracts in our sample during our fieldwork in Kuwait and Qatar. In these countries and in Afghanistan, we interviewed DOD and State officials; contractors; and, in some cases, subcontractors about labor practices related to foreign workers on the contracts in our sample. We also met with DOD and State officials responsible for monitoring contracts to discuss their efforts to monitor contracts in our sample for potential TIP abuses. We reviewed relevant laws, regulations in the FAR, agency acquisition policies, and agency acquisition guidance related to contractor labor practices and agencies’ responsibilities for monitoring these practices. 
We also reviewed training requirements for acquisition personnel related to monitoring for TIP abuses and discussed existing and planned training with DOD and State officials. In addition, we reviewed relevant studies and reports on TIP and foreign workers, including reports by DOD’s and State’s Inspectors General, the United Nations, the International Labour Organization, and nongovernmental organizations. We reviewed the methodologies used to conduct these studies and, for those that we used to corroborate our findings, we determined that they were sufficiently reliable for that purpose. We conducted this performance audit from June 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In the United Nations’ Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children, Supplementing the United Nations Convention Against Transnational Organized Crime, trafficking in persons is defined as the recruitment, transportation, transfer, harboring, or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability or of the giving or receiving of payments or benefits to achieve the consent of a person having control over another person, for the purpose of exploitation. In addition, the ILO has developed a list of indicators of trafficking for labor exploitation. This list includes indicators of deceptive or coercive recruitment, recruitment by abuse of vulnerability, exploitation, and coercion or abuse of vulnerability at the destination. 
For example, these indicators include deception about travel and recruitment conditions, confiscation of documents, debt bondage, excessive working days or hours, and no respect for labor laws or signed contracts. Migrants, such as foreign workers, from many countries seek employment in the Gulf region. In 2013, the top five source countries of international migrants to Gulf countries were India, Bangladesh, Pakistan, Egypt, and the Philippines (see table 5). Growing labor forces in source countries provide an increasing supply of low-cost workers for employers in the Gulf and other host countries where, according to the International Labour Organization (ILO), demand for foreign labor is high. Economic conditions and disparities in per capita income between source and host countries encourage foreign workers to leave their countries to seek employment. In 2012, average per capita income in the six Gulf countries was nearly 25 times higher than average income per capita in the top five source countries, and some differences between individual countries were even more dramatic, according to the World Bank. For example, in 2012, annual per capita income in Qatar was more than $58,000, nearly 100 times higher than in Bangladesh, where per capita income was almost $600. Foreign workers in Gulf countries send billions of dollars in remittances to their home countries annually. For example, in 2012 the World Bank estimated that migrant workers from the top five source countries sent home almost $60 billion from the Gulf countries, including nearly $33 billion to India, nearly $10 billion to Egypt, and nearly $7 billion to Pakistan. Source countries regulate the recruitment of their nationals for overseas employment in a variety of ways. 
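The income disparities cited above can be checked with simple arithmetic; the figures below are the approximate 2012 World Bank values quoted in the text.

```python
# Approximate 2012 per capita incomes in U.S. dollars, as cited in the text.
qatar_income = 58_000
bangladesh_income = 600

# Qatar's per capita income relative to Bangladesh's -- the text's
# "nearly 100 times higher" comparison.
ratio = qatar_income / bangladesh_income
print(f"Qatar vs. Bangladesh per capita income: about {ratio:.0f} times higher")
```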
According to relevant regulatory agencies in their countries, some source countries, such as India and the Philippines, have a licensing process for recruitment agencies and require potential overseas employers to use only licensed recruiters. According to these agencies, these countries also permit recruiters to charge prospective migrant workers a fee in specified circumstances but limit the amount of this fee. For example, according to the Indian Ministry of Overseas Indian Affairs, India’s Emigration Act and Rules detail requirements for the registration of recruiters, prohibit the use of subagents, prescribe the emigration clearance process, and permit registered recruiters to charge migrant workers fees up to 20,000 rupees, currently about $325, for their services. Similarly, according to the Philippines Overseas Employment Administration, the government of the Philippines facilitates the emigration of Filipino workers employed abroad and provides standards and oversight through licensing, required contract provisions, and limits on placement fees. Philippine embassies in host countries also provide services for Filipino citizens in those countries, such as assistance in establishing bank accounts and wiring money home, as well as information regarding worker rights in the host country, according to the Filipino Labor Attaché in Qatar. According to the Bangladeshi Ministry of Expatriates’ Welfare and Overseas Employment, the ministry was established in 2001 to ensure the overall welfare of migrant workers and has established a process for licensing recruitment agents. Other countries, such as Egypt, do not regulate overseas recruitment, lacking a licensing process for recruiters and regulations on the amount of fees that these recruiters may charge, according to the Egyptian Ministry of Manpower and Emigration. The ILO has reported that in Gulf countries, several common and restrictive labor practices stem from the Kafala sponsorship system of foreign workers. 
According to the ILO, the Kafala system is a sponsorship system whereby a foreign worker is employed in a Gulf country by a specific employer that controls the worker’s residency, immigration, and employment status. Under this system, employers meet their demand for labor either by direct recruitment or through the use of recruitment agents who find foreign workers. The system generally ties workers’ residency and immigration status to the employer, which can prevent workers from changing employers and limit their freedom of movement, according to the ILO. The ILO further stated that sponsors can also prohibit workers from leaving the country and have the right to terminate workers’ employment contracts and have residency permits canceled. According to State’s Qatar 2013 Human Rights Report, international media and human rights organizations alleged numerous abuses against foreign workers, including a sponsorship system that gave employers an inordinate level of control over foreign workers. In addition to the labor practices associated with the Kafala system, the withholding of workers’ passports is an additional restrictive labor practice common in many Gulf countries. For example, State reported that in Qatar, despite laws prohibiting the withholding of foreign workers’ identity documents, employers withheld the passports of a large portion of their foreign workers in that country. According to State’s most recent Trafficking in Persons Report, the withholding of foreign workers’ passports contributes to the potential for trafficking in persons (TIP). Furthermore, the ILO has reported that employers in Gulf countries may refuse to release workers or may charge high fees for release, withhold wages as security to prevent workers from running away, and withhold personal travel documents. 
The ILO also found that workers in these countries may be subjected to forced overtime, limited freedom of movement, degrading living and working conditions, and physical violence and threats. Overall, the ILO estimated that there were 600,000 victims of forced labor in the Middle East at any given point in time between 2002 and 2011. The Federal Acquisition Regulation (FAR) requires all solicitations and contracts to include Clause 52.222-50, Combating Trafficking in Persons (TIP). This clause includes a prohibition on contractors engaging in severe forms of TIP, which include the recruitment, harboring, transportation, provision, or obtaining of a person for labor or services, through the use of force, fraud, or coercion for the purpose of subjection to involuntary servitude, peonage, debt bondage, or slavery. However, the clause contains no further provisions related to recruitment or other labor practices discussed in this report. The Department of Defense (DOD), Department of State (State), and the U.S. Agency for International Development (USAID) have developed policy and guidance that provides more specificity on these practices, as outlined in table 6. 1. State said that we should direct our recommendation to develop a more precise definition of recruitment fees to the Office of Management and Budget. We believe that each agency should have the flexibility to determine, within its implementing regulations, which items it considers to be included in the term “recruitment fees” to address each agency’s contracting practices. 2. State noted that the new regulations that will amend the Federal Acquisition Regulation will prohibit charging employees recruitment fees. As our report notes, even if the final FAR rule prohibits all recruitment fees, it remains unclear whether the term “recruitment fees” includes items such as air tickets, lodging, passport and visa fees, and other fees that recruited individuals may be charged before being hired. 
Contracting officers and agency officials with monitoring responsibilities currently rely on policy and guidance regarding recruitment fees that are ambiguous. Without an explicit definition of the components of recruitment fees, prohibited fees may be renamed and passed on to foreign workers, increasing the risk of debt bondage and other conditions that contribute to trafficking. 1. USAID stated that the final draft Federal Acquisition Regulation rule contained language that would prohibit charging contractor employees any recruitment fees and therefore USAID did not see any need to establish any policy or guidance that provides a "more precise definition of recruitment fees, including permissible components and amounts." As our report notes, even if the final FAR rule prohibits all recruitment fees, it remains unclear whether the term “recruitment fees” includes items such as air tickets, lodging, passport and visa fees, and other fees that recruited individuals may be charged before being hired. Contracting officers and agency officials with monitoring responsibilities currently rely on policy and guidance regarding recruitment fees that are ambiguous. Without an explicit definition of the components of recruitment fees, prohibited fees may be renamed and passed on to foreign workers, increasing the risk of debt bondage and other conditions that contribute to trafficking. 2. USAID said that it would be useful to obtain further guidance on the recommendation to take actions to better ensure that contracting officials specifically include TIP in monitoring plans and processes in areas where the "risk of trafficking is high." One useful source of guidance is State’s annual Trafficking in Persons Report, which places countries into tiers based on the extent of their governments’ efforts to comply with the TVPA’s minimum standards for the elimination of human trafficking. 
In addition, in its written comments, State noted that it is developing a tool for procurement and contracting officers and federal contractors to assess the risk of TIP, expected to be completed in the spring of 2015. In addition to the individual named above, Leslie Holen, Assistant Director; J. Robert Ball; Gergana Danailova-Trainor; Brian Egger; Justine Lazaro; Jillian Schofield; and Gwyneth Woolwine made key contributions to this report. Lynn Cothern, Etana Finkler, Grace Lui, Walter Vance, Shana Wallace, and Alyssa Weir provided technical assistance. International Labor Grants: DOL’s Use of Financial and Performance Monitoring Tools Needs to Be Strengthened. GAO-14-832. Washington, D.C.: September 24, 2014. International Labor Grants: Labor Should Improve Management of Key Award Documentation. GAO-14-493. Washington, D.C.: May 15, 2014. Human Rights: U.S. Government’s Efforts to Address Alleged Abuse of Household Workers by Foreign Diplomats with Immunity Could Be Strengthened. GAO-08-892. Washington, D.C.: July 29, 2008. Human Trafficking: Monitoring and Evaluation of International Projects Are Limited, but Experts Suggest Improvements. GAO-07-1034. Washington, D.C.: July 26, 2007. Human Trafficking: Better Data, Strategy, and Reporting Needed to Enhance U.S. Antitrafficking Efforts Abroad. GAO-06-825. Washington, D.C.: July 18, 2006.
Since the 1990s, there have been allegations of abuse of foreign workers on U.S. government contracts overseas, including allegations of TIP. In 2002, the United States adopted a zero tolerance policy on TIP regarding U.S. government employees and contractors abroad and began requiring the inclusion of this policy in all contracts in 2007. Such policy is important because the government relies on contractors that employ foreign workers in countries where, according to State, they may be vulnerable to abuse. GAO was mandated to report on the use of foreign workers. This report examines (1) policies and guidance governing the recruitment of foreign workers and the fees these workers may pay to secure work on U.S. government contracts overseas and (2) agencies' monitoring of contractor efforts to combat TIP. GAO reviewed a nongeneralizable sample of 11 contracts awarded by DOD, State, and USAID, representing nearly one-third of all reported foreign workers on contracts awarded by these agencies at the end of fiscal year 2013. GAO interviewed agency officials and contractors about labor practices and oversight activities on these contracts. Current policies and guidance governing the payment of recruitment fees by foreign workers on certain U.S. government contracts do not provide clear instructions to agencies or contractors regarding the components or amounts of permissible fees related to recruitment. GAO found that some foreign workers—individuals who are not citizens of the United States or the host country—had reported paying for their jobs. Such recruitment fees can lead to various abuses related to trafficking in persons (TIP), such as debt bondage. For example, on the contract employing the largest number of foreign workers in its sample, GAO found that more than 1,900 foreign workers reported paying fees for their jobs, including to recruitment agencies used by a subcontractor. 
According to the subcontractor, these fees were likely paid to a recruiter who assisted foreign workers with transportation to and housing in Dubai before they were hired to work on the contract in Afghanistan (see figure). Some Department of Defense (DOD) contracting officials GAO interviewed said that such fees may be reasonable. DOD, the Department of State (State), and the U.S. Agency for International Development (USAID) have developed policy and guidance for certain contracts addressing recruitment fees in different ways. However, these agencies do not specify what components or amounts of recruitment fees are considered permissible, limiting the ability of contracting officers and contractors to implement agency policy and guidance. GAO found that agency monitoring, called for by federal acquisition regulations and agency guidance, did not always include processes to specifically monitor contractor efforts to combat TIP. For 7 of the 11 contracts in GAO's sample, DOD and State had specific monitoring processes to combat TIP. On the 4 remaining contracts, agencies did not specifically monitor for TIP, but rather focused on contractor-provided goods and services, such as building construction. In addition, some DOD and State contracting officials said they were unaware of relevant acquisition policy and guidance for combating TIP and did not clearly understand their monitoring responsibilities. Both DOD and State have developed additional training to help make contracting officials more aware of their monitoring responsibilities to combat TIP. Without specific efforts to monitor for TIP, agencies' ability to implement the zero tolerance policy and detect concerns about TIP is limited. GAO recommends that agencies (1) develop a more precise definition of recruitment fees and (2) ensure that contract monitoring specifically includes TIP. 
DOD concurred with the first recommendation, while State and USAID noted that forthcoming regulations would prohibit all recruitment fees. Agencies concurred with the second recommendation.
The public faces a high risk that critical services provided by the government and the private sector could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, flights grounded, power lost, and national defense affected. Moreover, America’s infrastructures are a complex array of public and private enterprises with many interdependencies at all levels. These many interdependencies among governments and within key economic sectors could cause a single failure to have adverse repercussions. Key economic sectors that could be seriously affected if their systems are not Year 2000 compliant include information and telecommunications; banking and finance; health, safety, and emergency services; transportation; power and water; and manufacturing and small business. The information and telecommunications sector is especially important. In testimony in June, we reported that the Year 2000 readiness of the telecommunications sector is one of the most crucial concerns to our nation because telecommunications are critical to the operations of nearly every public-sector and private-sector organization. For example, the information and telecommunications sector (1) enables the electronic transfer of funds, the distribution of electrical power, and the control of gas and oil pipeline systems, (2) is essential to the service economy, manufacturing, and efficient delivery of raw materials and finished goods, and (3) is basic to responsive emergency services. Reliable telecommunications services are made possible by a complex web of highly interconnected networks supported by national and local carriers and service providers, equipment manufacturers and suppliers, and customers. In addition to the risks associated with the nation’s key economic sectors, one of the largest, and largely unknown, risks relates to the global nature of the problem. 
With the advent of electronic communication and international commerce, the United States and the rest of the world have become critically dependent on computers. However, there are indications of Year 2000 readiness problems in the international arena. For example, a June 1998 informal World Bank survey of foreign readiness found that only 18 of 127 countries (14 percent) had a national Year 2000 program, 28 countries (22 percent) reported working on the problem, and 16 countries (13 percent) reported only awareness of the problem. No conclusive data were received from the remaining 65 countries surveyed (51 percent). In addition, a survey of 15,000 companies in 87 countries by the Gartner Group found that the United States, Canada, the Netherlands, Belgium, Australia, and Sweden were the Year 2000 leaders, while nations including Germany, India, Japan, and Russia were 12 months or more behind the United States. The Gartner Group’s survey also found that 23 percent of all companies (80 percent of which were small companies) had not started a Year 2000 effort. Moreover, according to the Gartner Group, the “insurance, investment services and banking are industries furthest ahead. Healthcare, education, semiconductor, chemical processing, agriculture, food processing, medical and law practices, construction and government agencies are furthest behind. Telecom, power, gas and water, software, shipbuilding and transportation are laggards barely ahead of furthest-behind efforts.” The following are examples of some of the major disruptions the public and private sectors could experience if the Year 2000 problem is not corrected. Unless the Federal Aviation Administration (FAA) takes much more decisive action, there could be grounded or delayed flights, degraded safety, customer inconvenience, and increased airline costs. Aircraft and other military equipment could be grounded because the computer systems used to schedule maintenance and track supplies may not work. 
Further, the Department of Defense (DOD) could incur shortages of vital items needed to sustain military operations and readiness. Medical devices and scientific laboratory equipment may experience problems beginning January 1, 2000, if the computer systems, software applications, or embedded chips used in these devices contain two-digit fields for year representation. According to the Basle Committee on Banking Supervision—an international committee of banking supervisory authorities—failure to address the Year 2000 issue would cause banking institutions to experience operational problems or even bankruptcy. Recognizing the seriousness of the Year 2000 problem, on February 4, 1998, the President signed an executive order that established the President’s Council on Year 2000 Conversion led by an Assistant to the President and composed of one representative from each of the executive departments and from other federal agencies as may be determined by the Chair. The Chair of the Council was tasked with the following Year 2000 roles: (1) overseeing the activities of agencies, (2) acting as chief spokesperson in national and international forums, (3) providing policy coordination of executive branch activities with state, local, and tribal governments, and (4) promoting appropriate federal roles with respect to private-sector activities. Addressing the Year 2000 problem in time will be a tremendous challenge for the federal government. Many of the federal government’s computer systems were originally designed and developed 20 to 25 years ago, are poorly documented, and use a wide variety of computer languages, many of which are obsolete. Some applications include thousands, tens of thousands, or even millions of lines of code, each of which must be examined for date-format problems. The federal government also depends on the telecommunications infrastructure to deliver a wide range of services. 
For example, the route of an electronic Medicare payment may traverse several networks—those operated by the Department of Health and Human Services, the Department of the Treasury’s computer systems and networks, and the Federal Reserve’s Fedwire electronic funds transfer system. In addition, the year 2000 could cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years and contain embedded computer systems to control, monitor, or assist in operations. For example, building security systems, elevators, and air conditioning and heating equipment could malfunction or cease to operate. Agencies cannot afford to neglect any of these issues. If they do, the impact of Year 2000 failures could be widespread, costly, and potentially disruptive to vital government operations worldwide. Nevertheless, overall, the government’s 24 major departments and agencies are making slow progress in fixing their systems. In May 1997, the Office of Management and Budget (OMB) reported that about 21 percent of the mission-critical systems (1,598 of 7,649) for these departments and agencies were Year 2000 compliant. A year later, in May 1998, these departments and agencies reported that 2,914 of the 7,336 mission-critical systems in their current inventories, or about 40 percent, were compliant. However, unless agency progress improves dramatically, a substantial number of mission-critical systems will not be compliant in time. In addition to slow governmentwide progress in fixing systems, our reviews of federal agency Year 2000 programs have found uneven progress. Some agencies are significantly behind schedule and are at high risk that they will not fix their systems in time. Other agencies have made progress, although risks continue and a great deal of work remains. The following are examples of the results of some of our recent reviews. 
Last month, we testified about FAA’s progress in implementing a series of recommendations we had made earlier this year to assist FAA in completing overdue awareness and assessment activities. These recommendations included assessing how the major FAA components and the aviation industry would be affected if Year 2000 problems were not corrected in time and completing inventories of all information systems, including data interfaces. Officials at both FAA and the Department of Transportation agreed with these recommendations, and the agency has made progress in implementing them. In our August testimony, we reported that FAA had made progress in managing its Year 2000 problem and had completed critical steps in defining which systems needed to be corrected and how to accomplish this. However, with less than 17 months to go, FAA must still correct, test, and implement many of its mission-critical systems. It is doubtful that FAA can adequately do all of this in the time remaining. Accordingly, FAA must determine how to ensure continuity of critical operations in the likely event of some systems’ failures. In October 1997, we reported that while the Social Security Administration (SSA) had made significant progress in assessing and renovating mission-critical mainframe software, certain areas of risk in its Year 2000 program remained. Accordingly, we made several recommendations to address these risk areas, which included the Year 2000 compliance of the systems used by the 54 state Disability Determination Services that help administer the disability programs. SSA agreed with these recommendations and, in July 1998, we reported that actions to implement these recommendations had either been taken or were underway. Further, we found that SSA has maintained its place as a federal leader in addressing Year 2000 issues and has made significant progress in achieving systems compliance. However, essential tasks remain. 
For example, many of the states’ Disability Determination Service systems still had to be renovated, tested, and deemed Year 2000 compliant. Our work has shown that much likewise remains to be done in DOD and the military services. For example, our recent report on the Navy found that while positive actions have been taken, remediation progress had been slow and the Navy was behind schedule in completing the early phases of its Year 2000 program. Further, the Navy had not been effectively overseeing and managing its Year 2000 efforts and lacked complete and reliable information on its systems and on the status and cost of its remediation activities. We have recommended improvements to DOD’s and the military services’ Year 2000 programs with which they have concurred. In addition to these examples, our reviews have shown that many agencies had not adequately acted to establish priorities, solidify data exchange agreements, or develop contingency plans. Likewise, more attention needs to be devoted to (1) ensuring that the government has a complete and accurate picture of Year 2000 progress, (2) setting governmentwide priorities, (3) ensuring that the government’s critical core business processes are adequately tested, (4) recruiting and retaining information technology personnel with the appropriate skills for Year 2000-related work, and (5) assessing the nation’s Year 2000 risks, including those posed by key economic sectors. I would like to highlight some of these vulnerabilities, and our recommendations made in April 1998 for addressing them. First, governmentwide priorities in fixing systems have not yet been established. These governmentwide priorities need to be based on such criteria as the potential for adverse health and safety effects, adverse financial effects on American citizens, detrimental effects on national security, and adverse economic consequences. 
Further, while individual agencies have been identifying mission-critical systems, this has not always been done on the basis of a determination of the agency’s most critical operations. If priorities are not clearly set, the government may well end up wasting limited time and resources in fixing systems that have little bearing on the most vital government operations. Other entities have recognized the need to set priorities. For example, Canada has established 48 national priorities covering areas such as national defense, food production, safety, and income security. Second, business continuity and contingency planning across the government has been inadequate. In their May 1998 quarterly reports to OMB, only four agencies reported that they had drafted contingency plans for their core business processes. Without such plans, when unpredicted failures occur, agencies will not have well-defined responses and may not have enough time to develop and test alternatives. Federal agencies depend on data provided by their business partners as well as services provided by the public infrastructure (e.g., power, water, transportation, and voice and data telecommunications). One weak link anywhere in the chain of critical dependencies can cause major disruptions to business operations. Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. Our recently issued guidance aims to help agencies ensure such continuity of operations through contingency planning. Third, OMB’s assessment of the current status of federal Year 2000 progress is predominantly based on agency reports that have not been consistently reviewed or verified. Without independent reviews, OMB and the President’s Council on Year 2000 Conversion have little assurance that they are receiving accurate information. 
In fact, we have found cases in which agencies’ systems compliance status as reported to OMB has been inaccurate. For example, the DOD Inspector General estimated that almost three-quarters of DOD’s mission-critical systems reported as compliant in November 1997 had not been certified as compliant by DOD components. In May 1998, the Department of Agriculture (USDA) reported 15 systems as compliant, even though these were replacement systems that were still under development or were planned for development. (The department removed these systems from compliant status in its August 1998 quarterly report.) Fourth, end-to-end testing responsibilities have not yet been defined. To ensure that their mission-critical systems can reliably exchange data with other systems and that they are protected from errors that can be introduced by external systems, agencies must perform end-to-end testing for their critical core business processes. The purpose of end-to-end testing is to verify that a defined set of interrelated systems, which collectively support an organizational core business area or function, will work as intended in an operational environment. In the case of the year 2000, many systems in the end-to-end chain will have been modified or replaced. As a result, the scope and complexity of testing—and its importance—is dramatically increased, as is the difficulty of isolating, identifying, and correcting problems. Consequently, agencies must work early and continually with their data exchange partners to plan and execute effective end-to-end tests. So far, lead agencies have not been designated to take responsibility for ensuring that end-to-end testing of processes and supporting systems is performed across boundaries, and that independent verification and validation of such testing is ensured. We have set forth a structured approach to testing in our recently released exposure draft. 
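The boundary conditions such testing must exercise can be illustrated with a small sketch. This example is not drawn from the GAO testing guide; the function and dates are illustrative only. Two classic Year 2000 hazards were the century rollover itself and the leap day February 29, 2000: the year 2000 is a leap year because it is divisible by 400, so systems that implemented only the "century years are not leap years" exception computed dates incorrectly.

```python
# Illustrative sketch only -- not from the GAO testing guide.
# Year 2000 test plans had to exercise boundary dates such as
# 1999-12-31 -> 2000-01-01 and the leap day 2000-02-29.
from datetime import date, timedelta

def is_leap(year: int) -> bool:
    """Full Gregorian leap-year rule, including both century exceptions."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 2000 IS a leap year (divisible by 400); code that applied only the
# "divisible by 100 is not leap" exception failed on this boundary.
assert is_leap(2000)
assert not is_leap(1900)
assert not is_leap(1999)

# A rollover check: the day after 1999-12-31 must be 2000-01-01.
assert date(1999, 12, 31) + timedelta(days=1) == date(2000, 1, 1)
```

In an end-to-end test, checks like these would be applied not to a single function but across each system in the chain that stores, transmits, or computes on dates.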
In our April 1998 report on governmentwide Year 2000 progress, we made a number of recommendations to the Chair of the President’s Council on Year 2000 Conversion aimed at addressing these problems. These included establishing governmentwide priorities and ensuring that agencies set priorities accordingly; developing a comprehensive picture of the nation’s Year 2000 readiness; requiring agencies to develop contingency plans for all critical core business processes; requiring agencies to develop an independent verification strategy to involve inspectors general or other independent organizations in reviewing Year 2000 progress; and designating lead agencies responsible for ensuring that end-to-end operational testing of processes and supporting systems is performed. We are encouraged by actions the Council is taking in response to some of our recommendations. For example, OMB and the Chief Information Officers Council adopted our guide providing information on business continuity and contingency planning issues common to most large enterprises as a model for federal agencies. However, as we recently testified before this Subcommittee, some actions have not been fully addressed—principally with respect to setting national priorities and end-to-end testing. State and local governments also face a major risk of Year 2000-induced failures to the many vital services—such as benefits payments, transportation, and public safety—that they provide. For example, food stamps and other types of payments may not be made or could be made for incorrect amounts; date-dependent signal timing patterns could be incorrectly implemented at highway intersections, and safety severely compromised, if traffic signal systems run by state and local governments do not process four-digit years correctly; and criminal records (i.e., prisoner release or parole eligibility determinations) may be adversely affected by the Year 2000 problem. Recent surveys of state Year 2000 efforts have indicated that much remains to be completed. 
For example, a July 1998 survey of state Year 2000 readiness conducted by the National Association of State Information Resource Executives, Inc., found that only about one-third of the states reported that 50 percent or more of their critical systems had been completely assessed, remediated, and tested. In a June 1998 survey conducted by USDA’s Food and Nutrition Service, only 3 and 14 states, respectively, reported that the software, hardware, and telecommunications that support the Food Stamp Program, and the Women, Infants, and Children program, were Year 2000 compliant. Although all but one of the states reported that they would be Year 2000 compliant by January 1, 2000, many of the states reported that their systems are not due to be compliant until after March 1999 (the federal government’s Year 2000 implementation goal). Indeed, 4 and 5 states, respectively, reported that the software, hardware, and telecommunications supporting the Food Stamp Program, and the Women, Infants, and Children program would not be Year 2000 compliant until the last quarter of calendar year 1999, which puts them at high risk of failure due to the need for extensive testing. State audit organizations have identified other significant Year 2000 concerns. 
For example, (1) Illinois’ Office of the Auditor General reported that significant future efforts were needed to ensure that the year 2000 would not adversely affect state government operations, (2) Vermont’s Office of Auditor of Accounts reported that the state faces the risk that critical portions of its Year 2000 compliance efforts could fail, (3) Texas’ Office of the State Auditor reported that many state entities had not finished their embedded systems inventories and, therefore, it is not likely that they will complete their embedded systems repairs before the year 2000, and (4) Florida’s Auditor General has issued several reports detailing the need for additional Year 2000 planning at various district school boards and community colleges. State audit offices have also made recommendations, including the need for increased oversight, Year 2000 project plans, contingency plans, and personnel recruitment and retention strategies. In the course of these field hearings, states and municipalities have testified about Year 2000 practices that could be adopted by others. For example: New York established a “top 40” list of priority systems having a direct impact on public health, safety, and welfare, such as systems that support child welfare, state aid to schools, criminal history, inmate population management, and tax processing. According to New York, “the Top 40 systems must be compliant, no matter what.” The city of Lubbock, Texas, is planning a Year 2000 “drill” this month. To prepare for the drill, Lubbock is developing scenarios of possible Year 2000-induced failures, as well as more normal problems (such as inclement weather) that could occur at the change of century. Louisiana established a $5 million Year 2000 funding pool to assist agencies experiencing emergency circumstances in mission-critical applications and that are unable to correct the problems with existing resources. 
Regarding Ohio, our review of the state’s Year 2000 Internet World Wide Web site found that it had developed a detailed Year 2000 certification checklist. The checklist included items such as the first potential failure date, date fields, interfaces, and testing. However, according to Ohio’s Year 2000 Administrator, implementation of this checklist is voluntary. According to Ohio’s Year 2000 Internet World Wide Web site, while many of the state’s agencies estimated that they would complete their Year 2000 remediation in late 1998 or early 1999, several critical agencies are not due to be compliant until mid-1999. For example, Ohio’s (1) Department of Education reported it was 35 percent complete as of June 1998 and planned to be complete in July 1999, (2) Department of Health reported that it was 70 percent complete as of August 1998 and planned to be complete in July 1999, and (3) Department of Transportation reported that it was 70 percent complete as of April 1998 and planned to be complete in June 1999. To fully address the Year 2000 risks that states and the federal government face, data exchanges must also be confronted—a monumental issue. As computers play an ever-increasing role in our society, exchanging data electronically has become a common method of transferring information among federal, state, and local governments. For example, SSA exchanges data files with the states to determine the eligibility of disabled persons for disability benefits. In another example, the National Highway Traffic Safety Administration provides states with information needed for driver registrations. As computer systems are converted to process Year 2000 dates, the associated data exchanges must also be made Year 2000 compliant. If the data exchanges are not Year 2000 compliant, data will not be exchanged or invalid data could cause the receiving computer systems to malfunction or produce inaccurate computations. 
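One widely used remediation technique for exchanges that could not be fully converted was date windowing: a "bridge" on the receiving side expands a two-digit year to four digits by interpreting it relative to an agreed pivot value. The sketch below is hypothetical; the pivot value and the YYMMDD record format are assumptions for illustration, not details from this testimony.

```python
# Hypothetical sketch of a date-windowing "bridge" for a data exchange
# that still carries two-digit years. The pivot (50 here) is an
# assumption: two-digit years below it are read as 20xx, others as 19xx.
PIVOT = 50

def widen_year(yy: int) -> int:
    """Expand a two-digit year to four digits using a fixed window."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

def bridge_record(yymmdd: str) -> str:
    """Convert an incoming YYMMDD date field to YYYYMMDD."""
    yy = int(yymmdd[:2])
    return f"{widen_year(yy)}{yymmdd[2:]}"

assert widen_year(99) == 1999            # "99" -> 1999
assert widen_year(0) == 2000             # "00" -> 2000, not 1900
assert bridge_record("000101") == "20000101"
```

A bridge like this only works if both exchange partners agree on the pivot, which is why reaching agreement on date formats with every partner was a central step in remediating exchanges.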
Our recent report on actions that have been taken to address Year 2000 issues for electronic data exchanges revealed that federal agencies and the states use thousands of such exchanges to communicate with each other and other entities. For example, federal agencies reported that their mission-critical systems have almost 500,000 data exchanges with other federal agencies, states, local governments, and the private sector. To successfully remediate their data exchanges, federal agencies and the states must (1) assess information systems to identify data exchanges that are not Year 2000 compliant, (2) contact exchange partners and reach agreement on the date format to be used in the exchange, (3) determine if data bridges and filters are needed and, if so, reach agreement on their development, (4) develop and test such bridges and filters, (5) test and implement new exchange formats, and (6) develop contingency plans and procedures for data exchanges. At the time of our review, much work remained to ensure that federal and state data exchanges will be Year 2000 compliant. About half of the federal agencies reported during the first quarter of 1998 that they had not yet finished assessing their data exchanges. Moreover, almost half of the federal agencies reported that they had reached agreements on 10 percent or fewer of their exchanges, few federal agencies reported having installed bridges or filters, and only 38 percent of the agencies reported that they had developed contingency plans for data exchanges. Further, the status of the data exchange efforts of 15 of the 39 state-level organizations that responded to our survey was not discernable because they were not able to provide us with information on their total number of exchanges and the number assessed. Of the 24 state-level organizations that provided actual or estimated data, they reported, on average, that 47 percent of the exchanges had not been assessed. 
In addition, similar to the federal agencies, state-level organizations reported having made limited progress in reaching agreements with exchange partners, installing bridges and filters, and developing contingency plans. However, we could draw only limited conclusions on the status of the states’ actions because data were provided on only a small portion of states’ data exchanges. To strengthen efforts to address data exchanges, we made several recommendations to OMB. In response, OMB agreed that it needed to increase its efforts in this area. For example, OMB noted that federal agencies had provided the General Services Administration with a list of their data exchanges with the states. In addition, as a result of an agreement reached at an April 1998 federal/state data exchange meeting, the states were supposed to verify the accuracy of these initial lists by June 1, 1998. OMB also noted that the General Services Administration is planning to collect and post information on its Internet World Wide Web site on the progress of federal agencies and states in implementing Year 2000 compliant data exchanges. In summary, federal, state, and local efforts must increase substantially to ensure that major service disruptions do not occur. Greater leadership and partnerships are essential if government programs are to meet the needs of the public at the turn of the century. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have at this time. FAA Systems: Serious Challenges Remain in Resolving Year 2000 and Computer Security Problems (GAO/T-AIMD-98-251, August 6, 1998). Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, August 1998). Internal Revenue Service: Impact of the IRS Restructuring and Reform Act on Year 2000 Efforts (GAO/GGD-98-158R, August 4, 1998). 
Social Security Administration: Subcommittee Questions Concerning Information Technology Challenges Facing the Commissioner (GAO/AIMD-98-235R, July 10, 1998). Year 2000 Computing Crisis: Actions Needed on Electronic Data Exchanges (GAO/AIMD-98-124, July 1, 1998). Defense Computers: Year 2000 Computer Problems Put Navy Operations at Risk (GAO/AIMD-98-150, June 30, 1998). Year 2000 Computing Crisis: A Testing Guide (GAO/AIMD-10.1.21, Exposure Draft, June 1998). Year 2000 Computing Crisis: Testing and Other Challenges Confronting Federal Agencies (GAO/T-AIMD-98-218, June 22, 1998). Year 2000 Computing Crisis: Telecommunications Readiness Critical, Yet Overall Status Largely Unknown (GAO/T-AIMD-98-212, June 16, 1998). GAO Views on Year 2000 Testing Metrics (GAO/AIMD-98-217R, June 16, 1998). IRS’ Year 2000 Efforts: Business Continuity Planning Needed for Potential Year 2000 System Failures (GAO/GGD-98-138, June 15, 1998). Year 2000 Computing Crisis: Actions Must Be Taken Now to Address Slow Pace of Federal Progress (GAO/T-AIMD-98-205, June 10, 1998). Defense Computers: Army Needs to Greatly Strengthen Its Year 2000 Program (GAO/AIMD-98-53, May 29, 1998). Year 2000 Computing Crisis: USDA Faces Tremendous Challenges in Ensuring That Vital Public Services Are Not Disrupted (GAO/T-AIMD-98-167, May 14, 1998). Securities Pricing: Actions Needed for Conversion to Decimals (GAO/T-GGD-98-121, May 8, 1998). Year 2000 Computing Crisis: Continuing Risks of Disruption to Social Security, Medicare, and Treasury Programs (GAO/T-AIMD-98-161, May 7, 1998). IRS’ Year 2000 Efforts: Status and Risks (GAO/T-GGD-98-123, May 7, 1998). Air Traffic Control: FAA Plans to Replace Its Host Computer System Because Future Availability Cannot Be Assured (GAO/AIMD-98-138R, May 1, 1998). Year 2000 Computing Crisis: Potential for Widespread Disruption Calls for Strong Leadership and Partnerships (GAO/AIMD-98-85, April 30, 1998). 
Defense Computers: Year 2000 Computer Problems Threaten DOD Operations (GAO/AIMD-98-72, April 30, 1998). Department of the Interior: Year 2000 Computing Crisis Presents Risk of Disruption to Key Operations (GAO/T-AIMD-98-149, April 22, 1998). Tax Administration: IRS’ Fiscal Year 1999 Budget Request and Fiscal Year 1998 Filing Season (GAO/T-GGD/AIMD-98-114, March 31, 1998). Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998). Year 2000 Computing Crisis: Federal Regulatory Efforts to Ensure Financial Institution Systems Are Year 2000 Compliant (GAO/T-AIMD-98-116, March 24, 1998). Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998). Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (AIMD-98-108R, March 18, 1998). SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). 
Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time Is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). 
Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
GAO discussed the year 2000 computer system risks facing the nation, focusing on: (1) GAO's major concerns with the federal government's progress in correcting its systems; (2) state and local government year 2000 issues; and (3) critical year 2000 data exchange issues. GAO noted that: (1) the public faces a high risk that critical services provided by the government and the private sector could be severely disrupted by the year 2000 computing crisis; (2) the year 2000 could cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years and contain embedded computer systems to control, monitor, or assist in operations; (3) overall, the government's 24 major departments and agencies are making slow progress in fixing their systems; (4) in May 1997, the Office of Management and Budget (OMB) reported that about 21 percent of the mission-critical systems for these departments and agencies were year 2000 compliant; (5) in May 1998, these departments reported that 40 percent of the mission-critical systems were year 2000 compliant; (6) unless progress improves dramatically, a substantial number of mission-critical systems will not be compliant in time; (7) in addition to slow governmentwide progress in fixing systems, GAO's reviews of federal agency year 2000 programs have found uneven progress; (8) some agencies are significantly behind schedule and are at high risk that they will not fix their systems in time; (9) other agencies have made progress, although risks continue and a great deal of work remains; (10) governmentwide priorities in fixing systems have not yet been established; (11) these governmentwide priorities need to be based on such criteria as the potential for adverse health and safety effects, adverse financial effects on American citizens, detrimental effects on national security, and adverse economic consequences; (12) business continuity and contingency planning across the government has 
been inadequate; (13) in their May 1998 quarterly reports to OMB, only four agencies reported that they had drafted contingency plans for their core business processes; (14) OMB's assessment of the status of federal year 2000 progress is predominantly based on agency reports that have not been consistently reviewed or verified; (15) GAO found cases in which agencies' systems' compliance status as reported to OMB had been inaccurate; (16) end-to-end testing responsibilities have not yet been defined; (17) state and local governments also face a major risk of year 2000-induced failures to the many vital services that they provide; (18) recent surveys of state year 2000 efforts have indicated that much remains to be completed; and (19) at the time of GAO's review, much work remained to ensure that federal and state data exchanges will be year 2000 compliant.
Bacteria exist almost everywhere—in water, soil, plants, animals, and humans. Bacteria can transfer from person to person, among animals and people, from animals to animals, and through water and the food chain. Most bacteria do little or no harm, and some are even useful to humans. However, others are capable of causing disease. Moreover, the same bacteria can have different effects on different parts of the host body. For example, S. aureus on the skin can be harmless, but when they enter the bloodstream through a wound, they can cause disease. An antibacterial is anything that can kill or inhibit the growth of bacteria, such as high heat, radiation, or a chemical. Antibacterial chemicals can be grouped into three broad categories: antibacterial drugs, antiseptics, and disinfectants. Antibacterial drugs are used in relatively low concentrations in or upon the bodies of organisms to prevent or treat specific bacterial diseases without harming the organism. They are also used in agriculture to enhance the growth of food animals. Unlike antibacterial drugs, antiseptics and disinfectants are usually nonspecific with respect to their targets—they kill or inhibit a variety of microbes. Antiseptics are used topically in or on living tissue, and disinfectants are used on objects or in water. (For more information on resistant bacteria, see app. II; for more on antibacterial use, see app. III.) Antibacterial resistance describes a feature of some bacteria that enables them to avoid the effects of antibacterial agents. Bacteria may possess characteristics that allow them to survive a sudden change in climate, the effects of ultraviolet light from the sun, or the presence of an antibacterial chemical in their environment. Some bacteria are naturally resistant. Other bacteria acquire resistance to antibacterials to which they once were susceptible. The development of resistance to an antibacterial is complex. 
Susceptible bacteria can become resistant by acquiring resistance genes from other bacteria or through mutations in their own genetic material (DNA). Once acquired, the resistance characteristic is passed on to future generations and sometimes to other bacterial species. Antibacterials have been shown to promote antibacterial resistance in at least three ways: through (1) encouraging the exchange of resistance genes between bacteria, (2) favoring the survival of the resistant bacteria in a mixed population of resistant and susceptible bacteria, and (3) making people and animals more vulnerable to resistant infection. Although the contribution of antibacterials in promoting resistance has most often been documented for antibacterial drugs, there are also reports of disinfectant use contributing to resistance and concerns about the potential for antiseptics to promote resistance. For example, in the case of disinfectants, researchers have found that chlorinated river water contains more bacteria that are resistant to streptomycin than does nonchlorinated river water. Also, it has been shown that some kinds of Escherichia coli (E. coli) resist triclosan—an antiseptic used in a variety of products, including soaps and toothpaste. This raises the possibility that antiseptic use could contribute to the emergence of resistant bacteria. While antibacterials are a major factor in the development of resistance, many other factors are also involved—including the nature of the specific bacteria and antibacterial involved, the way the antibacterial is used, characteristics of the host, and environmental factors. Therefore, the use of antibacterials does not always lead to resistance. from Enterococcus faecalis to Listeria monocytogenes in the Digestive Tracts of Gnotobiotic Mice,” Antimicrobial Agents and Chemotherapy, Vol. 35 (1991), pp. 185-87; (2) V. L. 
Yu and others, “Patient Factors Contributing to the Emergence of Gentamicin-Resistant Serratia marcescens,” The American Journal of Medicine, Vol. 66 (1979), pp. 468-72; and (3) R. P. Mouton and others, “Correlations Between Consumption of Antibiotics and Methicillin Resistance in Coagulase Negative Staphylococci,” Journal of Antimicrobial Chemotherapy, Vol. 26 (1990), pp. 573-83. Although we found many sources of information about the public health burden in the United States attributable to resistant bacteria, each source provides data on only part of the burden. Specifically, we found information about resistant diseases that result in hospitalization or are acquired in the hospital and information about two specific diseases—TB and gonorrhea. Moreover, no systematic information is available about deaths from diseases caused by resistant bacteria or about the costs of treating resistant disease. Consequently, the overall extent of disease, death, and treatment costs resulting from resistant bacteria is unknown. The primary source of information on cases of disease caused by resistant bacteria is the National Hospital Discharge Survey (NHDS)—conducted annually by CDC’s National Center for Health Statistics (NCHS). It estimates drug-resistant infections among hospitalized patients, including both patients with a resistant infection that caused them to be hospitalized and patients who acquired a resistant infection while in the hospital for another reason. According to this survey, in 1997, hospitals discharged 43,000 patients who had been diagnosed with and treated for infections from drug-resistant bacteria. (See table 1.) These numbers, however, should be interpreted cautiously. The survey’s diagnostic codes for designating infections with drug-resistant bacteria are, in most cases, not required for reimbursement, and they went into effect only in October 1993—though the survey has been conducted since 1965. 
Therefore, estimating the number of cases of infections with drug-resistant bacteria based on these codes likely results in an underestimate. In addition, increases in the number of discharged patients who had been treated for infections from drug-resistant bacteria may reflect an increase in the use of the new codes and not an actual increase in the incidence of resistant infections. Data on five predominant bacterial infections acquired in hospitals from CDC’s Hospital Infections Program further suggest that the estimates derived from NHDS may be too low. Since the discharge survey is not limited to specific infections and includes diseases acquired outside the hospital, it would be expected that estimates derived from the survey would be greater. However, estimates from the Hospital Infections Program indicate that the number of resistant infections acquired in hospitals is many times greater. (See table 2.) These estimates should also be interpreted cautiously. CDC estimated the number of cases for each type of resistant bacteria by extrapolating from data on the 276 hospitals participating in CDC’s National Nosocomial Infections Surveillance (NNIS) system to all hospitals in the United States. NNIS hospitals, however, are not representative of all hospitals; they are disproportionately large, urban, and affiliated with medical schools, and therefore likely to have more severely ill patients. Moreover, unlike NHDS, which surveys discharge codes that denote actual infections, the NNIS hospitals test bacterial samples in laboratories and thus may be detecting resistant bacteria that did not necessarily result in a patient treated for infection. Consequently, these CDC extrapolations probably overestimate the number of cases of these types of resistant bacterial disease. Another source of information on cases of disease caused by resistant bacteria is data developed through surveillance of infectious diseases. 
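The extrapolation caveat above can be made concrete with a small sketch. This is not CDC's actual method; it is just an illustration of rate-based scaling, and every number in it (infection counts, patient-days) is hypothetical.

```python
# Illustrative only: scaling a rate observed in a sample of hospitals up to
# the nation, as the CDC extrapolation described above does. All numbers are
# hypothetical; they are not NNIS or NHDS figures.

def extrapolate(sample_cases, sample_patient_days, national_patient_days):
    """Apply the sample's infection rate (cases per patient-day) nationally."""
    rate = sample_cases / sample_patient_days
    return rate * national_patient_days

# Hypothetical: 1,200 resistant infections over 4,000,000 patient-days in
# surveillance hospitals, scaled to 200,000,000 national patient-days,
# yields roughly 60,000 estimated cases.
national_estimate = extrapolate(1_200, 4_000_000, 200_000_000)

# If surveillance hospitals are larger and treat sicker patients, their rate
# exceeds the true national rate, so the estimate is biased upward -- the
# overestimation problem the text notes for the NNIS-based figures.
```

The sketch shows why the representativeness of the sample, not just its size, determines whether such an extrapolation over- or understates the national burden.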
However, nationwide data on such diseases are currently limited to TB and gonorrhea. CDC’s Division of Tuberculosis Elimination collects reports of all verified TB cases from states. TB is an infectious disease, most commonly of the lungs, caused by Mycobacterium tuberculosis. In response to increased incidence of TB in the late 1980s and early 1990s, CDC, in conjunction with state and local health departments, expanded national surveillance to include tests for resistance for all confirmed cases reported in 1993 and later. In 1997, the most recent year for which data have been published, tests were performed on 88.5 percent of confirmed TB cases reported in the United States. Of these, 12.6 percent were resistant to at least one antituberculosis drug. Although the number of cases of TB has declined, the proportion of cases that are resistant has remained relatively stable (see fig. 1). Through its Division of Sexually Transmitted Disease Prevention, CDC also conducts nationwide surveillance of gonorrhea, which is caused by the bacterium Neisseria gonorrhoeae. CDC supplements nationwide surveillance of gonorrhea infections with the Gonococcal Isolate Surveillance Project (GISP), a network consisting of clinics in 27 cities. In 1997, 33.4 percent of the gonococcal samples collected by GISP were resistant to penicillin, tetracycline, or both. Figure 2 shows that the proportion of gonorrhea resistant to these drugs has remained relatively stable since 1991. Nationwide data on other diseases that can be caused by resistant bacteria are not yet available, but efforts are under way to monitor invasive diseases caused by Streptococcus pneumoniae (S. pneumoniae), including meningitis and bacteremia. This bacterium was once routinely treatable with penicillin; however, since the mid-1980s, penicillin resistance has emerged, and some infections are susceptible only to vancomycin. In 1995, resistant S. 
pneumoniae was designated as a nationally reportable disease, and by 1998, 37 states were conducting public health surveillance on this bacterium. We found no efforts yet under way to collect systematic information on bacterial resistance in other diseases that have exhibited resistance to the antibacterial drugs usually used to treat them. Many common diseases caused by bacteria that have exhibited resistance—such as otitis media, gastric ulcers, cystitis, and strep throat—are typically acquired outside the hospital. In addition, they typically do not result in hospitalization, are often treated without laboratory identification of the underlying cause, and are not notifiable. Thus, they are not reflected in existing data sources. The number of deaths caused by resistant bacteria cannot be determined because the standard source of data on deaths—vital statistics compiled from death certificates—does not distinguish resistant infections from susceptible ones. A number of studies provide some information about deaths, but they are generally small studies of outbreaks in a single hospital or community. These studies suggest that infections from resistant bacteria are more likely to be fatal than those from nonresistant bacteria. One recent study on deaths in a larger population over a relatively longer period of time—all hospitalized patients in 13 New York City metropolitan area counties in 1995—found that patients with infections from methicillin-resistant Staphylococcus aureus (MRSA) were more than 2.5 times more likely to die than patients with infections from methicillin-sensitive Staphylococcus aureus (MSSA). (See table 3.) Because the number of cases of resistant disease is not known and the average treatment cost of cases is not available, we are unable to estimate the overall cost of treating drug-resistant bacterial disease. 
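The mortality comparison above (MRSA patients more than 2.5 times more likely to die than MSSA patients) is a relative risk. A minimal sketch of that calculation follows, with hypothetical counts chosen only to reproduce a 2.5-fold ratio; they are not the New York study's data.

```python
# Relative risk: the death rate in one group divided by the rate in a
# comparison group. All counts below are hypothetical.

def relative_risk(deaths_exposed, n_exposed, deaths_comparison, n_comparison):
    """Risk of death among exposed patients relative to the comparison group."""
    return (deaths_exposed / n_exposed) / (deaths_comparison / n_comparison)

# Hypothetical: 210 deaths among 1,000 MRSA patients vs. 84 among 1,000 MSSA.
rr = relative_risk(210, 1_000, 84, 1_000)
print(round(rr, 2))  # 2.5
```

Note that a relative risk says nothing about absolute numbers of deaths, which is why, as the text explains, the overall mortality burden cannot be derived from such studies alone.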
Although information about the cost of treating infections caused by resistant bacteria is limited, it suggests that resistant infections are generally more costly to treat than those caused by susceptible bacteria. For example, in the study of the impact of S. aureus infections in metropolitan New York City hospitals, direct medical costs—consisting of hospital charges, professional fees during hospitalization, and medical services after discharge—were 8 percent higher for a patient with MRSA than for a patient with MSSA. The higher cost of treating MRSA infections reflects the higher cost of vancomycin use, longer hospital stay, and patient isolation procedures. Similarly, a study of the cost of treating TB, based on a survey of five programs—in Alabama; Illinois; New Jersey; Texas; and Los Angeles, California—showed that outpatient therapy costs for multidrug-resistant TB were more than 3 times as great as for susceptible TB. (See table 4.) Appendix IV describes other studies of the cost of treating resistant disease. Existing data on resistant bacteria, which can cause infections, and antibacterial use, which can promote the development of resistance, provide clues for understanding how the future U.S. public health burden could develop. Because resistant bacteria from anywhere in the world could result in an infection in the United States, the development of resistance globally must also be considered. The data available suggest that antibacterial resistance is increasing worldwide and that antibacterial agents are used extensively. Consequently, the U.S. public health burden could increase. Without routine testing and systematic data collection globally, the prevalence of resistant bacteria worldwide cannot be determined. Data from laboratories that monitor for resistant bacteria, however, show that resistance in human and animal bacteria is increasing in four ways. Bacteria known to be susceptible are becoming resistant. 
Some bacteria that were once susceptible to certain antibacterials are now resistant to them. For example, Yersinia pestis, which causes plague, was universally susceptible to streptomycin, chloramphenicol, and tetracycline. Extensive testing of samples of specific kinds of Yersinia pestis collected between 1926 and 1995 in Madagascar had not detected any multidrug resistance. In 1995, however, a multidrug-resistant sample was isolated from a 16-year-old boy in Madagascar. The proportion of resistant bacteria is increasing in some populations of bacteria. Although existing surveillance systems predominantly monitor the development of resistance in bacteria from sick people in specific countries, and while different geographical areas may exhibit different antibacterial resistance patterns, data overall indicate that a greater proportion of samples being tested are positive for resistance. For example, according to data from CDC, S. pneumoniae is becoming increasingly resistant in the United States—that is, an increasing percentage of S. pneumoniae samples that are tested in CDC laboratories are resistant to penicillin. (See fig. 3.) Studies also show that resistance is increasing in other countries. For example, a DOD-funded study on diarrhea-causing bacteria isolated from indigenous persons in Thailand over 15 years shows that ciprofloxacin resistance among Campylobacter samples increased from 0 percent before 1991 to 84 percent in 1995. In Iceland, where penicillin-resistant S. pneumoniae was first detected in 1988, the frequency of penicillin-resistant samples rose from 2.3 percent in 1989 to 17 percent in 1992. In the Netherlands, the prevalence of metronidazole-resistant Helicobacter pylori in several Dutch hospitals increased from 7 percent in 1993 to 32 percent in 1996. In addition to increases in resistance in bacteria that affect people, resistance among bacteria in animals has also been increasing. 
In Finland, two surveys—carried out in 1988 and 1995—studied the prevalence of inflamed udders in cows and the antibacterial susceptibility of the bacteria that caused them. The investigators found that the proportion of certain types of S. aureus resistant to at least one antibacterial drug increased from 37 percent in 1988 to almost 64 percent in 1995. In the Netherlands, a study of Campylobacter isolated from poultry products between 1982 and 1989 showed that resistance to quinolones increased from 0 percent to 14 percent. Bacteria are becoming resistant to additional antibacterials. Some bacteria that were considered resistant to a particular antibacterial drug have developed resistance to additional antibacterials. For example, in 1989, a multiresistant clone of MRSA was detected in Spain and a multiresistant clone of penicillin-resistant S. pneumoniae was detected in Iceland. Similarly, a few cases of MRSA have exhibited an intermediate level of resistance to vancomycin, in addition to their resistance to many other antibacterials. Resistant bacteria are spreading. Over the past decade, a number of resistant bacteria are also believed to have spread around the world. Bacteria can be traced by their DNA patterns. Evidence that the DNA patterns of resistant bacteria from geographically diverse places are the same or very similar, combined with evidence that resistance in these bacteria was prevalent in one place and not the other, allows researchers to conclude that a bacterial clone has spread. With international travel and trade and the continuous exchange of bacteria among people, animals, and agricultural hosts and environments, resistant bacteria can spread from one country to another. For example, in 1989, a multidrug-resistant MRSA, known as the Iberian clone, was identified during an outbreak in Spain. This clone has spread to hospitals in Portugal, Italy, Scotland, Germany, and Belgium. 
In 1998, resistant Shigella on parsley entered the United States from Mexico, causing two outbreaks of shigellosis in Minnesota. Antibacterials are used around the world for a number of purposes in various settings, and their use can vary from country to country. Antibacterial drugs are used in both people and animals. Antiseptics and disinfectants are used in hospitals, homes, schools, restaurants, farms, food processing plants, water treatment facilities, and other places. While measures of total antibacterial use in most countries are not available, some data have been published on the total amount of antibacterials produced or sold in the United States. Figure 4 shows the total weight of antibacterial drugs (chemicals, not finished products) produced in the United States from 1950 to 1994. According to the Environmental Protection Agency (EPA), a total of 3.3 billion pounds of active ingredients were produced for disinfectants in 1995. We found no estimates of production, sales, or usage of antiseptics. Overall accumulations of antibacterial residue in soil, water, and food are unknown. However, studies have shown that while some antibacterial drugs are rapidly degraded in soil, others remain in their active form indefinitely and that 70 to 80 percent of the drugs administered on fish farms end up in the environment. Antibacterial drugs are used to prevent and treat disease in humans. NCHS estimates that from 1980 until 1997, the U.S. antibacterial drug prescription rate remained approximately constant at about 150 prescriptions per 1,000 physician office visits (see table 5). Since 1992, NCHS has collected data on drugs prescribed in hospital emergency and outpatient departments. 
These data indicate that in 1996, the last year for which all data are available, antibacterial drugs were prescribed 19 million times a year in emergency departments and 8 million times a year in outpatient departments, for a total of 133 million prescriptions for physician office, hospital emergency, and outpatient settings combined. In general, use of antibacterial drugs differs among the countries that have been studied. (Most countries studied are developed countries, but India, South Africa, several Latin American nations, and other less developed countries have also been studied.) For example, Japan and Spain have higher rates of cephalosporin sales than do the other countries studied. The Danish Antimicrobial Resistance Monitoring and Research Programme has reported that antibiotic consumption in Denmark’s primary care sector declined from 12.8 defined daily doses per 1,000 population in 1994 to 11.3 in 1997. Available reports indicate that the amount of antibacterial drug use per person in some other developed countries, such as Canada, is greater than in the United States. In less developed countries—including Kenya, Bangladesh, and Nigeria—use of some antibacterial drugs tends to be relatively great for the segment of the population who can afford them. Antibacterial drugs are used to prevent and treat disease in food animals, pets, and plants. Antibacterial drugs, often the same ones used to prevent and treat disease in humans, are also used in veterinary medicine, fish farming, beekeeping, and agriculture. Veterinarians prescribe antibacterial drugs to treat disease in food animals, such as cattle and swine, and in companion animals, such as dogs and cats. A variety of antibacterial drugs are available without prescription in feed stores and pet stores. 
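The 1996 prescription totals reported above imply a residual for physician offices, since the 133 million combined total spans office, emergency, and outpatient settings. A back-of-the-envelope check:

```python
# Arithmetic check on the 1996 NCHS antibacterial prescription totals cited
# above (figures as reported in the text).
emergency_dept = 19_000_000   # prescriptions in hospital emergency departments
outpatient_dept = 8_000_000   # prescriptions in hospital outpatient departments
combined_total = 133_000_000  # office + emergency + outpatient combined

# The remainder is attributable to physician office visits.
office_visits = combined_total - emergency_dept - outpatient_dept
print(office_visits)  # 106000000
```

That is, roughly 106 million of the 133 million prescriptions came from physician office visits, consistent with offices being by far the largest prescribing setting.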
Fish farmers who raise fish, such as salmon, catfish, and trout, put antibacterial drugs in water to treat bacterial infection; and beekeepers use antibacterial drugs to prevent and treat bacterial infection in honeybees. Antibacterial drugs are also sprayed on some fruits and vegetables, such as pears and potatoes, as well as on other crops, such as rice and orchids. Chemical industry sources estimated that in 1985, the total weight of antibacterial drugs used to treat and prevent disease in cattle, swine, and poultry in the United States was 13.8 million pounds, but they have not published more recent estimates. Antibacterial drugs are used to enhance the growth of food animals and other commercially important animals. Antibacterial drugs are also often administered in the United States as feed additives to enhance growth and increase feed efficiency. As feed additives, they are primarily used for food animals, such as livestock and poultry, but they are also given to other commercially important animals, such as mink. Many antibacterial drugs used to promote growth can be purchased without a prescription. Chemical industry sources estimated that in 1985, 4.5 million pounds of antibacterial drugs were used for growth enhancement in cattle, swine, and poultry. Some other developed countries, such as Canada, also use antibacterial drugs for growth enhancement. However, because of concerns about antibacterial resistance, several countries have banned certain uses of some drugs or particular drugs altogether. For example, Sweden banned all antibacterials for use in animal feed without prescription, and the European Union banned several specific antibacterial feed additives. FDA has efforts under way to determine if similar actions are warranted in this country. Antibacterials are applied to various surfaces and environments to inhibit bacterial growth. 
Antibacterials are also used to disinfect various surfaces and environments in institutional settings, such as hospitals and laboratories; in industrial settings, such as food processing and manufacturing plants; and in environmental health settings, such as water treatment facilities. They are also used as antiseptics to disinfect skin and wounds. The presence of antibacterials in hundreds of consumer products, including soaps, cat litter, cutting boards, and even ballpoint pens, contributes to the public’s exposure to them. According to industry sources, almost 700 new antibacterial products were introduced between 1992 and the middle of 1998. Many of these, such as cribs and toys, are for use by children. The American Academy of Pediatrics’ Committee on Infectious Diseases is conducting a study of the use and safety of antibacterials in these products and other consumer products, such as hand soaps, that children may come into contact with. Antibacterial residues in some foods are monitored, but little is known about other residues. USDA inspects meat and poultry for antibacterial residues and reports on all samples with detectable levels. However, the levels of antibacterials in food that might promote resistance are not known and, therefore, cannot be factored into the current limits. USDA also regularly tests samples of fruits and vegetables for contamination by certain pesticides, such as insecticides, but not for antibacterials. EPA assesses risks of toxicity, but not antibacterial resistance, from residues on fruits and vegetables using data collected by USDA. Residues can also end up in water and soil. Studies in Europe have shown that antibacterials can be found in bodies of water that supply drinking water. However, we know neither the extent to which antibacterials in the environment promote the development of resistance nor how much antibacterial residue ends up in the environment or in food (with the exception of meat) or drinking water. 
A number of federal agencies and international organizations that receive U.S. funds collect information about the number of resistant infections, the prevalence of resistant bacteria, the cost of treating resistant disease, and the use of antibacterials; some ongoing efforts involve collaboration among several agencies. In addition, nearly two dozen agencies are coordinated under the Committee on International Science, Engineering, and Technology of the White House National Science and Technology Council to address the threat of emerging infectious diseases, which includes drug-resistant infections. Efforts to improve existing data sources and to create new ones are under way at several agencies, and we expect that over the next few years new information will allow better characterization of the public health burden. Several agencies also have data or access to data that, although not originally intended for these purposes, could be used to learn more about the numbers of resistant infections, treatment costs, and usage of antibacterials. Table 6 summarizes the ongoing and newly initiated efforts of agencies to collect information as well as potential data sources. Although many studies have documented cases of infections that are difficult to treat because they are caused by resistant bacteria, the full extent of the problem remains unknown. The development and spread of resistant bacteria worldwide and the widespread use of various antibacterials create the potential for the U.S. public health burden to increase. A number of federal and federally funded agencies are collecting information about different aspects of antibacterial resistance, and some ongoing efforts involve collaboration among agencies. 
However, there is little information about the extent of the following: common diseases that can be caused by resistant bacteria, are acquired in the community, and do not typically result in hospitalization, such as otitis media; the development of resistant properties in bacteria that do not normally cause disease but that can pass these properties on to bacteria that do; antibacterial use, particularly in animals, and antibacterial residues in places other than food; and the development of resistant disease and resistant bacteria and the use of antibacterials globally. Without improvements in existing data sources and more information in these areas, it is not possible to accurately assess the threat to the U.S. public health posed by resistant bacteria. As you have requested, we will be conducting further studies to (1) explore options for improving existing data sources and developing new ones; (2) identify the factors that contribute to the development and spread of antimicrobial resistance; and (3) consider alternatives for addressing the problem. We provided a draft of this report to CDC, EPA, FDA, the Health Care Financing Administration (HCFA), the National Institutes of Health (NIH), USDA, and to experts at other agencies. In general, the agencies agreed with our findings. The Department of Health and Human Services (HHS) concurred with the information and conclusions presented in the report but “is concerned that the draft report . . . is not as unequivocal as it could be in stating the gravity of the problem.” While we recognize that resistant bacteria threaten public health, we concluded that currently available data on the public health and economic consequences of antibacterial resistance are too limited for us to characterize the full extent of the problem. The agencies also provided technical or clarifying comments, which we incorporated as appropriate. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Honorable Donna E. Shalala, Secretary of HHS; the Honorable Jeffrey Koplan, Director of CDC; the Honorable Jane Henney, Commissioner of FDA; the Honorable Nancy-Ann Min DeParle, Administrator of HCFA; the Honorable Harold Varmus, Director of NIH; the Honorable Carol Browner, Administrator of EPA; the Honorable Dan Glickman, Secretary of USDA; and other interested parties. We will make copies available to others upon request. If you or your staff have any questions, please contact me at (202) 512-7114 or Cynthia Bascetta, Associate Director, at (202) 512-7101. Other major contributors to this report are listed in appendix V. Although resistance has been observed in many kinds of microbes—including bacteria, viruses, parasites, and fungi—the scope of our work was limited to bacteria. The scope of our work was also limited to resistance to chemical antibacterials, although bacteria can be resistant to other phenomena, such as radiation or extremes of temperature. We focused on estimating the numbers of cases of illness and death caused by resistant bacteria and on estimating the costs of treating resistant infections; we did not, however, attempt to capture all aspects of the public health burden. Our focus is on what is known about the burden in the United States resulting from resistance, but we considered global developments in assessing the potential future burden. The federal efforts we examined include international activities assisted by federal funds. We did not attempt to examine all federal efforts related to antimicrobial resistance, but focused on efforts to collect and provide information on cases of resistant infections, resistance in bacteria, use of antibacterials, and the cost of treating resistant diseases. 
To conduct our work, we reviewed scientific and medical literature; identified sources of data; and consulted experts in government, including those at the Centers for Disease Control and Prevention (CDC), the National Institutes of Health, the Food and Drug Administration (FDA), the Health Care Financing Administration, the Agency for Health Care Policy and Research, the Environmental Protection Agency (EPA), the U.S. Department of Agriculture (USDA), the Department of Veterans Affairs, the Department of Defense (DOD), the U.S. Agency for International Development, and the World Health Organization. We also consulted experts in academia and private industry. We did not conduct our own statistical analyses to estimate the public health burden or independently verify the databases or analyses of others. We conducted our work between June 1998 and April 1999 in accordance with generally accepted government auditing standards. Bacteria are single-celled microbes that exist almost everywhere—in water, soil, plants, animals, and humans. They can transfer between hosts and be carried across borders through travel and trade. They typically live as members of communities of different organisms, such as fungi and algae. Bacteria and other microbes that normally occupy a particular niche are referred to collectively as the microflora of that niche. These organisms compete with each other for nutrients, oxygen, and space. Those that do not compete successfully are likely to be eliminated from the habitat. A foreign microbe usually has difficulty establishing itself in a stable community for this reason. Preventing foreign microbes from colonizing a site of the body is one of the most important benefits provided by normal microflora to their hosts. 
If an environmental disturbance, such as the introduction of an antibacterial drug, changes the balance of the community by killing the microflora susceptible to the effects of the drug, resistant foreign bacteria would have the opportunity to grow in the community and possibly cause disease. Most bacteria are harmless, and some are even useful to their hosts. For example, some bacteria normally found in the digestive tracts of animals and people help their hosts to digest nutrients that are important sources of energy, proteins, and vitamins. While most bacteria are benign, others are capable of causing disease. For example, E. coli O157:H7—which can be found in the feces of healthy cattle and can transfer to people through contaminated undercooked ground meat or unpasteurized milk products and juices—produces a toxin that causes severe stomach and bowel disorders and can result in failure of the blood-clotting system, acute kidney failure, and even death. The same bacteria that can cause disease in an individual may also be part of that individual’s normal microflora. Enterococcus faecalis is part of the microflora of the human intestine and, until recently, was generally considered harmless. These bacteria are harmless while they remain in the intestine, but when they enter the bloodstream through a wound or as a complication of invasive medical procedures, they can cause a blood infection. Like other living things, as bacteria grow and multiply, they also evolve and adapt to changes in their surroundings, including the introduction of antibacterial drugs into their environment. Some bacteria may have mutations in their DNA that allow them to avoid the effects of the antibacterial and outgrow the other bacteria in the population. They may also acquire plasmids—small, circular, self-replicating DNA molecules in addition to their own chromosomes—carrying genes that confer resistance to specific antibacterials. 
Like the bacteria that move freely between hosts and environments, these plasmids can be transferred from one bacterium to another within a species and sometimes between certain species of bacteria. Laboratories may use different types of antibacterial susceptibility tests, which can produce varying results. Discrepancies in test results can have therapeutic consequences if testing indicates that a particular type of bacteria will be susceptible to a specific antibacterial while, in practice, the drug fails to eliminate the infection. In general, however, the drug of choice can treat the susceptible strains successfully. Even when a susceptible organism is not killed, the test has not necessarily failed to predict clinical susceptibility. Many other factors, including the site of the infection and the duration of treatment, can make susceptible bacteria appear clinically resistant. In addition to the use of different tests to determine resistance, countries currently follow a number of laboratory standards for interpreting the test results. One study found that Scandinavia, Germany, the Netherlands, the United Kingdom, and France all follow different standards. Spain and some other southern European countries largely follow the standards used in the United States. Therefore, the breakpoints—where lines are drawn to distinguish between susceptible and intermediate resistance or intermediate resistance and high resistance—can differ among various countries around the world, although data sets should be comparable at laboratory facilities that use the same method and standards over time. In addition to determining the clinical effect of antibacterials against bacteria, antibacterial susceptibility tests are used to detect the emergence and spread of resistance. 
While there is a lack of routine testing and systematic data collection on antibacterial resistance globally, existing data on resistant bacteria in particular hosts and from specific geographic locations show that a variety of resistant bacteria can be found in people and animals in many different areas around the world. The level of resistance, however, can vary among settings and geographic areas. For example, while vancomycin-resistant Enterococcus (VRE) occurs in both hospitalized and nonhospitalized individuals in Europe, a study of healthy individuals, hospitalized patients, and farm animals in Houston, Texas, indicates that in the greater Houston metropolitan area, VRE is rare or nonexistent among nonhospitalized people. Similarly, investigators from the SENTRY Antimicrobial Surveillance Program found that the proportion of VRE isolated from the bloodstream of patients in the United States during a 6-month period was about 18 percent, while none of the Enterococcus samples from Canada were vancomycin resistant. Much of the testing and surveillance is also conducted on patient samples, so the data do not reflect the levels of resistance for bacteria in all other environments. These efforts, however, provide some information about where resistant bacteria can be found. For example, the prevalence of methicillin-resistant S. aureus in Portuguese hospitals remained high, at 50 to 65 percent, between 1992 and 1995. In the United States, the National Antimicrobial Resistance Monitoring System—Enteric Bacteria, which tests Salmonella samples isolated from people, found that 21.7 percent of the Salmonella samples were resistant to streptomycin, while all were susceptible to ciprofloxacin. 
A DOD medical research unit in Peru tested disease-causing bacteria that affect the intestine and found that 38 percent of the Campylobacter samples were resistant to ciprofloxacin; 52 percent of the Shigella samples, 99 percent of the Salmonella samples, and 85 percent of the E. coli samples were resistant to azithromycin; and all Vibrio cholerae samples were sensitive to quinolones. CDC investigators tested Shigella from patients in outpatient clinics in Burundi and found that 100 percent were multidrug resistant. Testing of bacteria that colonize animals has also shown varying levels of resistance among different species of animals. For example, the April 1998 Report of the National Antimicrobial Resistance Monitoring System—Enteric Bacteria shows that for samples of Salmonella from sick animals, 75 percent of swine samples, 69 percent of turkey samples, 37 percent of cattle samples, 23 percent of horse samples, and 13 percent of chicken samples tested positive for resistance to tetracycline. The same samples were all susceptible to ciprofloxacin. Percentages were lower when samples from healthy animals were included. In the Netherlands, a study of bacterial samples taken from 23 dogs and 24 cats at an urban general veterinary practice showed that 48 percent of the dogs and 16 percent of the cats were colonized with VRE. This incidence of VRE in pets exceeded that among the people living in the same geographic area, which was 2 to 3 percent. In an effort to establish a baseline of resistance to therapeutic antibacterial agents among bacteria from food animals in Denmark, the Danish Integrated Antimicrobial Resistance Monitoring Programme tested indicator bacteria (such as E. coli and Enterococcus faecalis), zoonotic bacteria (such as Campylobacter jejuni), and animal pathogens (such as Actinobacillus pleuropneumoniae). 
The results from their study showed that resistance to all of the antibacterial agents could be found, although there were significant differences in the occurrence of resistance among different bacterial species. In addition to testing for resistance in bacterial samples from people and animals, some laboratories around the world are examining bacteria for the presence and transfer of specific resistance genes. Genetic exchanges do not occur indiscriminately within bacterial populations. Barriers to gene transfers—such as destruction of genes considered foreign by the host bacterium—can reduce the likelihood of successful transfer events. Nevertheless, data on the transfer of resistance genes between different kinds of bacteria can provide some information about where these genes may have been acquired and how they spread to different environments and geographic locations. A number of studies examining the DNA sequences of resistance genes show similarities among these genes in evolutionarily diverse bacteria, suggesting that some transfers have been occurring naturally between certain kinds of bacteria. For example, plasmids carrying resistance genes that were found in bacteria isolated from patients suffering from multiresistant Shigella infections on a Hopi Indian reservation in New Mexico appeared to come from multiresistant E. coli. Most studies on the exchange of resistance genes among different bacterial species have been conducted under laboratory-defined conditions. While some of these studies suggest that resistance genes can be transferred between certain species and even across bacterial genera, evidence of gene transfer in the laboratory demonstrates only that the transfer is possible, not whether that transfer will occur in nature. Many studies are also focused on bacteria isolated from patients. 
Even where there is surveillance for resistance, the surveillance systems tend to be limited to the monitoring of specific bacterial diseases, such as TB and gonorrhea, or disease-causing bacteria, such as S. pneumoniae. Therefore, less information is available on the prevalence of resistant genes in bacteria isolated from healthy people and that do not generally harm their primary host. Nevertheless, there is some evidence that resistance genes in these bacteria may play a role in the spread of antibacterial resistance. For example, an interspecies gene transfer appears to have occurred in the United States in 1979, when a multiresistant plasmid was identified in Kentucky in hospital patients and personnel infected with S. aureus. A year earlier, a like plasmid was isolated from Staphylococcus epidermidis on hospital patients, which suggests that the same plasmid was transferred from these bacteria to S. aureus. Bacteria from different body sites of one host may also exchange genes. For example, studies on tetracycline-resistant Bacteroides and Prevotella suggest that genetic exchange may occur between bacteria from the gastrointestinal tract and bacteria found in the mouth. In a study of gene transfers in simulated natural microenvironments, transfers were observed between bacteria from different hosts—cow E. coli to fish Aeromonas salmonicida in marine water, cow E. coli to human E. coli on a hand towel treated with cow’s milk, and pig E. coli to human E. coli on a cutting board. Resistant bacteria, therefore, are not only a potential cause of disease but also may be a source of resistance genes that can be transferred to benign and disease-causing bacteria of diverse origins. Antibacterials are recognized as major contributors in the development of antibacterial resistance. There are many kinds of antibacterials, varying in how they are used and in the agencies that have jurisdiction over them. 
Both the amount and usefulness of information on the quantities of antibacterials used are limited. Pharmacologists and physicians recognize several classes of antibacterial drugs, which differ in their mechanisms of action, killing or inhibiting the growth of bacteria in varied ways. Therefore, for a given kind of bacterial infection in a human, a particular antibacterial drug will usually be the drug of choice—or first-line treatment—with one or more second-line treatments usually available if the drug of choice cannot be used or fails to stop the infection. The therapeutic uses of antibacterial drugs are well known, but their preventive role may be less appreciated. About half of all antibacterial drugs used on surgical patients in large hospitals are used to prevent possible infections. The percentage of the antibacterial drugs prescribed outside the hospital for preventive as opposed to therapeutic purposes is unknown. Antibacterial drugs are also used to prevent and treat disease in plants and animals and to promote growth in food animals. Antiseptics and disinfectants are also used for a variety of purposes. For example, phenolic compounds, such as triclosan, are used in hand soaps and toothpastes; nitrogen heterocycles are used as preservatives in cosmetics and other products; sulfur compounds are used as food preservatives; and gaseous sterilants are often used in hospitals on equipment that cannot be sterilized at high temperatures. Other commonly used antiseptics and disinfectants include chlorine; ethyl alcohol; formaldehyde; hydrogen peroxide; and metal compounds, such as mercurochrome. In the United States, all drugs introduced into interstate commerce, including antibacterials used in human and animal medicine, are subject to FDA approval. All pesticides, including antibacterial drugs used on plants, must be registered with EPA. 
Most antibacterial drugs for human use require a prescription, but a few that are topically applied are available without a prescription. In some other countries, however, antibacterial drugs for humans that act systemically may be available without a prescription. Some antibacterial drugs for animal use require a prescription, but some are available without a prescription in pet stores and feed stores. FDA determines whether a prescription is required. FDA also has jurisdiction over other antibacterials that come in direct contact with people, such as antiseptic hand soaps. EPA has jurisdiction over those that do not, such as detergents, antibacterials used to impregnate cutting boards, and gases used to sterilize equipment. Some products do not neatly fall under a single agency. FDA and EPA are attempting to clarify some of the “gray area” between their respective jurisdictions, with special attention to those products that may come in contact with food. FDA requires manufacturers to maintain distribution records, including quantity, for drug products administered to humans and animals. These data are required to be reported annually to FDA, but FDA does not compile them to yield estimates of aggregate antibacterial drug usage. FDA’s Center for Drug Evaluation and Research, which handles human drugs, expects that when it moves to a planned new computer system and requires certain changes to the way marketing information is submitted, preparation of such estimates will be easier. FDA’s Center for Veterinary Medicine, which handles animal drugs, has initiated some special postapproval programs to monitor the use of fluoroquinolone antibacterials in poultry and cattle. The center is also changing the way marketing information is submitted and enhancing its database to facilitate development of information on antibacterial usage generally. 
EPA requires producers of pesticides, some of which are antibacterials, to report annually on the amounts of pesticide produced, distributed, and sold during the past year. It has provided usage estimates for some kinds of antibacterial pesticides. We found some data on usage, but different sources of data capture use in different ways, such as weight produced, weight sold, amount sold in dollars, number of prescriptions, and number of doses. The U.S. International Trade Commission annually published the weights of all antibiotics (chemicals, not finished products) produced in the country from 1950 to 1994. These figures do not necessarily indicate the amount of antibiotics used domestically, as some produced here may have been exported, and some produced elsewhere may have been imported. Although there is some indication of an increase in production over the years, the figures sometimes fluctuate for unknown reasons. For example, from 1993 to 1994, the weight almost tripled, from nearly 29 million pounds to 83 million pounds. Such fluctuations suggest that these figures should be interpreted with caution. Moreover, these figures reveal nothing about how much of each antibacterial drug is used in each setting at a given point in time and geographic location. In human medicine, antibacterial drugs are used in ambulatory settings (physicians’ offices, emergency rooms, and outpatient clinics) and inpatient settings (hospital wards and rooms). The National Center for Health Statistics (NCHS) estimates the use of commonly prescribed drugs in ambulatory settings for the country as a whole and for large geographic regions. Since 1980, NCHS has periodically collected data on drugs prescribed in physicians’ offices as part of its series of National Ambulatory Medical Care Surveys. Since 1992, NCHS has also collected data on drugs prescribed in hospital emergency and outpatient departments as part of the National Hospital Ambulatory Medical Care Survey. 
While NCHS does not survey hospitals to obtain national estimates of antibacterial drug use in inpatients, such estimates can be derived by combining NCHS’ estimates of the average inpatient population and data from CDC’s Intensive Care Antimicrobial Resistance Epidemiology (ICARE) project, which obtains usage rates aggregated over most antibacterial drugs from its 41 participating hospitals. When rates from the ICARE survey are projected to the entire population of U.S. hospitals, it is estimated that about 82 million daily doses of antibacterial drugs were administered in hospitals in 1995. This figure is an underestimate to the extent that the survey does not include all antibacterial drugs, and it is an overestimate to the extent that the hospitals in ICARE’s sample probably tend to use more antibiotics than does the average hospital. Records from pharmaceutical companies and large health care insurers or health plans may also contain information on drug use in ambulatory care but are not generally available to the public. FDA has, for the purpose of studying adverse drug reactions, obtained usage data from IMS Health, a company that collects them and sells them to firms in the pharmaceutical industry and to other customers. FDA, in collaboration with GAO, analyzed these data to estimate ambulatory use. The resulting estimates tend to be higher than those derived from the NCHS data and, unlike the NCHS data, decline over the years from 1993 to 1997. The reasons for these discrepancies include methodological differences in data collection and analysis. Other potential sources for human usage data include agencies that provide health care, such as DOD, the Department of Veterans Affairs, the Health Care Financing Administration, and various private managed care and health insurance plans. 
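The hospital-use figure derived from the ICARE data is a simple rate-times-population projection. The sketch below illustrates that arithmetic; all input values are hypothetical placeholders (the actual ICARE dosing rates and NCHS census figures are not reproduced here), chosen only so the result lands near the roughly 82 million doses the projection yielded.

```python
# Illustrative sketch of projecting a sampled usage rate to a national
# annual estimate. The inputs below are hypothetical, not the actual
# ICARE or NCHS figures.

def project_annual_doses(doses_per_patient_day, avg_inpatient_population):
    """Scale a per-patient-day dosing rate to an annual national total."""
    patient_days_per_year = avg_inpatient_population * 365
    return doses_per_patient_day * patient_days_per_year

# Hypothetical inputs: 0.45 daily doses per patient day, and an average
# daily inpatient census of 500,000 patients nationwide.
estimate = project_annual_doses(0.45, 500_000)
print(f"Projected annual doses: {estimate:,.0f}")  # prints 82,125,000
```

As the text notes, such a projection inherits the biases of the sample: it understates use to the extent the survey omits some drugs and overstates it to the extent the sampled hospitals use more antibiotics than average.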
These sources may not collect such data from everyone they serve or be able to provide nationally representative usage estimates, but the available data could be used to assess use in defined segments of the population. Companies that manufacture drugs for animals and plants do not usually publish production data, but the Animal Health Institute, an industry association, has released data on sales in dollars of antibacterials used in animals. In 1991, the last year for which the data were released, the amounts were $382 million for feed additives and $369 million for pharmaceuticals. Other data from the same source indicate that in the early 1980s, the total annual sales by weight for use in livestock and poultry varied between 10 million and 12 million pounds. Most cost-of-treatment studies are limited to infections acquired in hospitals—often in only one specific site of infection—and to a small number of cases in a single hospital. In addition, these studies generally use only hospital costs. The few exceptions that we identified are summarized below. A 1987 study reviewed 185 reports of investigations of bacterial infections in sporadic cases and outbreaks in hospital and community settings during the 1970s. According to the authors of the study, deaths, the likelihood of hospitalizations, and length of hospital stays were “usually at least twice as great” for patients infected with drug-resistant bacteria as for those infected with drug-susceptible bacteria. The study is limited by the small number of cases in any single outbreak report and by the small number of comparisons with case data that included both antimicrobial susceptibility or resistance and length of hospital stay. A 1989 study developed an economic model to determine the potential magnitude of the problem posed by drug-resistant bacteria and the data needed to provide a more definitive statement about its extent. 
The author concluded that the annual cost resulting from the reduced effectiveness of antimicrobial drugs “appears to be at least $100 million and may exceed $30 billion.” The 300-fold range comes from the author’s use in the economic model of differing estimates of (1) the occurrence of resistant disease and its case fatality rates, (2) antibiotic use, and (3) the value of human life. A 1995 report by the now defunct Office of Technology Assessment (OTA) applied the 1987 twofold length of hospital stays to the charges for extra days of hospitalization in three hospitals in 1975 resulting from five kinds of hospital-acquired infections caused by six bacteria—the number of which was first extrapolated from a group of sentinel hospitals to all U.S. hospitals—and then reduced to the fraction that were drug-resistant in hospitals in CDC’s National Nosocomial Infections Surveillance system. Using an estimate of $661 million for the extra charges for hospitalization in 1992 for these proportions of the five kinds of hospital-acquired bacterial infections, OTA doubled the costs and concluded that the extra hospital costs associated with five drug-resistant, hospital-acquired bacterial infections are $1.3 billion per year. The major contributors to this report are Angela Choy, Donald Keller, Michele Orza, and Richard C. Weston. Others who contributed include Claude Adrien, George Bogart, Natalie Herzog, Lynne Holloway, Erin Lansburgh, Stuart Ryba, and Karen Sloan. 
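The cost extrapolations described above reduce to simple arithmetic; the sketch below walks through both, using the dollar figures quoted from the 1989 model and the 1995 OTA report.

```python
# Arithmetic behind the two cost estimates quoted above; dollar figures
# are those cited in the text.

# 1989 economic model: bounds of the estimated annual cost of resistance.
low_estimate = 100e6    # "at least $100 million"
high_estimate = 30e9    # "may exceed $30 billion"
fold_range = high_estimate / low_estimate
print(f"The bounds span a {fold_range:.0f}-fold range")  # 300-fold

# 1995 OTA report: extra 1992 hospital charges for five kinds of
# hospital-acquired infections, doubled per the 1987 finding that
# stays were roughly twice as long for resistant infections.
extra_charges_1992 = 661e6  # "$661 million"
ota_annual_cost = 2 * extra_charges_1992
print(f"OTA estimate: ${ota_annual_cost / 1e9:.1f} billion per year")
```

The doubling step is the report's weakest link, since it applies a length-of-stay ratio observed in the 1970s to 1992 charge data.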
Pursuant to a congressional request, GAO provided information on the potential threat to the public's health from antimicrobial resistant bacteria, focusing on: (1) what is known about the public health burden--in terms of illnesses, deaths and treatment costs--due to antimicrobial resistance; (2) potential future burden, given what is known about the development of resistance in microbes and usage of antimicrobials; and (3) federal efforts to gather and provide information about resistance. GAO noted that: (1) although many studies have documented cases of infections that are difficult to treat because they are caused by resistant bacteria, the full extent of the problem remains unknown; (2) GAO found many sources of information about the public health burden in the United States attributable to resistant bacteria, but each source has limitations and provides data on only part of the burden; (3) the public health burden attributable to resistant tuberculosis and gonorrhea is relatively well characterized because nationwide surveillance systems monitor these diseases; (4) little is known about the extent of most other diseases that can be caused by resistant bacteria, such as otitis media (middle ear infection), gastric ulcers, and cystitis (inflammation of the bladder) because they are not similarly monitored; (5) the development and spread of resistant bacteria worldwide and the widespread use of various antibacterials create the potential for the U.S. 
public health burden to increase; (6) data indicate that resistant bacteria are emerging around the world, that more kinds of bacteria are becoming resistant, and that bacteria are becoming resistant to multiple drugs; (7) while little information is publicly available about the actual quantities of antibacterials produced, used, and present in the environment, it is known that antibacterials are used extensively around the world in human and veterinary medicine, in agricultural production, and in industrial and household products and that they have been found in food, soil, and water; (8) a number of federal agencies and international organizations that receive U.S. funds collect information about different aspects of antibacterial resistance, and some ongoing efforts involve collaboration among agencies; (9) the Centers for Disease Control and Prevention (CDC) is the primary source of information about the number of infections caused by resistant bacteria; (10) CDC also collects information on resistance found in bacterial samples and the use of antibacterial drugs in human medicine; (11) CDC, the Department of Agriculture, and the Food and Drug Administration are collaborating on efforts to monitor resistant bacteria that can contaminate the food supply; (12) the Department of Defense conducts surveillance for antibacterial resistance at 13 military sites in the United States and at its 6 overseas laboratories; and (13) the World Health Organization serves as a clearinghouse for data on resistance in bacteria isolated from people and animals from many different countries.
The Economy Act, as amended (31 U.S.C. 1535), authorizes the head of an agency to place an order with another agency for goods or services if, among other requirements, a decision is made that the items or services cannot be obtained by contract as conveniently or cheaply from a commercial enterprise. The interagency ordering practice authorized by the Economy Act, sometimes referred to as “contract off-loading,” can save the government duplicative effort and costs when appropriately used. Examples of appropriate use may include circumstances of one agency already having a contract for goods and services similar to those needed by another agency, or an agency having unique capabilities or expertise that qualify it to enter into or administer a contract. In July 1993, the Subcommittee on Oversight of Government Management, Senate Governmental Affairs Committee, held a hearing to examine the practice of off-loading at federal agencies and the abuses of this practice. Its hearing record, which included testimony from the Inspectors General of DOD, the Department of Energy, and the Tennessee Valley Authority, was critical of DOD’s and other agencies’ off-loading practices. Subsequently, the National Defense Authorization Act for Fiscal Year 1994 required the Secretary of Defense to prescribe regulations governing DOD’s use of the Economy Act that included specific statutory limitations intended to rectify identified abuses. The Volpe Center is a federally owned and operated facility located in Cambridge, Massachusetts, and was established in 1970 to fulfill the need of the newly formed Department of Transportation for an in-house systems research capability. Since then, the center’s research, analysis, and project management expertise has been applied to a wide variety of transportation and logistics problems. Its only funding is through formal reimbursable agreements negotiated with individual agencies for specific tasks. 
Initially, the center’s services were provided almost exclusively to the Office of the Secretary of Transportation and the operating administrations within the Department of Transportation. As its capabilities evolved and its systems approach became better known, demand grew within non-Department of Transportation agencies. Through a formal memorandum of understanding with DOD, the Secretary broadened the center’s mission in 1985 to include work on transportation and logistics problems facing other agencies, including the Joint Chiefs of Staff and the U.S. Transportation Command. Similar arrangements were made with civilian agencies. The Volpe Center’s current labor pool consists of about 1,500 personnel evenly divided among 3 labor categories: federal employees, on-site contractor employees, and off-site contractor employees. On-site contractors provide services in computer analysis, technical information support, and documentation support. The off-site contractor employees comprise a “multiple contractor resource base,” which allows quick, competitive access to a broad range of high technology capabilities and skills needed to meet the Volpe Center’s programmatic requirements. Volpe Center contracting is regulated by the Federal Acquisition Regulation. In response to an audit conducted by the Department of Transportation’s Inspector General, the Volpe Center issued formal work acceptance criteria in February 1995. According to Volpe Center management, the criteria are designed to assure that the center will not accept projects unless it can make substantive contributions derived from its status as part of the federal government. Examples of substantive contributions include project definition and planning in cooperation with the requesting agency, and support of contracts awarded and administered by the Volpe Center. 
In advance of promulgating regulations, the Secretary issued a policy memorandum in February 1994 that imposed limitations on the use of Economy Act orders by DOD activities. The Secretary’s policy, which addressed Economy Act orders released outside of DOD for contract action, was, however, more stringent than either the National Defense Authorization Act for Fiscal Year 1994 or the Economy Act in the area of cost considerations by requiring a determination that the supplies or services cannot be provided “as conveniently and cheaply” by contracting directly with a private source. The Authorization Act did not address this cost issue and the Economy Act uses the phrase “as conveniently or cheaply.” The Secretary’s use of the “and” rather than the “or” introduces more cost analysis into the decision-making process. The Secretary also changed the level of approval authority for Economy Act purchases. Instead of having contracting officers or other officials designated by the agency head approve Economy Act transactions, the Secretary’s memorandum placed the approval level no lower than a senior executive service official, a general or flag officer, or an activity commander. The Coast Guard, which is a component of the Department of Transportation, has acquired services from the Volpe Center. In November 1994, the Coast Guard issued an instruction providing guidance on its use of the center. The instruction established a review, justification, and approval process to ensure that acquisition of Volpe Center services are in the Coast Guard’s best economic interest. The instruction designates the Director of Finance and Procurement, a senior executive service position within the Office of the Chief of Staff, as the approving official for all Coast Guard work performed through the center. The guidance requires a demonstration that the cost to use the Volpe Center is at least roughly comparable to commercial cost. 
To document this comparability, Coast Guard sponsors must develop an independent estimate of expected project costs using recognized techniques such as engineering analysis, market research, or application of actual cost data from prior projects. While the Coast Guard instruction acknowledges that the Volpe Center offers convenience, it is Coast Guard policy that the center be used when there are clear economic, technical, and mission-essential reasons for doing so. For example, officials informed us that in one area of the country the Coast Guard is now completing 5 years of environmental compliance and restoration work with the Volpe Center. They explained that the center’s support was critical in the early years of this work, enabling the Coast Guard to gain an understanding of the various technologies involved in restoring contaminated areas at Coast Guard installations. Coast Guard officials said the service has acquired the technical expertise and is now ready, at least in that area of the country, to transition away from the center and contract directly with private companies for this work. The Air Force, Army, and Navy have each taken a different approach to implementing the Secretary’s policy memorandum. Collectively, however, they are producing similar mixed results. While there is considerable up-to-date guidance available to contracting officials on interagency purchases, not all of the DOD files on Volpe Center projects we reviewed contained required information. In addition, DOD has not yet implemented a statutorily mandated monitoring system for interagency purchases; the system is currently scheduled for implementation in October 1995. The Air Force introduced the Secretary of Defense’s policy changes in June 1994 through a revision to its Federal Acquisition Regulation Supplement. 
The supplement states that the Air Force shall not place an order with another agency unless adequate supporting documentation, including a Determination and Finding (D&F), is prepared. The D&F must be approved at a level no lower than senior executive service, flag or general officer, or activity commander. The activity’s contracting office is required to retain a record copy of each D&F in a central file. The supplement offers a model format for the D&F, which requires that 12 specific findings be listed, including 1 that states “the supplies or services cannot be provided as conveniently and more economically by private contractors under an Air Force contract.” The Army implemented the Secretary’s policy changes in an August 1994 policy letter from the Office of the Assistant Secretary, Army Contracting Support Agency. The letter states that before an Economy Act order for supplies or services is released outside DOD for contracting action, a written determination prepared by the requiring activity that addresses the elements in the Defense Secretary’s memorandum shall be approved by the head of the requesting agency or their designee. The D&Fs are required to be prepared in the same format required by the Air Force, to include that “the supplies or services cannot be provided at the time required and more economically by contractors under an Army contract.” In contrast to the Air Force and Army’s delegation of approval authority, the Navy initially did not delegate approval authority below the Assistant Secretary of the Navy for Research, Development, and Acquisition. Toward the end of 1994, the Assistant Secretary delegated approval authority to the Deputy for Acquisition and Business Management. In January 1995, as permitted by the Secretary of Defense’s memorandum, the Deputy redelegated authority to approve D&Fs to eight activities with contracting authority. 
However, approval authority for Economy Act orders placed with the Volpe Center and with agencies not subject to the Federal Acquisition Regulation was retained by the Deputy for Acquisition and Business Management. Despite efforts by the services to strengthen controls over Economy Act purchases, our review of fiscal year 1995 Air Force, Army, and Navy projects with the Volpe Center indicated that the controls were not fully implemented. Of the 13 purchase requests we reviewed, 7 lacked approved D&Fs. The results of our review are summarized in table 1. In two of the three Air Force cases where a D&F was not prepared, the project managers were not aware of the requirement to prepare a D&F; in the other case, a draft D&F was prepared by the requiring activity and reviewed by a contracting officer but never completed or signed. In the Army case where a D&F was not prepared, Army officials offered no explanation other than that they “just missed it.” One official suggested that some Army activities may not have understood the August 1994 policy letter. In one of the Navy cases without an approved D&F, ordering officials justified the transfer of 1995 funds on the basis of a D&F that covered 1993 and 1994 funding; subsequent to the transfer of funds, reviewing officials rejected this justification. The other two Navy cases involved purchases by a Marine Corps ordering activity. Similar to the Army case, Marine Corps officials explained that, regarding the preparation of D&Fs, the purchases “just fell through the cracks.” The documentation for the services’ projects with approved D&Fs showed different approaches to meeting the Defense Secretary’s requirement to elevate the consideration of cost. The Air Force D&Fs mainly emphasized that the estimated general and administrative expense rate of 9 percent charged by the Volpe Center appeared reasonable and did not exceed the actual cost of entering into and administering the interagency agreement under which the order is filled. 
The Air Force documentation also showed that business reviews were performed by the contracting officers; the business reviews indicated that independent government cost estimates had been completed. The documentation for the approved Army project computed the dollar value of the administrative fee and included an “information paper” prepared for the general officer who signed the D&F. The information paper indicated that the project would be transitioning from Volpe Center support to the Army’s on-site contractor support in about 4 months. The two approved Navy cases involved purchases by a Marine Corps ordering activity different from the one discussed above that lacked approved D&Fs. These purchases were covered by a D&F prepared shortly after the Secretary’s memorandum, but approved under the criteria in effect before the memorandum. Thus, the D&F did not contain a finding on the cost comparison cited in the Secretary’s memorandum. However, Navy officials said that they concurred with approvals such as these because, at that time, no new detailed implementing guidance was available to ordering activities. The National Defense Authorization Act for Fiscal Year 1994 directed that DOD establish a monitoring system for Economy Act purchases not later than 1 year after the November 30, 1993, enactment of the act. That monitoring system has not yet been implemented. An official from the Office of the Under Secretary of Defense for Acquisition and Technology informed us, however, that the monitoring system has been developed and is now awaiting approval. The monitoring system is currently scheduled for implementation in October 1995. DOD Economy Act orders placed with the Volpe Center peaked in fiscal year 1991 at $93.2 million, which accounted for about 39 percent of the center’s budget. By fiscal year 1994, DOD funding dropped to $26.5 million, which accounted for only about 13 percent of the center’s budget. 
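Taken together, the dollar figures and budget shares quoted above imply approximate totals for the Volpe Center’s overall budget. The sketch below back-computes them; the implied budget totals are derived for illustration and are not figures stated in this report.

```python
# Back-of-the-envelope check of the funding shares quoted above.
# The implied center budgets are derived values, not report figures.
dod_1991 = 93.2        # DOD orders, $ millions, fiscal year 1991
share_1991 = 0.39      # about 39 percent of the center's budget

dod_1994 = 26.5        # DOD orders, $ millions, fiscal year 1994
share_1994 = 0.13      # about 13 percent of the center's budget

implied_budget_1991 = dod_1991 / share_1991  # roughly $239 million
implied_budget_1994 = dod_1994 / share_1994  # roughly $204 million

print(round(implied_budget_1991), round(implied_budget_1994))
```

The back-calculation shows the decline was driven by DOD: the center’s implied total budget fell only modestly while DOD’s share dropped by about two-thirds.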
Funding transfers for the first 8 months of fiscal year 1995 indicate that DOD funding will only be one-half of fiscal year 1994 funding. The funding data are summarized in figure 1. It is difficult to pinpoint exact causes for the downward trend. However, the more recent declines may be a result of the 1993 Subcommittee hearing, resulting legislation, and the 1994 implementation of a more restrictive contracting environment by the Secretary of Defense. Coast Guard orders placed with the Volpe Center reached their highest levels in fiscal years 1992 and 1993 when over $21 million in new obligation authority was transferred each year. Funding dropped by almost half in fiscal year 1994. Fiscal year 1995 new obligation authority may be about half of the fiscal year 1994 total. The funding data are summarized in figure 2. As with the DOD data, it is difficult to identify exact causes for the downward trend. However, the November 1994 instruction with its cost and approval requirements may have been a contributing factor. FASA required that the Federal Acquisition Regulation be revised to include statutory requirements governing the exercise of Economy Act authority. The requirement is virtually identical to the one imposed on DOD by the National Defense Authorization Act for Fiscal Year 1994. In March 1995, a proposed regulation was published in the Federal Register. The proposed regulation requires a determination that the ordered goods or services cannot be provided by contract as conveniently or cheaply by the requesting agency from a commercial enterprise. FASA did not require the more stringent “and” language applicable within DOD. 
The proposed regulation places determination approval authority with the contracting officer or another official designated by agency regulation, except that, if the servicing agency is not covered by the Federal Acquisition Regulation, approval authority may not be delegated below the senior procurement executive of the requesting agency. Such procedures are consistent with FASA. FASA also requires that by mid-October 1995 the Administrator for Federal Procurement Policy establish a monitoring system for Economy Act purchases by federal civilian agencies, similar to the requirement for DOD. In commenting on a draft of this report, both the Departments of Defense and Transportation concurred with the report. Both suggested some technical changes to the draft, and we have incorporated them, where appropriate. DOD’s comments are presented in appendix I. The Department of Transportation’s comments were provided orally. We interviewed management officials and examined project management and budget documents, statements of work, cost summaries, military interdepartmental purchase requests, project plan agreements, and other program documentation. We performed work at the Department of Transportation’s Volpe National Transportation Systems Center, Cambridge, Massachusetts, and Headquarters, United States Coast Guard, Washington, D.C. We also contacted policy representatives within the Office of the Assistant Secretary of the Air Force for Acquisition; the Office of the Assistant Secretary of the Army for Research, Development, and Acquisition; and the Office of the Assistant Secretary of the Navy for Research, Development, and Acquisition. Our review was performed in accordance with generally accepted government auditing standards and includes information obtained through May 1995. 
We are sending copies of this report to the Chairman, Subcommittee on Oversight of Government Management and the District of Columbia, Senate Committee on Governmental Affairs; other interested congressional committees; and the Secretaries of Defense and Transportation. Copies will also be available to others on request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report were Charles W. Thompson, Paul M. Greeley, and Paul G. Williams.
Pursuant to a congressional request, GAO examined the: (1) impact of the Department of Defense's (DOD) policy changes for interagency orders on the Department of Transportation's Volpe National Transportation Systems Center; and (2) Coast Guard's recent initiatives and legislative changes extending the statutory requirements on interagency orders to other federal agencies. GAO found that: (1) because of past practices, the National Defense Authorization Act for Fiscal Year 1994 required the Secretary of Defense to issue regulations that strengthened controls over DOD's interagency orders for goods and services; (2) in a February 1994 memorandum, and in advance of the statutorily required regulations, the Secretary took additional steps to increase DOD's interagency transaction controls by requiring, among other things, that DOD's interagency orders be as convenient and cheap as other alternatives and approved at a level no lower than senior executive service, general officer, flag officer, or activity commander; (3) in November 1994, the Coast Guard independently developed reforms that paralleled these DOD initiatives; (4) DOD is still adjusting to the changes introduced by Congress and the Secretary; (5) there is an abundance of guidance available to Air Force, Army, and Navy contracting activities, but a sample of fiscal year (FY) 1995 Volpe Center purchases showed that not all files contained the information required by the Secretary's memorandum; (6) in addition, DOD has not yet implemented a statutorily mandated monitoring system for its interagency purchases; (7) the monitoring system is currently scheduled for implementation in October 1995; (8) DOD contracting with the Volpe Center has been declining since FY 1992; (9) while it is difficult to pinpoint exact causes for the downward trend, more recent declines appear to be a result of DOD's implementation of the more restrictive environment for interagency orders; (10) likewise, a similar recent decline in 
Coast Guard purchases at the Volpe Center appears to be related to the introduction of the Coast Guard reforms; (11) the Federal Acquisition Streamlining Act (FASA) generally extended the restrictive interagency transaction controls applicable to DOD to other federal agencies; and (12) the implementing draft regulation, while consistent with FASA, is not as stringent as the DOD or Coast Guard cost policies.
VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. It is the second largest federal department and, in addition to its central office located in Washington, D.C., has field offices throughout the United States, as well as the U.S. territories and the Philippines. The department has three major components that are primarily responsible for carrying out its mission: the Veterans Benefits Administration (VBA), which provides a variety of benefits to veterans and their families, including disability compensation, educational opportunities, assistance with home ownership, and life insurance; the Veterans Health Administration (VHA), which provides health care services, including primary care and specialized care, and performs research and development to serve veterans’ needs; and the National Cemetery Administration (NCA), which provides burial and memorial benefits to veterans and their families. Collectively, the three components rely on approximately 340,000 employees to provide services and benefits. These employees work in 167 VA medical centers, approximately 800 community-based outpatient clinics, 300 veterans centers, 56 regional offices, and 131 national and 90 state or tribal cemeteries. For fiscal year 2016, VA reported about $176 billion in net outlays, an increase of about $16 billion from the prior fiscal year. VBA and VHA account for about $102 billion (about 58 percent) and $72 billion (about 41 percent) of VA’s reported net outlays, respectively. The remaining net outlays were for NCA and VA’s administrative costs. The fiscal year 2017 appropriations act that covered VA provided approximately $177 billion to the agency, about a $14 billion increase from the prior fiscal year. As we recently reported, improper payments remain a significant and pervasive government-wide issue. 
Since fiscal year 2003—when certain agencies began reporting improper payments as required by IPIA— cumulative reported improper payment estimates have totaled over $1.2 trillion, as shown in figure 1. For fiscal year 2016, agencies reported improper payment estimates totaling $144.3 billion, an increase of over $7 billion from the prior year’s estimate of $136.7 billion. The reported estimated government-wide improper payment error rate was 5.1 percent of related program outlays. As shown in figures 2 and 3, the government-wide reported improper payment estimates—both dollar estimates and error rates—have increased over the past 3 years, largely because of increases in Medicaid’s reported improper payment estimates. For fiscal year 2016, overpayments accounted for approximately 93 percent of the government-wide reported improper payment estimate, according to www.paymentaccuracy.gov, with underpayments accounting for the remaining 7 percent. Although primarily concentrated in three areas (Medicare, Medicaid, and the Earned Income Tax Credit), the government-wide reported improper payment estimates for fiscal year 2016 were attributable to 112 programs spread among 22 agencies. (See fig. 4.) We found that not all agencies had developed improper payment estimates for all of the programs they identified as susceptible to significant improper payments. Eight agencies did not report improper payment estimates for 18 risk-susceptible programs. (See table 1.) As we have previously reported, the federal government faces multiple challenges that hinder its efforts to determine the full extent of and reduce improper payments. These challenges include potentially inaccurate risk assessments, agencies that do not report improper payment estimates for risk-susceptible programs or report unreliable or understated estimates, and noncompliance issues. 
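The year-over-year change and the outlay base implied by the 5.1 percent error rate follow from simple arithmetic. In the sketch below, the implied outlay total is a derived approximation, not a number stated in the agencies’ reporting.

```python
# Government-wide reported improper payment estimates, $ billions,
# as quoted above. The implied outlay base is a derived approximation.
est_fy2016 = 144.3
est_fy2015 = 136.7
error_rate = 0.051      # reported 5.1 percent of related program outlays

increase = est_fy2016 - est_fy2015         # just over $7 billion
implied_outlays = est_fy2016 / error_rate  # roughly $2.8 trillion

print(round(increase, 1), round(implied_outlays))
```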
For fiscal year 2016, VA’s reported improper payment estimate totaled $5.5 billion, an increase of about $500 million from the prior year. The reported VA improper payment error rate was 4.5 percent of related program outlays for fiscal year 2016, a slight increase from the 4.4 percent reported error rate for fiscal year 2015. As shown in table 2, VA’s Community Care and Purchased Long-Term Services and Support programs accounted for the majority of VA’s estimated improper payments. Specifically, for fiscal year 2016, VA’s reported improper payment estimate for VA’s Community Care was approximately $3.6 billion (about 65 percent of VA’s total reported improper payments estimate) and for VA’s Purchased Long-Term Services and Support was approximately $1.2 billion (about 22 percent of VA’s total reported improper payments estimate). As shown in figures 5 and 6, VA’s reported improper payment estimates have increased over the past 3 years, and the reported improper payment error rates have increased over the past 2 years. The significant increase in VA’s reported improper payment estimates and error rates primarily occurred, according to the VA OIG, because VA changed its sample evaluation procedures in fiscal year 2015, which resulted in more improper payments being identified. In response to a finding by the VA OIG, VA began classifying every payment as improper when it made a payment that did not follow all applicable Federal Acquisition Regulation (FAR) and Veterans Affairs Acquisition Regulation (VAAR) provisions. 
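The program shares cited above follow directly from the component and total estimates; a quick sketch using only figures from the text:

```python
# VA's fiscal year 2016 reported improper payment estimates, $ billions.
total_estimate = 5.5
community_care = 3.6
long_term_support = 1.2

cc_share = community_care / total_estimate        # about 65 percent
lt_share = long_term_support / total_estimate     # about 22 percent
combined = (community_care + long_term_support) / total_estimate

print(round(cc_share * 100), round(lt_share * 100), round(combined * 100))
```

The combined share, about 87 percent, is the figure cited later in the statement for the two programs together.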
The OIG reported that when those purchases do not follow applicable legal requirements, such as having FAR-compliant contracts in place, the resulting payments are improper because they “should not have been made or were made in an incorrect amount under statutory, contractual, administrative, or other legally applicable requirements, according to the definition of improper payments set forth in OMB Circular A-123, Appendix C.” As a result of the change in its sample evaluation procedures, VA reported significant increases in estimated improper payments for both its Community Care and Purchased Long-Term Services and Support programs. As shown in table 3, VA’s Community Care and Purchased Long-Term Services and Support programs’ reported improper payment error rates are the two highest reported error rates government-wide for fiscal year 2016. Specifically, VA’s Community Care and Purchased Long-Term Services and Support programs had reported improper payment error rates of about 75.9 percent and 69.2 percent, respectively. The reported improper payment error rates for these two programs were each over 45 percentage points higher than the reported improper payment error rate for the next highest federal program, the Department of the Treasury’s Earned Income Tax Credit program. In its fiscal year 2016 agency financial report, VA did not report improper payment estimates for four programs it identified as susceptible to significant improper payments. These four programs were Communications, Utilities, and Other Rent; Medical Care Contracts and Agreements; VA Community Care Choice; and payments made from the Veterans Choice Fund. Because VA did not report improper payment estimates for these risk-susceptible programs, VA’s improper payment estimate is understated and the agency is hindered in its efforts to reduce improper payments in these programs. 
In its fiscal year 2016 agency financial report, VA stated that it will report improper payment estimates for these programs in its fiscal year 2017 agency financial report. According to OMB guidance, to reduce improper payments, VA can use root cause analysis to identify why improper payments are occurring and develop effective corrective actions to address those causes. In addition, our two prior reports identified problems with how VA processed its claims to reasonably assure the accuracy of or eligibility for the disability benefits, increasing the risk of improper payments. VA can implement our recommendations from these two reports to better ensure the accuracy of or eligibility for disability benefits. Root cause analysis is key to understanding why improper payments occur and to developing and implementing corrective actions to prevent them. In 2014, OMB established new guidance to assist agencies in better identifying the root causes of improper payments and assessing their relevant internal controls. Agencies across the federal government began reporting improper payments using these more detailed root cause categories for the first time in their fiscal year 2015 financial reports. Figure 7 shows the root causes of VA’s estimated improper payments for fiscal year 2016, as reported by VA. According to VA’s fiscal year 2016 agency financial report, the root cause for over three-fourths of VA’s reported fiscal year 2016 improper payment estimates was program design or structural issues. As noted above, most of the improper payments occurred in VA’s Community Care and Purchased Long-Term Services and Support programs. In the fiscal year 2016 agency financial report, VA provided details on how it plans to correct some program design issues by making its procurement practices compliant with relevant laws and regulations. 
The agency stated that it has made certain changes, such as issuing new policies, that can reduce the amount of improper payments in this area. For example, in VA’s fiscal year 2016 agency financial report, VA stated that it issued guidance in May 2015 to appropriately purchase care, such as hospital care or medical services, in the community through the use of VAAR-compliant contracts. VA stated that the implementation of this guidance is ongoing with full impact and compliance anticipated during fiscal year 2017. According to VA’s fiscal year 2016 agency financial report, the second largest root cause for VA’s reported improper payments was administrative or process errors made by the federal agency. VA reported that most of these errors occurred in its Compensation program. These errors, such as failure to reduce benefits appropriately, affected the payment amounts that veterans and beneficiaries received. To address this root cause, VA stated in its fiscal year 2016 agency financial report that it is updating procedural guidance to reflect such things as changes in legislation and policy. In addition, VA stated that it will train employees on specific subjects related to errors found during improper payment testing and quality reviews. Accurate claim decisions help ensure that VA is paying disability benefits only to those eligible for such benefits and in the correct amounts. Thus, it is critical that VA follow its claims processes accurately and consistently. However, we previously reported problems with how VA processed its claims to reasonably assure the accuracy of or eligibility for disability benefits, increasing the risk of improper payments. In November 2014, we reported that while VA pays billions of dollars to millions of disabled veterans, there were problems with VA’s ability to ensure that claims were processed accurately and consistently by its regional offices. 
VA measures the accuracy of disability compensation claim decisions mainly through its Systematic Technical Accuracy Review (STAR). Specifically, for each of the regional offices, completed claims are randomly sampled each month and the data are used to produce estimates of the accuracy of all completed claims. In our November 2014 report, we reported that VA had not always followed generally accepted statistical practices when calculating accuracy rates through STAR reviews, resulting in imprecise performance information. We also identified shortcomings in quality review practices that could reduce their effectiveness. We made eight recommendations to VA to review the multiple sources of policy guidance available to claims processors and evaluate the effectiveness of quality assurance activities, among other things. In response to the draft report, VA agreed with each of our recommendations and identified steps it planned to take to implement them. To date, VA has implemented six of the report’s eight recommendations. For example, VA has revised its sampling methodology and has made its guidance more accessible. VA has initiated action on the remaining two recommendations related to quality review of the claims processes. VA reported that it is in the process of making systems modifications to its electronic claims processing system that will allow VA to identify deficiencies during the claims process. In addition, VA is developing a new quality assurance database that will capture data from all types of quality reviews at various stages of the claims process. VA stated that this new database will support increased data analysis capabilities and allow the agency to evaluate the effectiveness of quality assurance activities through improved and vigorous error rate trend analysis. VA stated that it anticipates deploying the systems modifications and the new quality assurance database by July 2017. 
In June 2015, we reported that VA’s procedures did not ensure that Total Disability Individual Unemployability (TDIU) benefit decisions were well-supported. To begin receiving and remain eligible for TDIU benefits, veterans must meet the income eligibility requirements. VA first determines a claimant’s income by requesting information on the last 5 years of employment on the claim form and subsequently requires beneficiaries to annually attest to any income changes. VA uses the information provided by claimants to request additional information from employers and, when possible, verifies the claimant’s reported income, especially for the year prior to applying for the benefit. In order to receive verification, VA sends a form to the employers identified on the veteran’s benefit claim and asks them to provide the amount of income earned by the veteran. However, VA officials indicated that employers provided the requested information only about 50 percent of the time. In our 2015 report, we reported that VA previously conducted audits of beneficiaries’ reported income by obtaining income verification matches from Internal Revenue Service (IRS) earnings data through an agreement with the Social Security Administration (SSA), but was no longer doing so despite the standing agreement. In 2012, VA suspended income verification matches in order to develop a new system that would allow for more frequent, electronic information sharing. VA officials told us that they planned to roll out a new electronic data system that would allow for compatibility with SSA data sources in fiscal year 2015. They noted that they planned to use this system to conduct more frequent and focused income verifications to help ensure beneficiaries’ continued entitlement. VA officials also anticipated being able to use the system to conduct income verifications for initial TDIU applicants. 
However, at the time of our 2015 report, VA could not provide us with a plan or timeline for implementing this verification system. In the 2015 report, we recommended that VA verify the self-reported income provided by veterans (1) applying for TDIU benefits and (2) undergoing the annual eligibility review process by comparing such information against IRS earnings data, which VA currently has access to for this purpose. To date, VA is developing processes to use IRS earnings data from SSA in verifying income eligibility requirements. According to VA, in February 2016, it launched a national workload distribution tool within its management system to improve its overall production capacity and assist with reaching claims processing goals; VA will use this tool in implementing our recommendation. To determine if new beneficiaries are eligible for TDIU benefits, VA stated that it is expanding the data-sharing agreement with SSA to develop an upfront verification process. Specifically, when VA receives a TDIU claim, it will electronically request the reported IRS income information from SSA and receive a response within 16 days. In addition, according to VA, it is also planning to begin a process for checking incomes of veterans to determine whether they remain eligible for TDIU benefits. Specifically, VA has reinstituted the data match agreement with SSA that was set to expire in December 2016 to allow VA to compare reported income earnings of TDIU beneficiaries to earnings actually received. According to VA, it also has drafted a new guidance manual for the annual eligibility review process. VA stated that it planned to fully implement the upfront and annual eligibility verification processes by the summer of 2017. In conclusion, in light of VA’s significant financial management challenges, we continue to be concerned about VA’s ability to reasonably ensure its resources are being used cost-effectively and efficiently. 
Because VA’s payment amounts are likely to increase with the increase in appropriations for fiscal year 2017, it is critical that VA take actions to reduce the risks of improper payments. While VA has taken several actions to help prevent improper payments, further efforts are needed to help minimize the risks of improper payments across its programs. Chairman Bergman, Ranking Member Kuster, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Beryl H. Davis, Director, Financial Management and Assurance, at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Matthew Valenta (Assistant Director), Daniel Flavin (Analyst in Charge), Marcia Carlsen, Francine Delvecchio, Robert Hildebrandt, Melissa Jaynes, Jason Kelly, and Jason Kirwan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For several years, GAO has reported in its audit reports on the consolidated financial statements of the U.S. government that the federal government is unable to determine the full extent to which improper payments occur and reasonably assure that actions are taken to reduce them. Strong financial management practices, including effective internal control, are important for federal agencies to better detect and prevent improper payments. VA faces significant financial management challenges. In 2015, GAO designated VA health care as a high-risk area because of concern about VA's ability to ensure that its resources are being used cost effectively and efficiently to improve veterans' timely access to health care and to ensure the quality and safety of that care. Further, improving and modernizing federal disability programs has been on GAO's high-risk list since 2003, in part because of challenges that VA has faced in providing accurate, timely, and consistent disability decisions related to disability compensation. In addition, in VA's fiscal year 2016 agency financial report, the independent auditor cited material weaknesses in internal control over financial reporting. This statement discusses improper payments on both the government-wide level and at VA. The statement also discusses certain actions that VA has taken and other actions that VA can take to reduce improper payments. This statement is based on GAO's recent work on improper payments and its analysis of agency financial reports and VA's Office of Inspector General reports. Improper payments, which generally include payments that should not have been made, were made in the incorrect amount, or were not supported by sufficient documentation, remain a significant and pervasive government-wide issue. Since fiscal year 2003—when certain agencies began reporting improper payments as required by the Improper Payments Information Act of 2002—cumulative improper payment estimates have totaled over $1.2 trillion. 
For fiscal year 2016, agencies reported improper payment estimates totaling $144.3 billion, an increase of about $7.6 billion from the prior year's estimate of $136.7 billion. For fiscal year 2016, the Department of Veterans Affairs' (VA) reported improper payment estimate totaled $5.5 billion. VA's Community Care and Purchased Long-Term Services and Support programs accounted for reported improper payment estimates of $3.6 billion and $1.2 billion, respectively, or about 87 percent of VA's reported improper payment estimate for fiscal year 2016. VA's reported improper payment estimates increased significantly from $1.6 billion for fiscal year 2014 to $5.0 billion for fiscal year 2015. According to the VA Office of Inspector General, this increase was primarily due to a change in VA's evaluation procedures, which resulted in more improper payments being identified. In accordance with Office of Management and Budget guidance, to reduce improper payments, VA can use detailed root cause analysis to identify why improper payments are occurring and to develop corrective actions. For example, according to VA, the root cause for over 75 percent of VA's reported improper payments for fiscal year 2016 was program design or structural issues. Most of these errors occurred in VA's health care area. To reduce these improper payments, VA stated that it will make its procurement practices compliant with Federal Acquisition Regulation provisions. GAO has also recommended steps that VA can take to reduce the risk of improper payments related to disability benefits. For example, in November 2014, GAO reported that VA had shortcomings in quality review practices that could reduce its ability to ensure accurate and consistent processing of disability compensation claim decisions, and GAO made eight related recommendations to improve the program. 
To date, VA has implemented six of the report's eight recommendations and expects to implement the other two recommendations related to the effectiveness of quality assurance activities later this summer.
Long-term services and supports (LTSS) include many types of health and health-related services for individuals of all ages who have limited ability to care for themselves because of physical, cognitive, or mental disabilities or conditions. Individuals needing LTSS have varying degrees of difficulty performing activities of daily living (ADL), such as bathing, dressing, toileting, and eating, without assistance. They may also have difficulties with instrumental activities of daily living (IADL), such as preparing meals, housekeeping, using the telephone, and managing money. Assistance for such needs takes many forms and is provided in varied settings, including care provided in institutional settings, such as nursing homes; services provided in community-based settings, such as adult foster care; and in-home care. Home- and community-based services (HCBS) cover a wide range of services and supports to help individuals remain in their homes or live in a community setting, such as personal care services to provide assistance with ADLs or IADLs, assistive devices, respite care for caregivers, and case management services to coordinate services and supports that may be provided from multiple sources. While a variety of sources are used to pay for LTSS, Medicaid is the largest. States and the federal government share responsibility for Medicaid costs. In general, state Medicaid spending for medical assistance is matched by the federal government, at a rate that is based in part on each state’s per capita income according to a formula established by law. The federal share of Medicaid expenditures, known as the federal medical assistance percentage (FMAP), typically ranges from 50 to 83 percent. Although Medicaid is jointly financed by the states and the federal government, it is directly administered by the states, with oversight from CMS, within the Department of Health and Human Services (HHS). 
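The matching formula described above (the federal share falls as a state's per capita income rises, bounded between 50 and 83 percent) can be sketched as follows. This is a minimal illustration based on the statutory formula in section 1905(b) of the Social Security Act, which compares the square of state per capita income with the square of national per capita income; the function name is illustrative.

```python
def fmap(state_pci: float, national_pci: float) -> float:
    """Federal medical assistance percentage under the statutory
    formula (Social Security Act sec. 1905(b)): the federal share
    falls as the square of a state's relative per capita income
    rises, clamped to the 50-83 percent range noted in the text."""
    share = 100.0 - 45.0 * (state_pci / national_pci) ** 2
    return max(50.0, min(83.0, share))
```

A state at exactly the national average income would receive a 55 percent match, while lower-income states are held at the 83 percent ceiling and higher-income states at the 50 percent floor.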
For the most part, individuals who qualify for and receive Medicaid coverage of LTSS are age 65 or older, disabled, or blind. Such individuals typically qualify for Medicaid coverage of LTSS on the basis of their eligibility for the federal Supplemental Security Income (SSI) program, a means-tested income assistance program that provides cash benefits to individuals who meet certain disability criteria and have low levels of income and assets. States may also require individuals to meet state-defined level-of-care criteria for Medicaid coverage of certain LTSS. These criteria, which generally include some measures of an individual’s functional limits, help states manage overall service utilization and therefore costs. For decades, the majority of Medicaid LTSS expenditures have been for care provided in institutional settings, but Medicaid spending for HCBS has been steadily increasing as states invest more resources in alternatives to institutional care. Under Medicaid, coverage of certain institutional services is mandatory, while coverage of nearly all HCBS is optional for states. Since the Medicaid program was first established in 1965, states have been required to cover nursing facility care for all Medicaid beneficiaries age 21 and older. States may also offer other types of institutional care under their Medicaid programs, including care provided in intermediate-care facilities for individuals with intellectual disabilities and care provided for individuals age 65 or older and certain individuals under age 22 in institutions for mental diseases. Medicaid initially provided limited coverage for care provided in community settings or in the home, but numerous changes to federal Medicaid law since the program’s inception have expanded states’ options for covering HCBS. States have taken advantage of the new options, and since 1995, Medicaid spending for HCBS has steadily increased by 1 to 3 percentage points each year. 
In fiscal year 2009, total Medicaid expenditures for LTSS were $127.1 billion. Of this amount, about $55.9 billion was for HCBS, which was about 44 percent of all Medicaid LTSS spending that year, up from 18 percent in 1995. States’ ability to leverage federal Medicaid funding for the provision of HCBS can help them achieve compliance with the Olmstead decision, which outlined the scope and nature of states’ obligations to provide HCBS for individuals with disabilities; however, state spending on HCBS as a percentage of total LTSS spending varies widely. States have considerable flexibility in designing their Medicaid programs. Within broad federal guidelines, each state establishes its own eligibility standards; determines the type, amount, duration, and scope of covered services; and sets provider payment rates. In 2009, state spending on HCBS as a percentage of total LTSS spending ranged from 14.4 percent in Mississippi to 83.2 percent in New Mexico. (See fig. 1.) States have covered HCBS through a wide and complex range of options within Medicaid, including through state plan benefits and through waivers. A state Medicaid plan defines how the state will operate its Medicaid program, including which populations and services are covered. States are required by federal Medicaid law to cover certain mandatory benefits in their state Medicaid plan. For example, all states are required to offer the Home Health benefit to all individuals entitled to nursing facility coverage under the state’s Medicaid plan. Services that may be covered under this benefit include nursing, home health aides, medical equipment, and therapeutic services. States may also elect to cover other HCBS through optional benefits. For example, states have the option to offer the Personal Care benefit, which covers assistance with ADLs and IADLs, furnished either at home or in another location. 
According to a recent study, 33 states and the District of Columbia offered the Personal Care benefit in 2008. Changes a state wishes to make to its state Medicaid plan, including adding an optional state plan benefit, must be submitted to CMS for review and approval in the form of a proposed state plan amendment. With certain exceptions, services provided through state plan benefits (both mandatory and optional) must (1) be sufficient in amount, duration, and scope to reasonably achieve their purposes; (2) be comparable in availability among different groups of enrollees; (3) be offered statewide; and (4) allow beneficiaries freedom of choice among health care providers or managed care entities participating in Medicaid. States have also covered HCBS for Medicaid beneficiaries through waivers. Waivers can allow states to provide services not otherwise covered by Medicaid to designated populations who may or may not otherwise be eligible for Medicaid services. If approved, a waiver may allow a state to limit the availability of services geographically, target services to specific populations or conditions, control the number of individuals served, and cap overall expenditures—actions that are generally not otherwise allowed under the federal Medicaid law, but which may enable states to control costs. States must submit their waiver requests to CMS for approval. The 1915(c) waiver, authorized under section 1915(c) of the Social Security Act, is the primary means by which states provide HCBS for Medicaid beneficiaries and accounts for the large majority of state Medicaid HCBS expenditures. Under 1915(c) waivers, states may cover a broad range of services to participants, as long as these services are required to prevent institutionalization; thus to be eligible, individuals must meet the state’s level-of-care criteria for institutional care. 
Included among the services that may be provided are homemaker/home health aide, personal care, adult day health, and other services as approved by the Secretary of HHS. States can have multiple 1915(c) waivers that target different populations, for example, one for individuals with developmental disabilities and another for individuals with physical disabilities. In fiscal year 2010, 47 states and the District of Columbia operated 318 1915(c) waiver programs, expending over $35 billion, according to a study using CMS data. PPACA created two new Medicaid options for states to cover HCBS—Community First Choice and the Balancing Incentive Program—and amended two existing Medicaid HCBS options—the 1915(i) state plan option and Money Follows the Person. Community First Choice is a new optional state plan benefit created by PPACA to finance home- and community-based attendant and other services for Medicaid beneficiaries. Community First Choice became effective October 1, 2011. The Balancing Incentive Program is a new time-limited program established by PPACA to help increase access to HCBS for beneficiaries. The Balancing Incentive Program became effective October 1, 2011, and expires September 30, 2015. The 1915(i) state plan option was established by the Deficit Reduction Act of 2005 as a new optional state plan benefit under section 1915(i) of the Social Security Act. The 1915(i) state plan option provides states with a way to offer beneficiaries a comprehensive package of HCBS under a state plan option. One important distinction from 1915(c) waivers is that individuals qualifying for services under the 1915(i) state plan option do not need to meet the state’s institutional level of care criteria to receive HCBS. 
However, a state that offers services under the 1915(i) state plan option must establish needs-based criteria for determining eligibility for services under the option that are less stringent than the state’s criteria for determining eligibility for institutional care. Five states—Colorado, Iowa, Nevada, Washington, and Wisconsin—had offered 1915(i) prior to the changes to the option made by PPACA. These revisions included expansions to the scope of covered services and eligibility requirements, among other changes, and became effective October 1, 2010. Money Follows the Person was established by the Deficit Reduction Act of 2005 as a demonstration grant program to support states’ transition of eligible individuals who want to move from institutional settings back to the community. Each state’s Money Follows the Person program consists of a transition program, to identify Medicaid beneficiaries living in institutions who wish to live in the community and help them do so, and a rebalancing program for states to make systemwide changes to support Medicaid beneficiaries with disabilities living and receiving services in the community. A total of $1.75 billion in federal funds was appropriated for Money Follows the Person for fiscal years 2007 through 2011, and CMS awarded Money Follows the Person grants to 30 states and the District of Columbia in 2007. PPACA extended the program through 2016 and provided additional funding to continue the demonstration. The changes made by PPACA, which included an expansion of the eligibility requirements, became effective April 22, 2010. The four PPACA options include new incentives and flexibilities to help states increase the availability of HCBS for Medicaid beneficiaries. Three of the options—Community First Choice, Balancing Incentive Program, and Money Follows the Person—provide states with financial incentives in the form of enhanced federal matching funds for HCBS. 
All four options allow states flexibility in designing their coverage of services and implementing HCBS. For example, the revised 1915(i) state plan option allows states to design benefit packages to meet the needs of particular groups. In addition, three of the options have maintenance of effort or eligibility requirements that require states to sustain or increase HCBS expenditures or maintain existing eligibility standards, methodologies, or procedures as a condition of receiving enhanced federal funding, which should help to ensure that the options increase the availability of services. These options also include evaluation components or data reporting requirements that may help discern the extent to which the options have increased the availability of HCBS for beneficiaries. For a summary of specific features of the four options, see appendix I. Community First Choice provides incentives for states to finance attendant and other services. Community First Choice provides states with a 6 percentage point increase in their FMAP for home- and community-based attendant and other services provided to beneficiaries. Under the benefit, states must cover services to help individuals accomplish ADLs and IADLs and health-related tasks and services to support the acquisition or maintenance of skills necessary for individuals to accomplish ADLs and IADLs. Beyond personal care services, states must also cover back-up systems, such as personal emergency response systems, pagers, or other mobile electronic devices, to ensure continuity of services in the event that providers of services and supports are not available. States must also cover voluntary training for individuals on how to select, manage, and dismiss their personal attendants. Community First Choice also allows states the flexibility of covering transition costs, such as rent and utility deposits, and other expenditures that allow for greater independence, such as nonmedical transportation services. 
PPACA included several requirements for Community First Choice. Structured as a state plan benefit, Community First Choice does not allow states to set ceilings on the number of people who can receive services and requires services to be offered statewide. Further, unlike other HCBS options that states may use to cover personal care services, such as 1915(c) waivers and the 1915(i) state plan option, which allow states significant flexibility to restrict the type of services available, Community First Choice requires states to provide a specified set of HCBS. CMS described Community First Choice as a “robust” service package. Also, states offering Community First Choice must adhere to maintenance of effort requirements. Specifically, for the first full fiscal year the option is implemented, participating states must maintain or exceed the preceding year’s level of expenditures for personal care services. Additionally, data reporting requirements included in the law may shed some light on the extent to which states are covering additional individuals as a result of the option. States that offer Community First Choice must report the number of individuals who received services under the option the preceding fiscal year and whether they had been previously served under the state plan or waivers, such as the personal care benefit, 1915(c) waivers, and 1915(i) state plan benefit. PPACA also requires the Secretary of HHS to conduct an evaluation of Community First Choice to determine (1) the effectiveness of the provision of services in allowing individuals to lead independent lives, (2) the impact of the services on individuals’ physical and emotional health, and (3) the cost of services provided under the option compared with the cost of institutional care. Balancing Incentive Program incentivizes certain states to rebalance their LTSS systems toward home- and community-based care. 
The Balancing Incentive Program offers a targeted increase in FMAP to states in which less than 50 percent of LTSS expenditures are for HCBS and that undertake certain structural reforms to their Medicaid programs to increase access to HCBS. Under the program, states that spent under 25 percent of their LTSS expenditures on HCBS in fiscal year 2009 qualify for a 5 percentage point increase in their FMAP for state HCBS expenditures, and states that spent between 25 and 50 percent are eligible for a 2 percentage point increase. Participating states are required to make three structural changes to their LTSS programs to help increase access to HCBS. They must establish (1) a “no wrong door/single-entry point system” to enable consumers to access all long-term services and supports; (2) conflict-free case management services in which the persons responsible for assessing the need for services and developing plans of care are not related to or financially responsible for the individual, or are not a provider of services for the individual; and (3) a standardized assessment instrument to determine eligibility for HCBS. States receiving a 5 percentage point increase in FMAP must achieve a rebalancing benchmark of 25 percent of total Medicaid LTSS expenditures for HCBS by the program’s end, September 30, 2015; and similarly, states receiving a 2 percentage point increase in FMAP must achieve a rebalancing benchmark of 50 percent by then. PPACA set a limit of $3 billion in enhanced FMAP payments for the Balancing Incentive Program; funds from enhanced FMAP must be used to provide new or expanded offerings of HCBS. States participating in the Balancing Incentive Program must meet maintenance of eligibility requirements that prohibit the state from applying methodologies or procedures for determining eligibility for HCBS that are more restrictive than the eligibility methodologies or procedures in effect on December 31, 2010. 
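The tiered FMAP increases and their matching rebalancing benchmarks described above can be sketched as a small decision function. This is a hedged illustration of the tiers as stated in this section; the function names are illustrative, and the treatment of a state sitting exactly at a 25 percent share is an assumption, since the text does not spell out that boundary case.

```python
def bip_fmap_increase(hcbs_share_fy2009: float) -> float:
    """FMAP increase (percentage points) under the Balancing
    Incentive Program tiers described in the text. Placing a
    state exactly at 25 percent in the 2-point tier is an
    assumption; the text does not state the boundary case."""
    if hcbs_share_fy2009 < 25.0:
        return 5.0
    if hcbs_share_fy2009 < 50.0:
        return 2.0
    return 0.0  # states already at or above 50 percent do not qualify


def bip_rebalancing_benchmark(fmap_increase: float) -> float:
    """HCBS share of total Medicaid LTSS spending each tier must
    reach by the program's end, September 30, 2015."""
    return 25.0 if fmap_increase == 5.0 else 50.0
```

For instance, a state that spent 41.2 percent of its LTSS dollars on HCBS in fiscal year 2009 would fall in the 2 percentage point tier and face a 50 percent benchmark, consistent with the New Hampshire figures reported later in this statement.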
In addition, states must collect data on services, quality, and outcomes and inform CMS on a quarterly basis how they are collecting these data. Outcome measures to be collected include measures of beneficiary and family caregiver experience with providers and satisfaction with services; and measures for achieving desired outcomes appropriate to a specific beneficiary, including employment, participation in community life, health stability, and prevention of loss in function. PPACA revisions to 1915(i) state plan option provide increased flexibility to offer new services to targeted populations. While several features of the 1915(i) state plan option remain the same—including its availability to individuals not needing an institutional level of care and its lack of an enhanced matching rate—PPACA made several changes to the option that provide states with increased flexibility in designing their benefit packages. First, PPACA expanded the range of services previously available under the 1915(i) benefit. Formerly, states that offered the 1915(i) could cover only those services explicitly identified in the statute, which among other services included homemaker/health aide, case management, personal care, and respite care. PPACA revised the option to allow states to offer services not specifically identified in the law if approved by CMS, as they are able to do under 1915(c) waivers. Second, as a result of the changes in PPACA, states are able to offer HCBS to specific, targeted populations. States may offer 1915(i) service packages that differ in type, amount, duration, or scope to specific population groups, either through one service package or through multiple 1915(i) service packages. For example, a state could have one 1915(i) benefit package specifically for individuals with chronic mental illness and another for children with autism. 
Third, PPACA expanded income eligibility for the option by allowing states to offer the benefit to individuals with incomes up to 300 percent of the SSI benefit rate if they are also eligible for HCBS under certain waivers, which may require the individual to meet the state’s institutional level of care criteria. The law also allows states to expand Medicaid eligibility to individuals with income up to 150 percent of the federal poverty level who are eligible to receive HCBS under the 1915(i) state plan option. Although PPACA provided new flexibility to states under the 1915(i) option, the law also eliminated the ability states had previously under 1915(i) to limit the number of individuals who could receive services and to offer services in selected geographic areas. States that offer 1915(i) are required to report the number of individuals projected to be served under the option. PPACA extension of Money Follows the Person included additional funding and some new flexibility. PPACA extended the Money Follows the Person demonstration program, which was scheduled to expire in 2011, for 5 years through fiscal year 2016. PPACA appropriated $450 million for the program annually for each of fiscal years 2012 through 2016, for a total of $2.25 billion. Most features of the demonstration program were unchanged by PPACA, including the program’s enhanced FMAP of up to 90 percent for certain services for 12 months for each Medicaid beneficiary transitioned. One change PPACA did make was to relax one of the eligibility requirements for Money Follows the Person. Under the original program, an individual had to reside for not less than 6 months but no more than 2 years in an inpatient facility, such as a nursing facility, to be eligible to receive services. PPACA shortened this minimum residency requirement from 6 months to 90 consecutive days. 
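The demonstration's enhanced match, capped at 90 percent as noted above, is computed under the Deficit Reduction Act of 2005 as the state's regular FMAP plus half of the gap between that FMAP and 100 percent. A minimal sketch, with an illustrative function name:

```python
def mfp_enhanced_fmap(regular_fmap: float) -> float:
    """Money Follows the Person enhanced match: the state's
    regular FMAP plus half the gap to 100 percent, capped at the
    90 percent ceiling noted in the text (formula from the
    Deficit Reduction Act of 2005)."""
    return min(90.0, regular_fmap + (100.0 - regular_fmap) / 2.0)
```

A state with a 50 percent regular FMAP would thus receive a 75 percent match on qualifying services for transitioned beneficiaries, while high-FMAP states are held at the 90 percent ceiling.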
Some of the initial Money Follows the Person grantees reported that the 6-month institutional residency requirement was a barrier to recruitment because many candidates interested in transitioning had not been institutionalized long enough to qualify and individuals who do meet the requirement often have complex medical or mental health needs that make it more difficult to serve them in the community. Some states have transition programs that have less stringent institutional residency requirements. The reduction in the institutional residency requirement in Money Follows the Person may potentially increase the number of individuals who can be transitioned through the program. PPACA made no changes to a maintenance of effort requirement included in the original demonstration. Under the program, a state’s expenditures for HCBS in each year of the demonstration must not be less than such expenditures for fiscal year 2005, or for the fiscal year preceding the first year of the demonstration, whichever is greater. PPACA also extended the national evaluation of Money Follows the Person, which was designed to assess whether the demonstration had met its goals to increase the number of institutionalized Medicaid beneficiaries who can be transitioned to the community and to rebalance states’ LTSS systems. The Deficit Reduction Act of 2005 allowed up to $1.1 million of the funds appropriated for Money Follows the Person each fiscal year to be used for the evaluation through 2011; PPACA extended the program’s evaluation and the funding for it through 2016. See appendix II for more information on the national evaluation of the program and the results to date. Thirteen of the 20 states that had not previously received Money Follows the Person grants applied for and received new grants made available as a result of funds appropriated in PPACA. In addition, states were beginning to apply and applications had been approved for the other three PPACA HCBS options. 
In February 2011, CMS awarded Money Follows the Person grants to 13 of the 20 states that had not previously received Money Follows the Person grants under the original program. A total of $621 million was awarded to these 13 states and will be available to these states through fiscal year 2016. The amounts awarded varied from a low of approximately $6.5 million for Idaho to a high of approximately $187 million for Minnesota. By April 2012, most of the 13 states were making some progress implementing their Money Follows the Person programs, as evidenced by CMS’s approval to allow the states to begin enrolling and transitioning individuals to their homes or the community. When applying for Money Follows the Person grants, states must submit operational protocols to CMS that detail how the states plan to implement their programs. Once CMS has approved a state’s operational protocol, the state can begin enrolling and transitioning individuals from institutions to the community. As of April 2012, CMS had approved operational protocols for 11 of the 13 states. Some states received approval of their operational protocols not long after their grants were awarded in February 2011 and thus could begin transitioning individuals at that time, while others received approval much later. See table 1 for information on the amounts awarded to the 13 states and the dates on which these states could begin transitioning individuals. The 11 states with approved operational protocols planned to transition approximately 8,800 individuals from institutions to their homes or communities between 2011 and 2016. Individual states projected transitioning from 122 individuals (Maine) to 2,225 individuals (Tennessee) during the course of the demonstration. The 11 states planned to target a variety of populations to transition, including individuals age 65 or older and individuals with physical disabilities, developmental or intellectual disabilities, or mental illness. 
About half of the individuals the states planned to transition are age 65 or older, but most states planned to target three or more populations. For example, Maine planned to transition older adults; adults with physical disabilities; and persons with any complex combination of medical, behavioral, and cognitive impairment. (See table 2.) These 11 states with approved operational protocols planned to provide a broad range of Money Follows the Person demonstration services—program-specific services provided only to Money Follows the Person participants and not to other Medicaid beneficiaries—to help individuals transition to home- and community-based settings. Nevada planned to offer transition navigation, community transition services, environmental accessibility adaptation, housing coordination, and personal emergency response systems. Idaho planned to provide community transition services and transition management services. (See appendix III for information on the demonstration and supplemental Money Follows the Person services that states planned to provide.) While many states’ operational protocols were approved in 2011, some had not planned to transition, or did not start transitioning, individuals to the community until 2012. During the original Money Follows the Person demonstration, it took longer than states had planned to build the necessary infrastructure for their programs, including establishing channels of coordination across state agencies, garnering community and provider support, and building data reporting and quality assurance systems. Additionally, transitioning individuals out of institutions was more complex than many states had anticipated, in part due to the scarcity of appropriate housing options and the complex needs of the population. According to CMS officials, 4 of the 13 states that had been awarded grants in 2011 had completed 215 transitions as of March 2012. 
In February 2012, CMS announced that it would award additional Money Follows the Person grants, open to the seven states that had not previously received a grant. The agency issued two solicitations—one for a planning grant, to help states prepare their grant application (including a draft operational protocol), and the other for the actual demonstration. CMS officials reported that three (Alabama, Montana, and South Dakota) of these seven states had applied and been awarded planning grants. CMS provides states with technical assistance for Money Follows the Person through an online technical assistance website (http://www.mfp-tac.com/) and provided guidance to states on the extension of the demonstration in a June 2010 State Medicaid Directors’ Letter. As of April 2012, states have begun to apply for the newly established Community First Choice, the Balancing Incentive Program and the revised 1915(i) option, and applications have been approved for the Balancing Incentive Program and 1915(i) options. As of April 2012—6 months after the option first became effective and before CMS had issued final program guidance—one state, California, has applied for Community First Choice. According to California’s application, the state plans to provide services required under the statute related to assistance with ADLs, IADLs, and health-related tasks. California’s application indicated that the state had proposed to transition eligible individuals from the state plan personal care benefit to the Community First Choice program. CMS officials told us that, at least initially, California planned to maintain its state plan personal care services program, which would allow individuals to receive personal care services if they decide not to receive such services under the Community First Choice option. As of April 2012, California’s proposed state plan amendment had not been approved by CMS and thus could change as a result of the review process. 
States have asked CMS questions about the Community First Choice option pertaining to program eligibility, data collection, and quality improvement requirements, among others. Additionally, some states have had questions about replacing their state plan personal care services benefit with Community First Choice. For example, Maryland is interested in consolidating personal care services available under three existing state Medicaid programs—the state plan personal care benefit and two waiver programs—under Community First Choice. CMS officials said that states may have been waiting for the final rule before applying for Community First Choice. CMS issued a proposed rule for Community First Choice in February 2011. Although the Community First Choice option became effective on October 1, 2011, CMS only recently published a final rule implementing the program on May 7, 2012. Since Community First Choice is a permanent Medicaid option for states, there is no deadline for states to apply for it. As of April 2012—6 months after the program first became effective and 16 months before the application deadline—two states had applied for and received CMS approval to participate in the Balancing Incentive Program. One of the states approved, New Hampshire, was awarded the full amount of enhanced matching funds it requested from CMS for the program—$26.5 million. The requested amount was based on total projected community-based LTSS expenditures of $1.32 billion from January 1, 2012, through September 30, 2015. In fiscal year 2009, New Hampshire spent 41.2 percent of its LTSS expenditures on HCBS, and the state expects to reach 50 percent by September 30, 2015. 
The state plans to use the Balancing Incentive Program funds to support the design and implementation of LTSS enhancements, help develop a community infrastructure across the state, and strengthen the community-based network of services across the continuum of care and populations in New Hampshire. Another state, Maryland, was awarded $106.34 million in enhanced matching funds for its Balancing Incentive Program, based on the state's total projected HCBS expenditures. Maryland plans to use the Balancing Incentive Program funds to further expand community capacity. Specifically, the state plans to use the funds to improve provider payment rates for personal care providers. As of April 2012, two additional states—Georgia and Missouri—had also applied for grants under the program. Other states have expressed interest in the Balancing Incentive Program. According to CMS, a dozen additional states have requested technical assistance, in particular regarding CMS's expectations for the required LTSS structural changes. The Balancing Incentive Program became effective October 1, 2011, and states have until August 1, 2014, to apply or until the $3 billion in authorized funds have been expended, whichever is earlier. CMS has provided several types of guidance to states about the Balancing Incentive Program, including a letter to state Medicaid directors, an implementation manual, and a technical assistance website. As of April 2012—18 months after PPACA's changes to the option became effective—three states had submitted state plan amendments and received CMS approval to offer the revised 1915(i) state plan option. Under the approved amendments, the three states—Idaho, Oregon, and Louisiana—plan to target children with developmental disabilities or individuals with mental illness. Idaho's 1915(i) program became effective in July 2011, and the state plans to add HCBS for children with developmental disabilities. 
To be eligible, a child must require assistance due to substantial limitations in three or more major life care activities and have a need for interdisciplinary services because of a delay in developing age-appropriate skills. The state plans to serve approximately 3,200 individuals during the first year of its program. Oregon's 1915(i) program will become effective in June 2012, and the state plans to provide home- and community-based habilitation services, as well as home- and community-based psychosocial rehabilitation services for individuals with chronic mental illness. Eligibility is limited to individuals who need assistance for at least 1 hour per day to perform two personal care services and who are not eligible for such services under the state's 1915(c) waiver. Oregon plans to serve approximately 3,000 individuals during the first year of its program (June 1, 2012, through May 31, 2013). Louisiana's 1915(i) program became effective on March 1, 2012, and the state plans to provide psychosocial services to adults with mental illness, including adults with acute stabilization needs, serious mental illness, and major mental disorders. The state plans to limit the option to adults who exhibit at least a moderate level of risk of harm to self and others and moderate levels of need based on a standardized assessment tool. The state plans to provide such services under the 1915(i) option to a much higher number of individuals than either Idaho or Oregon—55,000 during the first year of its program. In addition to the states with approved 1915(i) state plan amendments, four states—California, Connecticut, Florida, and North Carolina—currently have 1915(i) applications under review with CMS, according to officials. Proposals in California and Florida—which had not been approved by CMS as of May 2012, and thus could change as a result of the review process—showed varying plans for targeted groups and proposed services, as the following examples illustrate. 
Florida proposed to provide various types of family therapy services to redirect troubled youth away from residential placements and into treatment options that will allow them to live at home. The state plans to serve 597 children in the first year. California has submitted two 1915(i) state plan amendments. The first proposes to target infants and toddlers with developmental delays and would provide a 1-day session with families to prepare the children for school or other appropriate facilities, which is currently funded with state-only funds. California anticipates serving 3,800 in the first year. The second proposes to target developmentally disabled individuals with a need for habilitation services. Services to be provided would include community living arrangement services, respite care, and day services. The state anticipates serving 42,000 in the first year. The changes made by PPACA to section 1915(i) became effective October 1, 2010. CMS provided guidance to states about the changes in an August 2010 letter to state Medicaid directors. CMS published a proposed rule for the 1915(i) state plan option on May 3, 2012. Medicaid officials in the states we selected for our study reported being attracted to the enhanced federal matching funds available under three of the PPACA options, but also expressed concern about the potential effect on budgets given continuing fiscal challenges at the state level. Further, Medicaid officials cited limited staff availability to research or implement these options. Officials were also considering broader Medicaid reforms occurring in the state and the potential interaction with existing HCBS. Officials from the 10 states we contacted for our study reported they are considering the new HCBS options with an eye to how they might affect their state’s budget. 
States, in general, continue to experience fiscal challenges, and the state officials we talked with noted that while they are attracted by the enhanced federal matching funds that come with Community First Choice and the Balancing Incentive Program especially, there are limits on how much their states can contribute. Officials from 8 of the 10 states we selected reported that state budget considerations were either a general concern when evaluating any potential new HCBS option or a specific concern regarding Community First Choice or the Balancing Incentive Program. A state official in Mississippi noted that her first consideration of a new Medicaid option is how much the federal government is providing in funding and for how long. She said that she needs to determine what the cost will be to the state now, and if applicable, what the cost to the state would be once the enhanced federal matching rate ends. Regarding the Balancing Incentive Program, Nevada officials similarly reported that while the state is eligible for the 2 percent enhanced federal match, it does not have the money to build the infrastructure, quality assurance system, and financial tracking system called for by the program. Although the enhanced federal matching rate in Community First Choice was attractive to several states, they also noted potential financial risk caused by the inability to limit the program's enrollment or utilization. Officials in half of the states we interviewed noted concerns about a potential inability to control expenditures in Community First Choice given the requirement that the option be offered statewide and the prohibition on state enrollment and utilization caps. Mississippi officials reported that their main problem with pursuing the option is the inability to limit potential state expenditures. 
Officials from the National Association of State Directors of Developmental Disabilities Services reported that the fact that there is no way for states to cap Community First Choice deters states from taking up the option. States wonder how to keep such a program within their budgets if they cannot limit either enrollment or utilization. In contrast, more than half of the state officials we interviewed found the 1915(i) state plan option attractive because of the ability to limit the provision of services to specific populations, thus providing the state with the opportunity to limit state financial exposure. In considering the options that would provide the most federal funding possible, officials from a few states told us that when they initially looked at Community First Choice it was to replace existing state options that do not qualify for enhanced federal matching rates. Oregon officials noted that if they chose to use Community First Choice, which provides a 6 percent enhanced federal matching rate, it would be as a replacement for one of the state’s existing 1915(c) waivers. Expenditures under 1915(c) waivers qualify for the standard federal matching rate. However, the state officials did not think that the Community First Choice option would allow them to cover all the services in their 1915(c) waiver, which would then require the state to cover these services with state-only funds or drop them altogether. Officials from Nevada similarly reported that they initially considered using Community First Choice as a replacement for the state’s existing self-directed personal care state plan option. 
While the state has not ruled out taking up Community First Choice, the officials thought that the option's administrative requirements, specifically the requirements for backup systems, the establishment of a Development and Implementation Council to engage stakeholders, and additional reporting, meant that Community First Choice would not be a cost-effective replacement for its existing self-directed personal care option. According to state officials, staffing shortages in a number of states have made it difficult for states to review all the new HCBS options in depth or put together the teams needed to assemble applications and implement the options. Officials from New Mexico told us that they previously had a hiring freeze and have a current staff vacancy rate of about 9 percent. They said their current staff of 190 runs a $4 billion Medicaid program, which already included a personal care option, a Money Follows the Person program, and a managed care program for LTSS. The officials said that if they decided to pursue, for instance, the Community First Choice option, they would have to use these same staff to implement and oversee the program, including writing the state plan amendment, obtaining public input, and shepherding the amendment through the CMS approval process. According to the New Mexico officials, the current staff already has too much work. Officials in Maine told us that the state recently offered retirement incentives to staff as a cost-saving measure. Under the retirement incentive policy, positions that are open because of the incentive cannot be filled for 2 years. The state is also under a hiring freeze. Officials from the National Association of Medicaid Directors reported that state Medicaid programs are running with a fraction of their prior staff. Given this, officials from the association said states may not even have enough staff to put together an application. 
State officials also reported that the time involved in making other changes to their state Medicaid programs as a result of PPACA has prevented their staff from doing in-depth research on the new HCBS options. Officials from two states specifically said they had not had enough time to research the opportunities in full as a result of their other work. Nevada officials, for instance, noted that staff is working on developing the Health Home state plan option in PPACA, which allows states to provide for care coordination for persons with chronic conditions or serious mental illness, and is making other PPACA-required changes to its Medicaid program. The officials reported prioritizing all the state requirements in PPACA and said the Balancing Incentive Program keeps dropping down the list. Similarly, officials in Montana said the HCBS options were a lot to consider at the same time states are facing many other changes as a result of PPACA, including accommodating a large number of new individuals expected to become eligible for Medicaid. National Association of Medicaid Directors officials reported that PPACA contained both state mandates and options and that therefore states needed to triage where they invest staff resources. They also noted that they would expect states to invest resources in mandated changes rather than the optional changes, such as the new HCBS options. Officials in several of the states we interviewed reported putting off decisions about the HCBS options in PPACA until they completed major reforms to their Medicaid programs. Four of the 10 states we contacted reported being in the midst of or planning for broad Medicaid reforms. This situation is consistent with national trends. 
One national survey of states found that 11 were planning to implement a managed care system for long-term services and supports in either 2012 or 2013. New Jersey, for example, was in the midst of planning for the transition of its Medicaid program, including LTSS, to managed care. Under the proposal submitted to CMS, managed care organizations would take over responsibility for care, including HCBS and nursing home care, for individuals who are enrolled in one of several of the state's HCBS waivers, who require a nursing home level of care, or who reside in a nursing home. The managed care organizations would be required to develop and implement an annual person-centered plan of care and individual service agreement for each individual requiring LTSS and would have authority to place an individual in the most cost-effective setting, whether a home- or community-based setting or a nursing home. The managed care organization, however, would also be expected to emphasize services that are provided in members' homes and communities in order to prevent or delay institutionalization whenever possible. At the time we spoke with New Jersey officials, the state was awaiting CMS's decision on the proposal. Given the planned transition of LTSS to managed care, the New Jersey officials did not think applying for the 1915(i) option at this time made sense, and their decision on whether to apply for Community First Choice would depend on how their managed care system looked if approved by CMS. Similarly, Florida was also moving to statewide Medicaid managed care. Officials in the state told us that they had not explored the Balancing Incentive Program or Community First Choice because the state Medicaid agency's primary focus has been on the transition to statewide managed care and the time and resources they have devoted to the transition have prevented them from exploring the new HCBS options. 
States also factored in how easily the new HCBS options would fit in with their existing HCBS programs, according to state officials. States that decided to take up some of the new HCBS options reported doing so because they complemented existing HCBS options. Four of the five states we interviewed that received the Money Follows the Person grant following the initial post-PPACA solicitation told us they had an existing transition program to move individuals from institutions into the community. Each of these states told us that Money Follows the Person would be a supplement to their existing programs and would provide the state with additional federal funds. Officials from Nevada, for example, told us that while the state had an existing state-funded community transition program, they thought the Money Follows the Person program would give the state the opportunity to target more difficult populations that could still benefit from community placement. The state plans to use the Money Follows the Person rebalancing fund to integrate the state's various HCBS case management systems and expand outreach. Similarly, New Jersey state officials told us they planned to apply for the Balancing Incentive Program because it fit in well with the state's existing efforts to rebalance LTSS funding toward HCBS. The state officials told us that, in part, the state's move to a managed care model reflects an effort to increase the availability of HCBS in the state. Because the managed care organizations assume financial risk, the state officials believed the organizations would have an incentive to increase HCBS placements, which are generally less costly than institutional placements. State officials said one reason states were interested in taking up the 1915(i) state plan option is that it offers the opportunity to provide services to people who could not necessarily be served under other HCBS options. 
Officials in both Oregon and Montana said they were looking at the 1915(i) state plan option to provide a set of services for adults with serious mental illness or children with serious emotional disorders who cannot be targeted under a 1915(c) waiver either because of its cost neutrality requirement or because the individuals do not meet an institutional level of care. While the selected states were more likely to find the new HCBS options attractive if they complemented existing options or offered the opportunity to serve new populations, state officials also noted the complexity of layering new HCBS options on top of their state's existing HCBS system. Nevada officials told us that each waiver and each program the state operates is its own silo, with each requiring its own reporting structure, provider enrollment system, and quality assurance system. As such, the Nevada officials told us that they were already reporting to CMS on four 1915(c) waivers, the personal care state plan option, and a 1915(i) state plan option. Each of those, according to the Nevada officials, came with its own set of requirements. Mississippi officials said that, when looking at how the four PPACA HCBS options relate to each other, as well as to existing HCBS options, it becomes hard not only for state staff, but also for providers and beneficiaries, to work out the differences among all the programs. They said they would like CMS to send out guidance about how states could use these different options together, instead of issuing guidance on each option separately. CMS officials told us they have recently undertaken a number of initiatives to help states coordinate and align the different Medicaid HCBS options. While the CMS officials noted a number of efforts to align the options, they also noted a natural trade-off between giving states maximum flexibility and simplifying the number of different HCBS options available to states. 
In February 2011, CMS established Medicaid State Technical Assistance Teams (MSTAT), which consist of CMS staff with knowledge of Medicaid financing, eligibility, coverage, waivers, and state- specific issues. The teams work with individual states to assist in any area a state has identified or to help states identify specific program areas that may yield efficiencies. According to CMS, as of April 2012, 27 states have used MSTATs, and a majority of those have included at least some discussion of the various HCBS options. In addition to the MSTATs, CMS officials told us they offer technical assistance to states in several areas. For example, there is a specific technical assistance provider that can help states build quality measurement into their systems that can work across the different options. CMS staff also has presented information during all-state conference calls and at an annual HCBS conference to help states learn about the different options and how they can work together. CMS recently formed a work group consisting of representatives from the National Association of Medicaid Directors, the National Association of States United for Aging and Disabilities, and the National Association of State Directors of Developmental Disabilities Services, as well as officials from 10 states, including 14 HCBS waiver administrators, to focus in part on developing quality in a systems approach as opposed to within individual 1915(c) waivers. In addition, in April 2012, HHS announced the establishment of a new agency—the Administration for Community Living—which will combine the efforts of several HHS agencies for the purpose of enhancing and strengthening HHS’s efforts to support seniors and people with disabilities and ensuring consistency and coordination in community living policy across the federal government. 
In the 13 years since the Olmstead decision, states have continued to make progress rebalancing their LTSS systems toward more HCBS, increasing opportunities for individuals who need LTSS to live more independent lives in the community. The four Medicaid HCBS options established or revised by PPACA add to the array of options states have to consider in designing their coverage of services for beneficiaries. Some states that are further along in rebalancing their provision of LTSS may have less need to utilize these new options. Other states have further to go in determining whether and how to incorporate these options into their existing programs and have many factors to weigh, including their state budgets and the coverage and flexibility the options provide to reach their rebalancing goals. The complexities of the Medicaid HCBS options available and the changing factors affecting states’ planning underscore the importance of ongoing federal technical assistance to help states navigate various HCBS options as they seek to ensure appropriate availability of HCBS. We provided a draft of this report to HHS for review. HHS had no general comments on the report but provided technical comments, which we incorporated as appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix IV. The Patient Protection and Affordable Care Act (PPACA) created two new options—Community First Choice and the Balancing Incentive Program—and amended two existing options—1915(i) state plan option and Money Follows the Person—for states to cover home- and community-based services (HCBS) for Medicaid beneficiaries. Table 3 summarizes components of the four options. In 2005, Money Follows the Person was established as a demonstration grant program to support states’ transition of eligible individuals who want to move from institutional settings—such as nursing homes or intermediate care facilities for the intellectually disabled—back to their homes or the community. The Centers for Medicare & Medicaid Services (CMS) awarded Money Follows the Person grants to 30 states and the District of Columbia as part of the original round of funding in 2007. The Patient Protection and Affordable Care Act extended the demonstration through 2016 and provided additional funding to support the original Money Follows the Person state grantees and to award grants to additional states. While these newer grantees are just beginning to implement their Money Follows the Person programs, the national evaluation contractor has released results from the original round of grantees. According to CMS officials, results from the Money Follows the Person evaluation show that since the program’s inception in 2007, participating states had transitioned over 20,000 individuals to the community as of December 31, 2011. Some states were initially slow to transition individuals to the community through the Money Follows the Person program because they encountered problems or delays in meeting federal planning and data reporting requirements and challenges identifying affordable and accessible housing. 
States with experience transitioning individuals to the community through existing transition programs generally were able to complete more transitions than states without such programs, in part due to the availability of staff with transition experience. Over time, the number of transitions per year has been steadily increasing, with cumulative transitions totaling nearly 1,500 in 2008, 5,700 in 2009, and 12,000 in 2010. N. Denny-Brown and D. Lipson, "Early Implementation Experiences of State MFP Programs," The National Evaluation of the Money Follows the Person (MFP) Demonstration Grant Program, Reports from the Field, no. 3, Mathematica Policy Research, Inc. (Cambridge, Mass.: November 2009). The original 30 grantees used the Money Follows the Person program to transition different kinds of institutional residents. Approximately 37 percent of individuals transitioned through June 2011 were under age 65 and had physical disabilities, 34 percent were elderly, 25 percent had intellectual disabilities, and the remainder had other characteristics or conditions that were unknown. According to the national Money Follows the Person evaluation contractor, the percentage of total transitions by elderly individuals and individuals under age 65 with physical disabilities has been increasing since 2008, while the percentage of transitions by individuals with intellectual disabilities has decreased during the same time frame. The evaluation contractor noted that many states had ongoing initiatives to move individuals with intellectual disabilities out of intermediate care facilities for the intellectually disabled at the start of the demonstration. Therefore, individuals with intellectual disabilities were some of the first to start transitioning. Since then, more individuals in the other target populations have begun transitioning. 
The large majority of individuals who have transitioned to the community through the Money Follows the Person program remained in the community for at least 1 year after their transition. For individuals for whom, as of 2010, more than 1 year had passed since their transitions (4,746 participants), 85 percent remained in the community more than 1 year after their transition, 9 percent had been reinstitutionalized in a nursing home or other institutional setting for stays of 30 days or more, and 6 percent had died. Those who did return to an institution tended to do so in the first 6 months, most likely in the first 3 months. The annual per-person HCBS costs of Money Follows the Person participants were nearly $40,000 during the first year of community living. Costs were generally lowest for the elderly, about $20,000 per year, and highest for those with intellectual disabilities, about $75,000 per year. Across all populations, monthly HCBS costs were significantly higher during the first month after an individual's transition. Monthly expenditures during the first 30 days after the initial transition were, on average, more than 50 percent higher than those for the remainder of the year. Many of these costs include services specific to the transition—such as transition planning and coordination—which are only needed in the short term. The costs incurred after the first 30 days are more likely to reflect the costs associated with ongoing care needed for individuals to remain in the community for the long term. Overall, early evaluation results indicated that average annual spending on HCBS for Money Follows the Person program participants was about one-third lower than average annual Medicaid spending on institutional care for elderly individuals in nursing homes. 
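The first-month cost premium can be illustrated with a simple back-of-the-envelope calculation. Assuming (our simplification, not the evaluation's method) a flat monthly rate r after the transition month, a first month 50 percent higher than r, and annual HCBS costs of roughly $40,000:

```latex
% Illustrative only: flat rate r for months 2-12,
% first month at 1.5r, annual total about $40,000.
\[
1.5r + 11r = 12.5r \approx \$40{,}000
\;\Rightarrow\;
r \approx \$3{,}200 \text{ per month}, \qquad
1.5r \approx \$4{,}800 \text{ in the transition month}
\]
```

Under these assumptions, the one-time transition premium is about $1,600 per participant, small relative to the ongoing annual cost of community care.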
The evaluation noted that further analyses, which take into account total health care costs, including hospitalizations and emergency room visits, would be needed before the cost-effectiveness of the program could be determined. Under the Money Follows the Person demonstration program, participating states can cover demonstration and supplemental home- and community-based services (HCBS), in addition to HCBS available to other beneficiaries under the state Medicaid plan or through waivers. Demonstration HCBS are services specific to Money Follows the Person, provided only to participants in the demonstration and not to other Medicaid beneficiaries, and are covered only during a participant's 12-month transition period. Enhanced matching funds are available for demonstration HCBS. Supplemental HCBS are services essential for successful transition to the community, are expected to be required only during the transition period or to be a one-time cost to the program, and are typically not Medicaid-covered services. Supplemental HCBS are reimbursed at the state's regular Medicaid matching rate. Table 4 provides information on the 13 states awarded Money Follows the Person grants in 2011, including the names of the demonstration programs and information on the demonstration and supplemental services that the states planned to provide. In addition to the contact named above, Catina Bradley, Assistant Director; Lori Achman; Sandra C. George; Jawaria Gilani; Linda McIver; and Roseanne Price made key contributions to this report.
The 1999 Supreme Court decision in Olmstead v. L.C. held that states must serve individuals with disabilities in community-based settings under certain circumstances. Under the joint federal and state Medicaid program, states are required to cover nursing facility care for eligible individuals, while the provision of most HCBS is optional. In 2010, PPACA created two new options and revised two existing options for states to cover HCBS for Medicaid beneficiaries. GAO was asked to assess the implementation status of the four Medicaid HCBS options in PPACA. GAO assessed (1) how the four options are structured to increase the availability of services, (2) what is known about states’ plans to use the options, and (3) factors affecting states’ decisions regarding implementing the options. To determine the structure of the options, GAO reviewed federal statutes and regulations and interviewed officials at CMS. To determine what is known about states’ plans, GAO obtained copies of states’ grant applications and state plan amendments. To understand factors affecting states’ decisions, GAO conducted interviews with officials in 10 states. The states were selected to reflect a range of state Medicaid spending for HCBS as a percentage of total Medicaid expenditures for long-term services and supports. GAO provided a draft of this report to HHS. HHS had no general comments on the report but provided technical comments, which GAO incorporated as appropriate. The four Medicaid options for home- and community-based services (HCBS) included in the Patient Protection and Affordable Care Act (PPACA) provide states with new incentives and flexibilities to help increase the availability of services for Medicaid beneficiaries. Two of the options were newly created by PPACA, and the other two were existing options amended by the law. 
Three of the options provide states with financial incentives in the form of enhancements to the Medicaid matching rate that determines the federal share of the program's costs. As of April 2012, 13 states had applied for and received Money Follows the Person grants, in addition to the 30 states and the District of Columbia that had received grants prior to PPACA, and states were beginning to apply for the other three options. The 13 new Money Follows the Person states were awarded $621 million and were in various stages of implementation. One state had applied for Community First Choice. Two states had received approval to participate in the Balancing Incentive Program, and the Centers for Medicare & Medicaid Services (CMS) was reviewing two additional state applications. Three states had received approval to offer the revised 1915(i) state plan option since PPACA's enactment. The 10 states GAO contacted reported considering several factors in deciding whether to pursue the PPACA options, including potential effects on state budgets, staff availability, and interaction with existing state Medicaid efforts. States were attracted by the increased federal funding available under some of the options, but were concerned about their ability to contribute their share of funding. Limited staff resources and competing priorities were also concerns. Finally, broader Medicaid reform efforts, such as transitions to statewide managed care, and the potential interaction with existing HCBS options factored into states' considerations. The Department of Health and Human Services (HHS) and CMS have initiatives under way to assist states with their HCBS efforts. The complexities of the Medicaid HCBS options available and the changing factors affecting states' planning underscore the importance of ongoing federal technical assistance to help states navigate various HCBS options as they seek to ensure appropriate availability of HCBS.
From 1987 to 1997, U.S. diplomatic facilities overseas were attacked on more than 200 occasions. On August 7, 1998, terrorist bombings of the U.S. embassies in Dar es Salaam, Tanzania, and Nairobi, Kenya, killed 220 people and injured thousands more. Subsequent investigations into these attacks and into the condition of U.S. overseas facilities determined that U.S. embassies and consulates worldwide were insecure, unsafe, overcrowded, deteriorating, and “shockingly shabby.” Unless security vulnerabilities were addressed, employees and the public using these facilities would remain at risk of terrorist attacks. In the wake of these reports, State embarked on an unprecedented effort to construct diplomatic facilities at 214 overseas posts. The goal of this effort is to replace insecure, dilapidated, and dysfunctional embassies, consulates, and other overseas diplomatic office buildings with safe, secure, functional, and modern facilities as quickly as possible. As of December 2008, OBO had completed construction for 64 new embassies, consulates, and annexes and had relocated more than 19,500 U.S. employees into these new facilities. State has 31 additional ongoing construction contracts for new facilities and plans to build approximately 90 more facilities from 2009 to 2023. Beyond this effort, State officials said that after 2023, OBO would need to replace facilities at approximately 50 posts. The total award value for all construction contracts for new office facilities awarded since 1999 is approximately $5.8 billion. In 1986, in response to terrorist threats, State began an embassy construction program, known as the Inman program, to better protect U.S. personnel and facilities overseas. However, due to systemic weaknesses in program management, as well as subsequent funding limitations, State completed only 24 of the 57 construction projects planned under the Inman program. 
Following the demise of the Inman program in the early 1990s, State initiated very few new construction projects, until the 1998 embassy bombings in Africa prompted additional funding for security upgrades and the construction of secure embassies and consulates. In response to the performance problems experienced under the Inman program, State implemented numerous reforms to its management structure and contracting, planning, and construction processes. These reforms were designed to speed completion of projects, reduce costs, and standardize processes, and they had the cumulative effect of reducing the average construction cycle time by 2 years and 9 months. Among the most prominent reforms were elevating the former Office of Foreign Buildings Operations to OBO; relying on the design-build delivery method, which reduces the number of solicitation, proposal, and award processes from two to one and allows contractors to begin basic construction before the design process is completed; convening the Industry Advisory Panel on a quarterly basis to advise OBO on industry best practices in the construction sector; and holding an annual industry day event to solicit a broader pool of contractors. Starting in 2002, OBO also implemented the Standard Embassy Design (SED) to expedite the planning, awarding, design, and construction of NECs. The SED is a series of documents that outline site and building plans, specifications, and design criteria, and explain how to adapt these specifications to a particular project and contract requirements. 
The SED is not an actual building design but rather a template that standardizes the basic plans for the structural, spatial, safety, and security requirements for each NEC, including the following: main office buildings and annexes; security features, such as the Compound Access Control (CAC) buildings; utility buildings, warehouses, and General Services annex; living quarters for Marine Security Guards (MSGQ); and employee and visitor parking. The SED also identifies ways to allow for future building expansion on the site; establishes minimum permissible baseline standards for materials and interior finishes; and factors in environmental concerns such as temperature, humidity, dust, rain, and air quality when designing and selecting mechanical equipment. Figure 1 shows the general features for a standard design NEC. Since 2002, there have been three primary classes of standard design embassy and consulate compounds—small, medium, and large—based on the size and cost of the facility, each of which has predefined construction schedules and total project durations associated with it. In 2004, State introduced a fourth class of SED, called Extra Large or Special SEDs, which generally exceed the size and cost of large SEDs. Finally, in 2007, State introduced the Standard Secure Mini Compound, which is generally smaller and less costly than a small SED. In addition, OBO has developed standard designs for MSGQs and stand-alone unclassified annexes. Table 1 shows the allowable size and construction time frames for each of the five classes of NECs constructed using the standard embassy design. The Omnibus Diplomatic Security and Antiterrorism Act of 1986 states that, where adequate competition exists, only U.S. persons and qualified U.S. 
joint-venture persons may (1) bid on diplomatic construction or design projects with estimated total project values exceeding $10 million and (2) bid on diplomatic construction or design projects involving technical security, unless the project involves low-level technology, as determined by the Secretary of State. The act defines adequate competition as the presence of two or more qualified bidders submitting responsive bids for a specific project. In this context, a U.S. person is defined, in part, as a company that is incorporated or legally organized under the laws of the United States; has its principal place of business in the United States; has performed within the United States or at a U.S. diplomatic or consular establishment abroad administrative and technical, professional, or construction services similar in complexity, type, and value to the project being bid; has total business volume equal to or greater than the value of the project being bid in 3 years of the 5-year period before the specified date; employs U.S. citizens (1) in at least 80 percent of its principal management positions in the United States and (2) in more than half of its permanent, full-time positions in the United States; will employ U.S. citizens in at least 80 percent of the supervisory positions on the project site; and has the existing technical and financial resources in the United States to perform the contract. Contracts for construction projects that do not involve technical security requirements may be awarded to foreign firms. However, the Percy Amendment to the Foreign Buildings Act of 1926 enables American firms to be more competitive with foreign firms by reducing the evaluated price of offers from American firms by 10 percent for such projects expected to exceed $5 million. In 2007, State proposed an amendment to the Omnibus Diplomatic Security and Antiterrorism Act of 1986 that would allow the Secretary of State to waive financial, U.S. 
citizenship, and other requirements for NEC awards, when necessary and appropriate. According to State, the proposed amendment was necessary because “the current pool of American contractors qualified and able to carry out diplomatic construction projects overseas has nearly reached its capacity” and the subsequent reduced competition for contracts would result in increased contract costs. When proposing the amendment, State argued that amending the law would increase competition for NEC awards by opening the contractor pool to smaller U.S. construction companies and to foreign companies that previously could not qualify for NEC projects. Congress did not act on the proposed amendment. In December 2008, State officials told us the department plans to revise the 2007 proposal by opening competition for NECs only to U.S. companies that meet the specified security requirements. State uses a two-phase solicitation process for awarding contracts for NECs. In the first phase, the prequalification of offerors, contractors submit documentation attesting how they meet the legal, technical, and financial qualifications for each project on which they wish to bid. State then reviews this documentation to certify whether contractors do, in fact, meet the criteria. Once State completes these reviews, it issues a list of contractors eligible to bid for each contract award. Only companies that State certifies as prequalified under the first phase receive, and may respond to, subsequent requests for proposals (RFP) for major construction awards. In the second phase, RFPs, State solicits and evaluates contractors’ bids for construction awards, including technical and price proposals. Contractors bid a firm, fixed price for a project; therefore, the winning contractor will deliver the defined scope of the project for the price of the contract. 
After State awards a design-build contract, the contractor must develop a project design and work plan that incorporates all construction and security features outlined in the RFP and contract documents. During this design phase, the contractor must also begin preparing for construction by obtaining local building permits, buying or ordering materials, and mobilizing workers. In addition, under the design-build delivery method, contractors can begin construction of some buildings and systems that do not require security clearances—such as perimeter walls, warehouses, and mechanical support buildings—before the full design is approved. However, construction of the main office building—the chancery or consulate—generally does not proceed until the design is approved and State certifies to Congress that it meets all security requirements. During the construction phase, OBO monitors contractors’ schedules, inspects and reviews contractors’ work, and certifies that construction is substantially complete once contractors meet all requirements of the contract. Once construction is certified as being substantially complete, State conducts final commissioning to ensure that building systems—such as fire protection, electrical, and mechanical—were installed properly and operate according to design criteria and manufacturer specifications. Once all systems pass the commissioning process, the building is certified to be occupied and post staff may move in. In September 2008, State reported that construction costs had increased dramatically since 2001 and that the trend was likely to continue. State reported that from 2001 to 2008, total construction costs for new embassy and consulate compounds increased, on average, 9 percent per year, from approximately $5,000 per gross square meter in 2001 to more than $13,000 per gross square meter in 2008. 
In an earlier analysis, State attributed the overall cost increases to two factors: inflation for construction materials and the decrease in the value of the dollar. State reported that, overall, prices for construction materials rose 44 percent from December 2003 to July 2008. In addition, State reported that the significant decline in the value of the dollar resulted in additional construction-cost increases of approximately 2 percent per year since 2003. Although State has generally received at least two bids for NEC projects since 1999, thereby satisfying the statutory definition of adequate competition, the number of contractors participating in State’s program has declined. State documents show, and State officials reported, that from 1999 to 2008, the department received at least two bids for all but one of the 61 NEC projects awarded through a competitive process, and three or more bids for at least 49 of the 61 awards. Table 2 shows the number of firms prequalifying to bid on NEC projects and the number of bids submitted for each NEC project from 2002 to 2008. Despite having adequate competition for all but one NEC award, we found a statistically significant decline in the number of bids State received per NEC contract from 2002 to 2008. We found that the number of firms per project prequalified to bid also declined during that period. These results demonstrate that the level of contractor participation in the NEC program has declined. In addition, from 2002 to 2008, noticeable fluctuations occurred both in the annual average number of firms per project that prequalified to bid on NECs and in the annual average number of bids received. From 2002 to 2005, the annual average number of firms that prequalified to bid ranged from approximately 6 to approximately 8 (see table 2). In 2006, the average increased to more than 13, then declined by 69 percent to about 4 in 2008. 
The average number of bids submitted per project from 2002 to 2005 ranged from 3.5 to approximately 4, increased to 5 bids per project in 2006, then decreased by 38 percent, to approximately 3 bids per project in 2008. Although the declines in contractor participation can be attributed to many factors, we found that project costs partly explained the declines. In statistical analyses, we found that State’s estimated NEC project cost is a strong predictor of the actual number of firms that prequalify to bid on projects, such that higher estimated costs result in fewer prequalifying firms and lower estimated costs result in more prequalifying firms. We also found that the actual number of prequalifying firms per project showed a strong positive correlation with the number of bids submitted per project. Thus, estimated project costs directly affect the number of prequalifying firms and indirectly affect the number of bids submitted. To illustrate these relationships, we compared the annual average estimated costs for NECs with the annual average numbers of prequalifying firms and bids submitted. As noted previously, State reported that NEC costs more than doubled from 2001 to 2008. Although there were yearly variations between 2002 and 2005 in the average estimated costs for NECs and the average numbers of prequalifying firms and bids submitted, the changes during these years were not large. However, from 2005 to 2006, the average estimated costs for NECs declined by 28 percent, from $69 million to $50 million. Because the financial prequalification criteria scale with a project’s estimated cost, the criteria in 2006 were lower than in 2005, making it easier for firms to demonstrate the capacity to meet those requirements. As a result, the number of prequalifying firms per project rose from approximately 7 in 2005 to more than 13 in 2006, and the number of bids per project increased from 3.7 to 5. However, from 2006 to 2008, the average estimated NEC project cost more than doubled, rising to approximately $110 million per project. 
This increase made it more difficult for firms to meet the financial requirements to bid for and win NEC awards. As a result, fewer firms prequalified for and bid on NEC projects in those years. The profitability of NEC projects for contractors and State’s overall management of the NEC program may also have affected contractor participation, particularly in recent years. For example, the decline in the prequalification rate also reflects the withdrawal from the NEC program, between 2006 and 2008, of five firms that had collectively built 27 embassies, consulates, and annexes with a total value of $1.63 billion. Although each of these five firms prequalified to bid on NECs in 2005, none of them chose to prequalify for 2008 projects, with one company withdrawing in 2006, two in 2007, and the remaining two in 2008. Officials from these companies cited insufficient profits and disagreements with State’s management of the program as factors contributing to decisions to withdraw. However, three of the firms indicated they would consider participating in future years but would base such decisions on the resolution of outstanding issues with current and past contracts and State’s willingness to reform its management practices. State has conducted no systematic analyses in support of its proposed amendment to the Omnibus Diplomatic Security and Antiterrorism Act of 1986, including whether such legislative changes are needed to maintain an adequate contractor base or how such changes would affect the program. Although State asserts that the declining contractor base has created a less competitive and less cost-effective program, the department reported no systematic efforts to analyze the relationship between competition for NEC contracts and actual contract awards. State officials did report that from 1999 to 2008, the department received at least two bids—the legislatively defined minimum number for adequate competition—for all but one NEC project solicited as an open competition. 
However, they did not comment on whether this minimum standard was sufficient to receive optimal prices for the government. In support of its initial legislative proposal, in October 2007, State identified several factors that it believed discouraged contractors from participating in the program, including (1) lower profitability of working with State than with private companies or other federal agencies, (2) the challenging and sometimes dangerous locations of NEC projects, (3) the high cost of skilled American workers with security clearances, (4) dissatisfaction with firm fixed-price contracts for NECs, and (5) the relatively abundant domestic construction market. However, State did not provide any detailed analyses in support of these conclusions. State’s initial legislative proposal indicates that the number of U.S. companies capable of meeting the current requirements to qualify for NEC awards is nearing capacity. However, State has not systematically analyzed the extent to which the U.S. contractor community can meet these requirements. Therefore, we reviewed the degree to which some of the largest U.S. construction companies have participated in the NEC program. We compared the list of the top 100 U.S. design-build firms for 2008 compiled by Engineering News Record with the list of firms that have either prequalified for or won NEC awards since 2002. The ranking is based on companies’ total 2007 revenues from design-build contracts where the projects were designed and constructed by employees of the company in whole or in joint-venture partnership with other firms and subcontractors. The total revenues for these firms ranged from $104 million to $11.2 billion. We found that only 14 of the top 100 companies prequalified for NECs in at least one year from 2002 to 2008, and only 7 won at least one NEC award. In addition, only 3 of the top 100 companies prequalified for 2008 NEC projects—B.L. Harbert International, LLC; Caddell Construction Co. 
Inc.; and Weston Solutions Inc. While not all of the 100 companies may be interested in pursuing overseas construction, some firms not currently engaged in the NEC program are capable of working in overseas locations. For example, the top 100 list shows that 34 of the 100 firms derived income from overseas construction contracts. Ten of these 34 firms prequalified to bid on at least one occasion from 2002 to 2008, and two of these 34 firms prequalified to bid for 2008 NEC projects. In addition, we examined company Web sites and conducted Lexis-Nexis searches to determine the extent to which companies listed among the top 100 design-builders for 2008 that have never won NEC awards have experience in countries where State plans to build NECs in 2009. We found that at least 16 of the 93 companies that have not received NEC contracts under the current program have ongoing operations in eight of the nine locations planned for 2009 (see table 3). However, none of those 16 companies prequalified to bid on State’s 2008 projects, and only two of those companies prequalified for projects in past years. A greater reliance on foreign firms, as specified in State’s 2007 legislative proposal, could increase security risks for NECs. Currently, foreign companies may not bid on projects that involve technical security unless the project involves only low-level technology. State’s initial legislative proposal would provide the Secretary with discretion to waive the preference for U.S. contractors so long as the Secretary determined that doing so would be more economical or efficient and would not compromise the security of the project. However, State has not yet reported how it would ensure project security would not be compromised, including providing a clear explanation of how the controlled access areas would be securely constructed and identifying the additional safeguards needed to oversee construction. 
Finally, amending the requirements to allow greater access to small U.S. companies and foreign companies could also affect construction management on site. However, because small firms may not have the technical capacity to construct all facets of NECs and because foreign firms cannot currently construct controlled access areas of embassies and consulates, it is unclear how construction of highly technical areas would be accomplished. Although State has not yet determined how to resolve these issues, it could choose to award multiple contracts to complete targeted areas of work. To date, State has taken a somewhat similar approach for some NEC contracts, awarding small projects, primarily annexes, to small U.S. and foreign construction firms, which sometimes proceed simultaneously with a larger NEC project previously awarded to other companies. State has also awarded separate contracts to construct unclassified and classified areas of some NECs, such as for the Baghdad, Iraq, and Suva, Fiji, embassies. OBO officials noted that this multiple contracting is inefficient and leads to frequent conflicts between contractors over precedence of work. Relying on multiple contracts with small contractors to complete a typical NEC project could multiply these problems, and State has not yet reported how it would mitigate this concern. In December 2008, State informed us that it has drafted a revised legislative proposal to allow more U.S. firms to qualify as U.S. persons, noting that all U.S. companies that can meet the specified security requirements should be permitted to bid for and win NEC contracts. In addition, State said that it would no longer pursue greater access to NEC contracts for foreign firms. State’s revised proposal would, in effect, open competition for NEC awards to smaller U.S. firms. 
However, according to State officials, the projects planned through the remainder of the program are expected to be more complex and more costly, in general, than the projects awarded to date. Given that State’s experience with multiple contractors working independently at a construction site has not worked well, it is unclear how State could increase smaller firms’ participation without significantly increasing the government’s risk. However, as of the date of its comments, State had conducted no analyses in support of its proposal, including on the benefits and risks of a greater reliance on smaller firms. U.S. contractors we interviewed ranked financial incentives as the most important factor in determining their participation in the construction program; however, many contractors told us they were not making as much profit as anticipated. Once participating in the program, all contractors reported encountering significant challenges, such as the logistics of getting labor and materials to a construction site, meeting State’s construction schedules, coping with currency fluctuations and price increases, finding skilled American workers with security clearances, and handling relations with foreign governments. In addition, a majority of contractors favored using the combination of design-build delivery and the standard embassy design, and stated that neither firm fixed-price contracting nor the domestic and international construction markets affect their participation in the program. Most contractors also expressed concerns about State’s on-site project directors, the implementation of the design-build process, and the project guidance provided by State. 
The 17 contractors we interviewed most often cited the potential to make money, the expectation that State would be a reliable customer, and the continuity of State’s building projects even during difficult economic times as the top three incentives to participate in the program (15 of the 17 contractors cited each incentive). The desire to serve the United States and the prestige of building for the United States were also cited as strong incentives for some contractors (see table 4). Despite the importance of reliably earning money as an incentive for program participation, many contractors said that making a profit had become difficult under the NEC program. The contractors defined profit as the monetary returns received after all charges have been paid, including regular salaries. Ten of 14 contractors (71 percent) also said that, in general, State projects were less profitable than their other construction projects. Specifically, contractors told us that 22 of the 53 total contracts they completed lost money, and two more did not earn a profit; they expected to lose money or break even on 11 of the 26 projects that were being built at the time of our fieldwork. In all, 13 of the 17 contractors, or more than 76 percent, reported they lost money or expected to lose money on at least one contract. Some contractors noted, however, that depending on the resolution of open requests for contract modifications—also called requests for equitable adjustment (REA)—some of the projects that lost money or broke even could show a profit. Although contractors have potentially meaningful incentives to participate in the program, they each reported facing significant challenges once in the program and when building the facilities. 
Contractors ranked the greatest challenges as (1) the logistics of getting labor and material to the construction sites, which are often in very remote locations; (2) meeting State’s construction schedules; (3) financial considerations, such as managing currency fluctuations; (4) labor issues, such as finding qualified workers with security clearances (cleared workers); and (5) relations with foreign governments (see table 5). These challenges reflect the comparatively greater risk contractors assume when constructing facilities for State than for other clients. Thirteen of 16 contractors, or over 80 percent, said that their firms’ profits from the NEC program have not been commensurate with the risks involved. Twelve of 17 contractors said handling the logistics of getting labor and materials to the construction site was a major challenge, while four said it was a moderate challenge. Many of the construction sites are in relatively remote locations and are difficult to access from the United States. However, although contractors cited logistics as a challenge and, in many cases, a consideration in deciding whether to bid on specific projects, none of the contractors cited it as a determining factor when considering whether to participate in State’s construction program. Contractors did not report project locations as a disincentive to participate in the program. On the contrary, the challenge of building to high standards in often difficult environments was cited by 12 of the 17 contractors as an incentive for participating in the program. Contractors did confirm that location can be a factor in deciding to bid on specific projects, but it also was a consideration generally for the purpose of assessing the competition for projects. For example, a company may avoid bidding on projects in locations where it believes another company has a clear competitive advantage, such as by already being mobilized in the country or having extensive experience in a given region. 
Fourteen of the 17 contractors viewed meeting State’s construction schedules for new embassies as a major challenge. The two contractors who rated meeting the schedules as a moderate challenge and the one contractor who said the schedules were a minor challenge had not yet completed a building project for State. Even a successful contractor whose entire business model is built around meeting State’s schedules said the schedules are a major challenge. Contractors described the building schedules as unrealistic, a “problem,” “absolutely insane,” “warped,” and “ridiculous.” Some of the contractors stated that completing both the design and the construction of the facilities took more time than State allowed. The goal of the NEC program is to get U.S. government employees overseas out of hazardous, insecure buildings and into safe and secure buildings as quickly as possible. From 2002 to 2007, State aggressively shortened the time allowed to complete the buildings. The contractors raised concerns that State reviews designs in greater detail and later in the process than is typical for design-build construction. Nearly all the contractors said that they were challenged to meet State’s shortened project schedules, considering, among other factors, the difficulty of producing an approved design that will enable State to provide the necessary security certification to Congress. According to these contractors, designs were often certified for construction significantly later than planned due to complex and extensive project requirements, the application and delivery of which had to be validated through State’s design reviews. According to several contractors, much of the allowed construction time is spent obtaining approval of the completed design, leaving less time for the actual construction of the facility and increasing the contractors’ risk of not meeting project completion dates. 
A few contractors said that if they were building in cities in high-income countries, they could more reliably meet the schedules. However, most of the NEC locations are in lower-middle- and lower-income countries, where finishing a design acceptable to State, getting materials and equipment to remote locations, and actually building the structure may take more time than State allows. If anything goes wrong, according to contractors, they are likely to miss the deadlines. Financial considerations, including currency fluctuations, rising costs for construction materials, and the need to obtain performance bonds to fulfill U.S. government requirements, provide another set of challenges to contractors. As previously noted, in September 2008, State reported that the price increases for construction materials and the weakening of the dollar more than doubled NEC construction costs since 2001. A few contractors referred specifically to rising costs for construction materials as a concern. Contractors reported on strategies to mitigate inflation, such as factoring inflation into their contract proposals or purchasing materials in advance. Moreover, given that labor and materials procured overseas generally must be paid for in local currencies, and that the dollar has weakened against many other world currencies, managing currency fluctuations has become a significant challenge, according to contractors with whom we spoke. As with inflation, contractors regularly manage this risk by including a contingency for potential dollar devaluation in their bids. Several contractors also seek protection from currency fluctuations by purchasing exchange rate futures to lock in a rate. However, these measures cannot fully ease the effect of wider-than-expected currency swings. Ten contractors told us that currency fluctuations are a determining factor in their decision to compete for State building contracts. 
Five others said that currency fluctuations had not been a factor that determined whether or not to compete for a given contract in the past. However, currency fluctuations could become a factor in the future, given the relative strength or weakness of the U.S. dollar. Obtaining performance bonds was seen as either a major or moderate challenge by 9 of the 17 contractors we interviewed, and its importance may be growing. Factors determining whether a contractor needs performance bonding include the contractor’s revenues and State’s experience with the contractor. Larger contractors, in general, can more easily obtain a performance bond than smaller contractors. Also, there have been instances where State has waived the need for a performance bond for contractors with whom it has extensive, successful experience, according to the contractors. As of the date of this report, no bonding company has ever had to assume responsibility for a contractor’s failure to perform on an NEC project. Nonetheless, smaller contractors told us about problems obtaining performance bonds for State contracts, and State told us that at least one bonding company had begun refusing to provide performance bonds to State contractors. If State succeeds in changing the law to allow smaller contractors to prequalify for competition, the availability of performance bonds could become a more prevalent concern. Labor issues, in general, were rated high on the list of challenges. Contractors said that finding and keeping workers willing to work overseas poses a challenge. In particular, contractors explained that “cleared” workers—those with security clearances—who are willing to live overseas in often unappealing locations are in relatively short supply. Moreover, because of the low supply and high demand, these cleared workers command the labor market. 
For example, several contractors complained that cleared workers will frequently move to another contractor for a higher salary or a more appealing location, even if their current project is not finished. To complete work, contractors must sometimes match or exceed competing offers from other contractors to keep the cleared workers on site. Contractors also rated finding and retaining workers who do not have clearances as a challenge, though not as critical a challenge as finding and retaining cleared workers. Contractors also cited problems dealing with foreign governments as a challenge. Understanding and dealing with issues related to obtaining building permits, clearing materials through customs, or paying tariffs on imported goods, as well as obtaining reliable information about the local country, are challenges and risks of building overseas. Contractors generally accept these challenges, but a majority said they believe that State could provide more helpful information about the locality. For more challenges faced by contractors, see appendix II. We asked contractors to characterize State’s approach to the design-build delivery of NECs, using the standard embassy design and firm fixed-price contracts, in terms of effectiveness and economy. With many caveats, 11 contractors favored the combination of design-build delivery and the standard embassy design as a good method for building new embassies. Although contractors cited problems with aspects of the design-build delivery method and SED, they generally expressed support for both. Two contractors stated that bidding on a completed NEC design would improve the accuracy of bids and allow contractors to better predict how long building would take. Having contractors bid on a completed design would essentially be a design-bid-build process, a delivery method that separates design and construction activities into two distinct contracts. 
The majority of contractors did not raise concerns about firm fixed-price contracts, and only one contractor reported not bidding on one occasion because of the type of contract. Neither a relatively robust domestic construction market nor an active international construction market was cited as a factor causing contractors to leave the program. We asked contractors how much the activity level of the construction industry in the United States affects firms’ decisions to compete for State building projects. Ten of the 17 contractors said the domestic construction market had some or no effect on their decisions to participate in overseas construction, in general, or compete for State projects, specifically. A few contractors added that their firms were either primarily international or that they worked in the international division of their firms and that they would be in the international market regularly, regardless of domestic market conditions. Some of the contractors agreed that the domestic market affected their decisions to bid for State projects but only because important resources, such as performance bonding capacity or staff, were already allocated to domestic projects and, therefore, not available for competition in the international market. Thus, it was these firms’ current commitments for domestic-based work, rather than the U.S. construction market in general, that influenced their decisions to bid on State projects. We also asked contractors how much the activity level of the construction industry overseas affects firms’ decisions to compete for State building projects. In this case, 13 of the 17 contractors said the international construction market had some or no effect on their decisions to compete for State projects. For a few contractors, State projects are their preference in overseas work. Fourteen contractors characterized State’s management of its embassy construction program as fair or poor. 
Several management practices adopted after the 1998 bombings may have contributed to problems cited by the contractors, including (1) strengthening the role of the project director and limiting contractors’ access to State management, (2) the design-build project delivery method as implemented by State, and (3) unclear project guidance within various documents that detail construction requirements. Moreover, contractors reported that these practices inhibit their ability to complete projects on time and with a profit. Beginning in 2001, State took measures to limit partnering with contractors as it had existed, including strengthening the role of the on-site project directors. However, the action may have had unanticipated effects on the NEC program. A few long-standing contractors reported that the customer-client atmosphere at State changed and that distrust between contractors and State’s staff, particularly project directors, frequently resulted in adversarial relationships. Overall, 10 of the 17 contractors we interviewed rated State as a poor or fair business partner—6 rated State as poor, 4 as fair. In addition, 4 of the 7 contractors who rated State as a good or excellent business partner had not completed a construction project as of the dates of their interviews. Project directors are the targets of many contractors’ concerns about the State process. Most contractors we spoke with said that, because of the project director’s role in providing information to and from Washington and making or, at least, conveying project execution decisions, project success is greatly dependent on the project directors. The contractors provided mixed views on their levels of satisfaction with individual project directors. Contractors also expressed concerns about the professional qualifications of project directors and their experience managing construction and said they would like project directors to have significant construction experience. 
In discussing relationships with project directors, two contractors noted they will avoid bidding on projects they know will be headed by a particular project director with whom they or other contractors have had past troubles. We asked contractors a number of questions regarding their experiences with various State bureaus and offices. Contractors were asked to what extent they had experienced project delays because of various State officers and entities, and they responded that project directors are the greatest source of delays. Contractors also rated the State project directors on the timeliness of their decisions in a variety of areas and on the level of authority that project directors currently have for making certain types of decisions. A majority of contractors reported that project directors are generally timely in responding to requests for information, and contractors were about evenly split on whether project directors are timely in providing answers to work approvals and general decision making. However, for timeliness on contract modifications or REAs, project directors were perceived by the majority of contractors to be only sometimes, rarely, or never timely. Even as project directors are the targets of many contractors’ concerns, they often have no authority to make decisions in specific areas cited by contractors. Contractors rated project directors’ decisions and authorities for a number of types of contract modifications. For example, a majority of the contractors, typically 10 to 12, rated project directors’ decisions as fair or poor in areas such as modifications exceeding $25,000, technical changes, and changes that require more time. In fact, project directors do not have the authority to make decisions on changes above $25,000 for any single modification, on accepting technical changes, or on providing more time, as each of these decisions must be made in Washington. 
However, according to what contractors told us, as many were satisfied as were not with the authority given to project directors on changes above $100,000, material substitutions, or accepting technical changes. State officials said that, under the former director’s policy of limiting contractors’ access to various offices at State, all requests had to be communicated to the project director. The project director would then either take individual action or seek assistance or approvals from Washington and, subsequently, deliver and enforce decisions made by others. As a result, it appears that project directors, rightly or wrongly, bear the brunt of contractors’ concerns and disapproval of decisions that negatively affect contractors. A majority of the 17 contractors also said that State’s implementation of the design-build process is flawed, and some said that the time required to complete design and design reviews significantly affects the project delivery schedule. Eleven contractors had favorable views of the design-build process in general because it is supposed to deliver buildings more quickly, and some indicated the method can result in lower construction costs. In addition, as previously noted, contractors thought design-build delivery worked well with the standard embassy design. 
However, during our interviews, contractors offered the following concerns about State’s implementation of the design-build method: State’s design phase is lengthier than that of their other government clients (four contractors); State becomes too heavily involved in the project design (three contractors); State’s design review comments—which range from 500 to 1,000 comments per project, each of which must be addressed—are excessive (four contractors); some contractors feel unable to proceed with construction until they have received a fully approved, 100 percent design from State (six contractors); and contractors do not have sufficient time to actually build once State has finally approved a design, given the time limits on completing the projects (four contractors). In addition, 13 contractors expressed concerns about unclear and contradictory guidance and information within and among critical components of State solicitation and design documentation. Eight contractors reported a number of problems with the RFPs, including sections where information and requirements were unclear, inconsistent, or in conflict with other sections and, in some cases, incorrect. Five contractors cited examples of poor project documentation, including inaccurate space plans and incomplete information provided on existing site conditions related to local utility service layouts and soil conditions. In addition, 2 of the 11 contractors said State’s answers to contractors’ technical questions about specific RFPs were not incorporated as amendments to the solicitations, even though those answers were considered binding. Finally, contractors told us that guidance often conflicts with actual practice. Most contractors raised specific complaints about being unable to substitute local materials for U.S. or U.S.-standard materials. Although the RFP states that local materials may be substituted for U.S. 
materials, in practice this occurs only after State has approved the specific substitution, based on the contractor’s documenting that the substitute meets U.S. standards. According to what 13 contractors told us and what we have reported in the past, obtaining approval to use substitute material is difficult. Six of these 13 contractors told us that the process for obtaining approval is too onerous and time-consuming to be worth the effort—for example, one said that money saved through using local materials is essentially lost in the time spent getting the approval. In another example of guidance conflicting with practice, according to contractors, the SED allows contractors to install either a wedge barrier or a sliding gate at the vehicle entrance. (See fig. 2 for an illustration of a wedge barrier.) Although State prefers the wedge barrier, contractors prefer the sliding gates because they are less expensive. However, State routinely overrules this choice and requires wedge barriers, even though the sliding gate meets the requirement. State officials reported they are attempting to reconcile the guidance and practice on vehicle barriers. In recent months, State has reached out to the contractor community in an effort to repair strained relationships and to encourage contractors’ continued participation in the NEC program. To support improved relationships with contractors, State has implemented, or is in the process of implementing, several procedural changes to increase the effectiveness of its project delivery and contract management processes and to mitigate project risks. In addition, State has created a new project management group within OBO to improve internal coordination and communication and enhance its accountability to contractors and all other project stakeholders. State has taken steps to reach out to the contractor community to improve relationships. 
In February 2008, for example, State officials met with the president of the Associated General Contractors of America (AGC), along with a group of five contractors who had completed NEC projects to discuss specific concerns of the industry. The discussions sought to identify reasons for contractors ending their participation in the NEC program and covered several industry concerns with technical and administrative aspects of State’s contractor prequalification, contract procurement, and project management practices. At the conclusion of the meeting, the parties identified several follow-up items and agreed to hold future task force meetings to discuss the issues. In another outreach effort, State reported it intends to examine partnering concepts and to consider the extent to which they may be reintroduced to future contracts. While State discussed its intent to examine partnering in September 2008, it has not yet drafted guidance or policy on how partnering would be reintroduced into its processes in general or applied to specific projects. Prior to 2001, State had used partnering on some contracts and found that it generally contributed to project success. State’s use of partnering agreements on these contracts helped facilitate the government and contractors working together as a cohesive team to complete projects on time and in accordance with State requirements, while providing contractors opportunity to earn a fair profit. In particular, partnering agreements were used to ensure such outcomes as timely decisions and the resolution of problems at the lowest level possible. As previously discussed, OBO’s Director eliminated the formal use of partnering in 2001, in part because he thought contractors had taken advantage of partnering to gain access to OBO’s upper management, which served to bypass the project directors and undermined their ability to effectively manage projects. 
During OBO’s September 2008 Industry Advisory Panel meeting, OBO and AGC began a preliminary discussion on partnering and how its principles could be incorporated into contracts and used to foster better collaboration between State and contractors on current projects. At the conclusion of the discussion, OBO’s Director acknowledged that State needed to do more work and obtain a better understanding of how partnering could be applied in contracts. To respond to contractors’ concerns identified through its outreach efforts, State has implemented, or is in the process of implementing, several procedural changes to increase the effectiveness of its project delivery and contract management processes and to mitigate project risks. The changes being made by State—which are influenced by recommendations of an internal working group that was established in July 2008 to review State’s capital project acquisition process—include improving design-build project delivery; streamlining RFPs and staggering their issue dates so contractors have more time to respond to each solicitation; developing a database of non-U.S. materials that meet project standards; and being more responsive to contractors’ requests for equitable adjustments. OBO began implementing some of these changes in its fiscal year 2008 NEC program. Other changes are ongoing, and improvements will not be achieved until fiscal year 2009 and later. At the recommendation of its internal working group and after discussions with its industry advisors, State intends to modify its approach to design-build project delivery. Because of security concerns, State requires that its projects pass a rigorous design review prior to being certified for construction. Under State’s former approach to design-build delivery, contractors needed to complete design, respond to review comments—which typically numbered several hundred—and await State’s certification of the design for construction. 
As previously discussed, some contractors said that they expended comparatively more time completing a design for State and having it certified for construction than on a design for other owners’ projects, which precluded them from beginning construction as early as they wanted and prevented State from fully realizing the time-saving potential of design-build delivery. In its revised approach, State will use the “bridging” design method to provide more focused design detail to construction contractors. By providing more design detail up front, State expects to more effectively translate project requirements to contractors, speed the design certification process, and enable construction to begin sooner. Under the bridging method, State would first contract with a design firm—referred to as the bridging architect, criteria architect, or owner’s design consultant—to develop an initial design that incorporates critical requirements and that can be certified for construction. State would then contract with a design-build contractor to complete the design for the project—which should take less time than it did under State’s former process because more up-front design work will have been completed—and carry out its construction. According to industry experts, the advantage of this approach is that an owner, in this case State, can initiate design sooner and ensure critical requirements are incorporated into a bridging design. Moreover, because the bridging architect will have developed the project to the point of being ready for construction certification, and because contractors can begin construction activities shortly after contract award rather than having to wait for State to certify the project for construction, design-build contract durations can be shorter. In addition, industry experts indicate that an owner may potentially receive a better price for design-build services by using this method. 
Because there would be fewer unknowns regarding the owner’s intent, as requirements would be more clearly delineated in the bridging design, contractors’ proposals should contain fewer allowances for uncertainties. Prior to implementing this approach to its fullest effect, OBO must reach an agreement with the Bureau of Diplomatic Security on how the bridging design approach would address the security requirements associated with NEC facilities and construction—such as building setback, Forced Entry/Ballistic Resistant (FE/BR) requirements, and technical security systems, among others—and whether the bridging designs would be certified to Congress as meeting all security requirements. Moreover, State’s working group noted that State must also ensure that the portions of the design that are certified as meeting all security requirements are contractually binding and preserved through the continuation of design and completion of construction by the design-build contractor. Keeping the bridging architect involved with the project through the design-build phase, for example, is one option that State may consider to ensure that the security features upon which certifications to Congress are based remain intact through the design-build contractor’s final design and construction. Starting with the fiscal year 2008 contract awards, State is generally extending the time frame within which projects must be built. Instead of basing a project’s schedule on its SED size classification, State’s new approach will set the schedule based on a variety of factors. In particular, this approach will draw upon recent experience from completed projects of similar scope and size, as well as project-specific considerations such as geographic location and host country conditions, to tailor schedules for new projects. 
As a result of having more time to complete projects, contractors are more likely to meet contract completion dates and will bear less risk of having to pay liquidated damages for delayed completion. At the recommendation of its internal working group, State is examining options to streamline its RFPs to better integrate requirements and convey information to the contractors that respond to them. A typical RFP consists of over 6,000 pages and contains elements such as the Space Requirements Program, which details the square footage needs of planned occupants, and test-fit drawings, which provide a notional layout of floor space. Because of their sheer size, RFP documents are difficult to maintain and often contain conflicting information that can inhibit contractors’ understanding of requirements and increase project risks. For example, the Space Requirements Program and the blocking and stacking documents—the latter providing a notional vertical stacking and floor-by-floor layout of office suites—provided in the RFP for the Managua NEC project misrepresented the actual size of the building. As a result of this discrepancy, State settled with the contractor on a $4.3 million modification that included a 165-day time extension. The working group also recommended that State establish a single RFP coordinating entity to maintain a “model RFP”—with appropriate document change control mechanisms—from which project-specific RFPs would be developed. Individual model RFPs specific to certain project types and delivery methods may also be developed. In addition, State intends to leverage technology by using automated applications to consolidate, update, and maintain its RFP documents—creating what it terms an “e-RFP.” State officials believe that the majority of improvements may not be seen until State’s fiscal year 2010 RFPs are issued because implementation will require enhancements to State’s information technology processes and applications. 
In the longer term, State intends to explore ways to make greater use of evolving Building Information Modeling (BIM) technologies—to include the migration of all RFP design and criteria data into a structure conducive to these technologies—to allow for more integration and exchange of project-specific information between State and contractors throughout every stage of the project. In addition, State established a goal to stagger the RFPs for its fiscal year 2008 projects so that contractors would have more time to respond to each solicitation. For its fiscal year 2008 projects, State staggered RFP issuance between May and July. In addition, when contractors asked for additional time, State generally granted their requests by providing a 10-day extension to the standard 45-day response time. State is working to develop a database of acceptable materials from foreign sources that contractors could use in construction. We first reported on this effort in 2006. State construction contracts require contractors to use U.S. materials and products unless they can demonstrate that a proposed substitute meets U.S. performance standards. Benefits to contractors of using materials available within the country include, for example, reduced shipping costs. However, to maintain schedule, contractors must obtain timely approval from State to use and procure such materials within that schedule. A few contractors we spoke with noted that State neither consistently approves the use of substitutes nor consistently provides timely decisions, even in cases where certain products or materials have been approved for use on another State project. While State reports it is continuing its efforts to develop such a database, contractors have yet to see evidence that State’s approach to approving substitutions is more efficient and timely. In 2006, State issued policy and procedures for processing a contractor’s request for equitable adjustment (REA). 
State reports it has recently implemented a new centralized system for receiving, processing, and tracking all REAs at OBO headquarters. REAs are visible to senior managers—REA status is reviewed monthly at Program Performance Review meetings—who hold project directors accountable for providing timely responses. Under this process, State seeks to receive, assess the merits of, and respond to contractors’ REAs within 55 days. However, if State requires additional information following the receipt of an REA, the time needed for the ensuing information exchange and related discussions may affect State’s ability to achieve final resolution within 55 days. Given that State may, in certain instances, be unable to address an REA within 55 days, some contractors with whom we spoke said that State still takes too long to respond to REAs. In addition, one contractor indicated that State purposely defers its decisions on REAs so that it can use them as leverage in future negotiations. For example, State might negotiate waiving or reducing liquidated damages that it could assess for a contractor’s late completion in exchange for the contractor withdrawing its REA. While we did not examine the merits of these allegations, contractors’ concerns suggest that State’s continued attention to REA management is needed. In September 2008, State created a dedicated project management group responsible for providing coordination and oversight from planning through construction and commissioning. State initiated this effort based on a recommendation of its internal working group, which drew, in part, on a finding by State’s Office of the Inspector General that OBO’s former organizational structure—in which project management responsibility passed sequentially from a planning office to an executing office—allowed for only marginally effective coordination, communication, and accountability. 
Under the former organizational structure, project executives responsible for construction and commissioning were not heavily involved in planning efforts conducted by planning managers. Similarly, planning managers who typically spent at least a year developing project requirements prior to the contract award were normally not involved in design and construction oversight efforts managed by project executives. As a result, no one in OBO maintained comprehensive knowledge of a project from start to finish, which may have contributed to accountability gaps when a project passed from one office to the next. The new project management group, the Project Development and Coordination (PDC) Division, resides within the Office of Program Development, Coordination, and Support—formerly called the Office of Project Execution. Project managers in the PDC Division—who will be required to obtain project management certification, in accordance with Office of Management and Budget (OMB) requirements—will lead a multidisciplinary team of subject matter experts in performing project management functions. During the project’s planning and design phases, project managers will be responsible for efforts such as developing the RFP, overseeing procurement of the design bridging contract, chairing design review meetings, and approving design changes. During the construction phase, project managers will coordinate with construction executives from the Office of Construction, Commissioning, and Maintenance to support on-site project directors in administering NEC construction contracts. The working group noted that during construction the three individuals—project director, project manager, and construction executive—must have clearly defined roles that are properly coordinated to avoid confusion and to ensure that resources are being used efficiently. 
Under the new organizational structure, the project director is State’s on-site representative, who routinely interfaces with the contractor and serves in the key role of contracting officer’s representative (COR). The construction executive is the Washington-based focal point for all communications from the project director and performs key functions, such as serving as the alternate COR and processing invoices and project change requests. At the same time, the project manager serves as leader of the Washington-based team and is responsible for tasks such as leading integrated design reviews; managing contract documents; and, in conjunction with the construction executive, reporting to senior management on project performance at monthly review meetings. However, because the organizational change was only recently implemented, it is too early to determine whether it enables project directors, project managers, and construction executives to effectively coordinate efforts and optimize project management efficiencies. From 1999 to December 2008, State constructed 64 new embassies, consulates, and annexes and relocated more than 19,500 U.S. government employees to safe, secure, and functional state-of-the-art office buildings. In 2007, State concluded that “the current pool of American contractors qualified and able to carry out diplomatic construction projects overseas has nearly reached its capacity.” That year, State proposed amendments to the Omnibus Diplomatic Security and Antiterrorism Act of 1986 that would allow smaller U.S. companies and foreign companies to compete for projects for which they currently would not qualify. Congress did not act on the proposed amendments. In December 2008, State indicated it would modify that proposal to extend greater opportunities only to U.S. construction firms that currently cannot meet the U.S. persons definition. 
However, State has completed no systematic analysis in support of its conclusion and legislative proposal, including assessments of the significance and cause of changes in contractor willingness to participate in the NEC program, how these changes have affected its construction program, how its proposed amendments would address those causes and effects, the risks associated with its proposed amendments, or how it would mitigate those risks. In addition, State has not assessed the extent to which companies comprising the U.S. construction sector are capable of meeting the current criteria. Absent such support, it is unclear how State’s proposed amendment would affect the NEC program. In our analysis, we found that contractor participation declined in recent years for two reasons. First, increasing construction costs have made it more difficult for some firms to qualify for awards. Second, contractors reported that State management and construction processes undermine their ability to turn a profit, which is their primary incentive for participating in the program. State has recently implemented a number of changes to its management of the NEC program by improving communications with the contractor community, refining some of its management practices by implementing process reforms and mitigating some of the risks associated with NEC projects, and reorganizing its management structure. These efforts are designed to improve State’s overall management of the NEC program, including increasing the number of firms willing to participate in the program, and they address some of the important factors contractors reported as affecting their decisions to participate in the NEC program. While these changes may increase contractor participation, their full effects on the NEC construction process may not be apparent for a number of years, and State will need to monitor their effectiveness. 
We recommend that the Secretary of State conduct a systematic review of the embassy construction contractor base that (1) demonstrates whether the U.S. contractor base that is both capable of meeting current requirements and willing to participate in the NEC program is adequate; (2) estimates the expected benefits and identifies the potential risks associated with the legislative proposal; and (3) details how the risks would be mitigated. In written comments on a draft of this report, State said that although the contractor base has been adequate in that it has met the legislatively specified minimum level of competition, the program could benefit from expanding competition. Additionally, State said it would revise its proposed amendment to the Omnibus Diplomatic Security and Antiterrorism Act of 1986 by opening competition for NECs only to U.S. companies that meet the specified security requirements for a project, rather than requiring them to meet the current statutory definition of a U.S. person. State also said that since full and open competition is a central principle for federal acquisitions, a cost-benefit analysis is unnecessary. We disagree with State’s view. State initiated a process to revise the qualifying criteria for NEC awards, but it has provided no compelling analytical support for why the criteria should be amended, how such an amendment would be implemented, the expected benefits and potential risks associated with the changes, or how any identified risks would be mitigated. Absent such support, it is unclear how the proposed changes would affect State’s program. We, therefore, believe our recommendation remains valid. In a draft of this report, we had a second recommendation that State assess how its efforts to improve communication with contractors, implement process reforms and mitigate project risks, and reorganize its management structure affect contractor participation.
In a December 2008 meeting with State officials and in State’s written comments, the department noted that it would continue to actively engage with contractors and assess its performance. However, State also noted that it may take a number of contract cycles for its recent outreach efforts and procedural and organizational reforms to achieve their full impact. We agree that it may take time before the overall effectiveness of State’s recent efforts can be fully assessed. Therefore, we decided to delete the recommendation. State’s comments, along with our responses to specific points, are reprinted in appendix III. State also provided technical comments, which were incorporated into the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and the Secretary of State. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Jess T. Ford at (202) 512-4128 or [email protected], or Terrell G. Dorn at (202) 512-6923 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To address the first objective—how contractor participation in the NEC program changed in recent years—we assessed the number of firms that prequalified and the number of contract proposals (bids) submitted for each new embassy compound (NEC), new consular compound (NCC), and new office building (NOB) awarded from 2002 to 2008. Collectively, we refer to this class of projects as NECs.
Data for prequalifying firms were derived from the Department of State’s (State) Office of Logistics Management and were cross-referenced with prequalification records derived from the Federal Business Opportunities Web site (http://www.fbo.gov). Data for the number of bids were also derived from State’s Office of Logistics Management and, to the extent possible, were corroborated with contract information and State analyses obtained during previous GAO work. In cases where discrepancies occurred between the two sources, or where we could not confirm the data, we used the data provided by State. Data for the number of prequalifying firms and number of bids submitted for all 10 NECs awarded from 1999 to 2001, as well as for three projects from 2002 onward, were unavailable; thus, they were excluded from the analyses. Non-NEC projects, including those labeled by State as interim office buildings (IOB), newly acquired buildings (NAB), new office annexes (NOX), and Standard Secure Mini Compounds (SSMC), were also excluded from the analyses. Table 2 shows the NEC projects included in our analyses, and the numbers of prequalifying firms and bids submitted for each NEC award (see page 16 of this report). We determined that these data on the numbers of prequalifying firms and bids received were sufficiently reliable for our purposes. To determine how contractor participation has changed over time, we tracked the variations in the yearly average number of firms per project that prequalified to bid for NECs and the yearly average number of bids submitted per NEC project. We also compared these averages with the yearly average estimated NEC project costs. Data for estimated costs were derived from two sources. For fiscal years 2005 to 2008, the estimated costs for NECs were derived from notices of solicitations for contractors to submit prequalification packages. In cases where a range was provided for the estimated cost, we used the maximum estimated value for our analysis.
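The yearly-average tracking described above amounts to grouping per-project figures by fiscal year and averaging. A minimal sketch of that computation follows; the records below are purely illustrative, not the report’s actual data (which appear in table 2):

```python
from collections import defaultdict

# Hypothetical (fiscal_year, prequalifying_firms, bids) records for NEC
# awards; the real values come from State's Office of Logistics Management.
awards = [
    (2002, 10, 5), (2002, 8, 4),
    (2006, 14, 3), (2006, 12, 4),
    (2008, 4, 2), (2008, 5, 2),
]

def yearly_averages(records, field):
    """Average a per-project field (1 = prequalifying firms, 2 = bids)
    by fiscal year."""
    by_year = defaultdict(list)
    for rec in records:
        by_year[rec[0]].append(rec[field])
    return {year: sum(vals) / len(vals)
            for year, vals in sorted(by_year.items())}

avg_firms = yearly_averages(awards, 1)  # avg. prequalifying firms/project
avg_bids = yearly_averages(awards, 2)   # avg. bids submitted/project
```

These yearly averages can then be compared side by side with the yearly average estimated project cost, as the analysis describes.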
Estimated costs for 2002 to 2004 were calculated based on (1) a 2005 OBO analysis of variances between contractor bid prices and the government-estimated prices for each NEC project and (2) the actual original value of the contract award. We determined that these cost data were sufficiently reliable for our purposes. We also developed two regression models to understand the factors that influence contractor participation. Each model was based on the individual NEC contracts (NEC, NOB, and NCC) that were awarded from 2002 to 2008 (see table 2 on page 16 of this report). The first model used fiscal year and estimated project cost as independent variables to predict the number of firms that prequalify to bid per NEC project. We found a statistically significant inverse relationship between the estimated NEC project cost and the number of prequalifying firms (coefficient estimate = -0.09, p-value = 0.000), such that higher estimated costs result in fewer prequalifying firms, and lower estimated costs result in more prequalifying firms. However, the relationship between fiscal year and the number of prequalifying firms was not statistically significant (coefficient estimate = 0.34, p-value = 0.108). In the second model, we used the fiscal year and the number of prequalifying firms for each NEC project as independent variables to predict the number of bids received per NEC project. We found a statistically significant inverse relationship between fiscal year and the number of bids per project (coefficient estimate = -0.19, p-value = 0.020), such that the number of bids per NEC project declined significantly from 2002 to 2008. We also found that the number of prequalifying firms per project is a significant predictor of the number of bids received (coefficient estimate = 0.18, p-value = 0.002), such that more bids are received when more firms prequalify to do so. In our modeling, we considered statistical significance to help specify the variables to include in our models.
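The structure of the first model can be sketched as an ordinary least squares fit of prequalifying firms on fiscal year and estimated cost. The data below are invented for illustration, and the toy normal-equations solver is not the statistical software behind the report’s estimates and p-values; it only shows the mechanics of the fit:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. Each row of X starts with a 1
    for the intercept."""
    n, k = len(X), len(X[0])
    # Build X'X and X'y.
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Forward elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coef = [0.0] * k
    for r in reversed(range(k)):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

# Illustrative rows: [1, fiscal year - 2002, estimated cost in $ millions].
X = [[1, 0, 50], [1, 0, 80], [1, 2, 60],
     [1, 4, 100], [1, 6, 120], [1, 6, 90]]
y = [12, 9, 10, 6, 4, 6]  # prequalifying firms per project
intercept, year_coef, cost_coef = ols(X, y)
# With these made-up data, cost_coef comes out negative: higher estimated
# cost predicts fewer prequalifying firms, mirroring the reported direction.
```

The second model has the same shape, with fiscal year and prequalifying firms as the predictors and bids received as the outcome; significance testing (the p-values quoted above) requires standard errors, which a full statistical package would supply.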
To address the second objective—the degree to which State has assessed the need for or potential outcomes of its proposed amendment to the Omnibus Diplomatic Security and Antiterrorism Act of 1986—we reviewed State documents on legal requirements to qualify for NEC awards, State’s proposed amendments to these legal requirements, and State’s contract solicitation and award processes. We also conducted interviews with State staff on the level of analyses State completed in support of the proposed amendment, the likely benefits that would be gained, how risks to the government would change, and how those risks would be mitigated. To test State’s assertion that sufficient capacity no longer exists among the U.S. contractor base to complete NEC awards, we compared the extent to which firms listed by Engineering News Record (ENR) in its compilation of the top 100 U.S. design-build firms for 2008 had prequalified for and received NEC awards from 2002 to 2008. The top 100 list is determined by ranking companies’ total 2007 revenues derived from design-build contracts where those construction projects are designed and built by its own workforce or in conjunction with joint-venture partners and subcontractors. We also searched Web sites of, and conducted Lexis-Nexis searches on, the top 100 companies to determine whether these firms have ongoing operations in countries where State’s Bureau of Overseas Buildings Operations (OBO) plans to build NECs in 2009, as listed in OBO’s Long-Range Overseas Buildings Plan, FY 2008-2013.
Underlying these analyses is our assumption that the companies on this list could meet at least the financial criteria, as outlined in the Omnibus Diplomatic Security and Antiterrorism Act of 1986, to qualify for NEC awards since (1) the 96th-ranked firm prequalified to bid for NEC awards in fiscal year 2008 that were in excess of its 2007 revenues and (2) at least four other firms not on the list—American International Contractors Inc. (Special Projects), Aurora LLC, Environmental Chemical Corporation International, and Framaco International—prequalified for the 2008 awards. We did not independently confirm the validity of ENR’s methodology for developing its top 100 ranking, nor did we independently verify the accuracy of information derived from the company Web sites or Lexis-Nexis searches. However, because our analysis was designed to illustrate a potential for untapped contractor capacity, we determined that the data we used were sufficiently reliable. To address the third objective—factors that affect contractors’ decisions to participate in State’s construction program—we identified the universe of 21 U.S. construction firms that won awards to build U.S. embassies, consulates, and diplomatic annexes since 2001. Foreign firms and U.S. firms awarded only contracts for construction other than office buildings, such as Marine Security Guard quarters, staff housing, and other construction projects, were not included in our census. Four of the 21 U.S. firms were excluded from our interviews: one company is no longer in business; two others received sole-source contracts that would make them unable to respond to a number of the competitiveness questions in our interview instrument; and a fourth was excluded because we could not arrange a meeting with that company. As a result, we interviewed 17 U.S. contractors from March through June 2008 (see table 6).
From 2001 to 2007, these 17 companies were awarded 78 NEC and related contracts with original values totaling approximately $4 billion. This latter value represents 81 percent of all embassy, consulate, and annex construction contracts awarded through 2007. To obtain consistent information from the contractors, we developed a structured interview instrument that included approximately 70 closed- and open-ended questions designed to assess contractor views and experiences on a wide range of construction-related topics, including (1) construction experience and experience with State and other federal agencies; (2) State’s program-level and on-site construction management policies and processes; (3) incentives for pursuing construction awards; (4) challenges in completing NEC and related construction projects; and (5) profitability of NEC and related projects. To ensure that respondents understood the questions in the same way, that we had phrased the questions appropriately for this population, and that we had covered the most important issues, we pretested our instrument with three contractors and made revisions based on their input. Before we fully implemented the instrument, staff from the U.S. Naval Facilities Engineering Command and State’s Office of the Inspector General reviewed it. In addition, we briefed staff from OBO and State’s Office of Logistics Management on the instrument’s content, implementation schedule, and intended respondents.
To address the fourth objective—actions State has taken to address the reported decline in contractors willing to participate in the NEC program—we reviewed documentation and conducted interviews with knowledgeable State officials on (1) rules and regulations outlining the embassy construction process, including public laws, Federal Acquisition Regulations, the Foreign Affairs Manual, and State reports and decision memos; (2) delivery methods and partnering policies employed by other federal agencies and supported by leading industry groups; (3) State’s efforts to improve communications with the contractor community, including meetings with industry groups and individual contractors; (4) State’s reorganization of planning offices, including the development of a new project management group and project manager positions; and (5) State efforts to improve construction processes, including lengthening project schedules, streamlining the contract solicitation process, and clarifying contract documents. We also attended State’s monthly program performance meetings, its quarterly Industry Advisory Panel meetings, and its annual Industry Day meeting, at which information about contract opportunities was presented to firms who attended the event. Finally, we reviewed past GAO work on embassy construction and met with and reviewed the report of a State Inspector General inspection team reviewing OBO operations. We conducted this performance audit from October 2007 to January 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Contractors rated 24 challenges as “Major,” “Moderate,” “Minor,” or “Not a Challenge.” Then, they rank-ordered their major challenges. We created the challenge categories to facilitate analysis and discussion. Table 5 in the main text shows the rank-ordered major challenges. Table 7 displays how contractors rated all the challenges. The following are GAO’s comments on the Department of State’s letter dated December 31, 2008. 1. We have adjusted the title of this report to reflect State’s comments that although State has taken actions designed, in part, to increase contractor participation, the addition of more interested, responsible bidders for State projects would benefit the program. 2. State notes that it has recently increased the construction schedules for each class of SED, and streamlined the RFP process and reformed some design processes to help contractors better understand requirements and begin construction more quickly. State also said that contractors should be aware of the period of performance when preparing bids for State projects. We acknowledge in the body of the report that State has increased the time allotted to construct the various categories of SEDs. We agree with State that, based on the RFP documents, contractors should be aware of the planned construction time frame and the associated risk prior to submitting contract proposals. The report acknowledges State’s efforts to streamline RFP documents and improve the design process; however, it may take a number of years and completed projects to determine how the changes affect contractors’ ability to complete construction according to schedule. 3. State said that its methodology for estimating risk and salary increases is not arbitrary, but rather based on demonstrated financial criteria. Since we did not analyze how risks and salaries are estimated, we make no explicit or implicit statement on the validity of the methodology used. 
State also said that a solution to less competition for NEC awards could be to draw in larger U.S. firms that have the financial capacity for the work, in accordance with GAO recommendations. As a result, it has recently reached out to one of the largest U.S. construction contractors and would continue outreach efforts with other large firms. Although we do present an analysis that shows a significant number of large contractors with overseas experience have not been part of the NEC program, we do not recommend that State rely solely on large contractors. 4. State questioned why we chose to highlight the declines in the number of firms prequalifying to bid on NECs from 2006 to 2008 when that decline was not statistically significant. We explain within the text that there was an overall decline in the number of firms prequalified to bid on NEC projects from 2002 to 2008. We also state that the large number of firms that prequalified in 2006 is likely explained by the relatively low cost for the projects awarded that year. We also note that from 2006 to 2008, the average number of firms prequalified per project decreased by 69 percent, from 13 to 4. When considered with the declines in the number of firms bidding on projects, these declines indicate a decrease in contractor participation, especially in recent years. 5. State commented that the results of our contractor interviews could be biased from our exclusion of a small contractor with a limited-sized claim and our inclusion of a large contractor with current claims of more than $90 million. We do not believe our analysis was affected by contractor bias. The draft report incorrectly stated the reason for excluding the first contractor State cited from the structured interviews. The contractor was excluded primarily because its only contract from 2001 to 2007 was awarded as a sole-source contract.
Because the contractor received a sole-source contract, we believed it could not address many of the questions involving competition for NEC awards. We have made this correction in appendix I. We note this same contractor was terminated from its contract for nonperformance and subsequently filed for and received bankruptcy protection as a result of the termination. As a result, we felt its inclusion could risk biasing our work, but the contractor was not excluded for this reason. The second contractor State cited was a major participant in the NEC program, having completed 17 NEC and other projects from 1999 to 2008. We are aware of the claims the contractor has filed for a number of projects it performed, but we do not believe the existence of those claims biased our discussions with that firm or our findings, in general. 6. State disagreed with many of our comments in a draft of this report on the role of the new project manager positions. In particular, State commented that project managers are not intended in any way to replace existing oversight during the construction phase as stated in our report. Our draft report did not, in fact, indicate that project managers should replace existing oversight. Rather, our intent was to question whether the project manager’s effectiveness in performing lifecycle oversight of a project could be compromised by sharing reporting responsibilities with the construction executive during the construction phase. Nonetheless, in light of State’s comments, and to avoid confusion, we deleted the paragraph on which State’s comments are based. 7. State clarified that project directors could approve contract modifications of up to $25,000 per change and up to $250,000 per year. Changes were made to the text based on State’s comment. Jess T. Ford, (202) 512-4128 or [email protected], Terrell G. Dorn, (202) 512-6923 or [email protected].
In addition to the individuals named above, Michael Courts, Assistant Director; Michael Armes; John Bauckman; Sam Bernet; Eugene Beye; Paola Bobadilla; and Joseph Carney made key contributions to this report. Ashley Alley, Martin De Alteriis, Colleen Candrl, Jonathon Fremont, Elizabeth Helmer, Cardell Johnson, Dae Park, and William Tuceling provided key technical support to this report.
To provide safe and secure workplaces for overseas posts, the Department of State (State) has built 64 new embassy compounds (NEC) and other facilities since 1999, has 31 ongoing projects, and plans to build at least 90 more. In 2007, State reported the U.S. contractor pool for building NECs had reached its limit and proposed legislation to amend the criteria to qualify for NEC awards. GAO was asked to examine (1) how contractor participation in the NEC program changed in recent years, (2) the degree to which State assessed the need for and potential outcomes of its proposed amendment, (3) factors contractors consider when deciding to participate in the program, and (4) actions State has taken to address reported declines in contractor participation. GAO examined two indicators of contractor participation; reviewed State documents and proposed legislation; and interviewed State officials and U.S. firms that won NEC awards from 2001-2007. State received at least two bids--the legislatively specified minimum for adequate competition--for 60 of the 61 NEC projects it awarded from 1999-2008, and received three or more bids for at least 49 of the 61. Nonetheless, there was a statistically significant decline in the number of bids per NEC project from 2002 to 2008. GAO also found that the number of firms prequalified to bid on NEC projects also declined during this period. While many factors could affect contractor participation, GAO found the declines in the number of prequalifying firms and bids received were due, in part, to rising construction costs, which made it more difficult for some firms to meet qualification criteria. In addition, officials from five firms cited insufficient profits and State management practices as reasons for their recent withdrawals from the program. 
State has not systematically assessed the need for, or the possible outcomes of, its legislative proposal that would open competition for NEC awards to construction firms that cannot meet current qualification criteria. Although State identified several factors it believed reduced contractor participation, it has not assessed whether a sufficient number of contractors capable of meeting current requirements exists or how its legislative proposal would affect the NEC program. Specifically, State has not assessed the potential benefits or identified the potential risks of its legislative proposal, and has not stated how the risks would be mitigated. Absent these analyses, it is unclear whether the proposed amendment, including its December 2008 revision, would benefit State's embassy construction program. Contractors interviewed by GAO cited various incentives and challenges that affected their decision to participate in the NEC program. Although making profits was cited as the primary incentive for participating, contractors reported losing money on 42 percent of the contracts they performed. Contractors also cited several significant challenges that affected their decisions to submit contract proposals, including meeting State's shortened construction schedules, supplying labor and material to remote locations, finding and retaining cleared American workers, managing financial constraints, and dealing with foreign governments. Firms also expressed concerns with State's processes, including unclear solicitation documents and contract requirements, laborious design reviews, and State's 2001 decision to end formal partnering relationships with contractors. State has made several recent efforts to encourage contractors' participation in the NEC program. 
State has begun new outreach efforts to improve relations with contractors, and undertaken several changes to its management practices and organizational structures, including lengthening project schedules, improving clarity of contract requirements, and establishing a project management group to provide coordination and oversight throughout each phase of a project. While these changes address some contractor complaints, their full effects may not be apparent for a number of years.
Tax expenditures are provisions of the tax code that are viewed as exceptions to the “normal structure” of the individual and corporate income tax (i.e., exceptions to taxing income). They take the form of exemptions, exclusions, deductions, credits, deferrals, and preferential tax rates; however, not all such provisions are tax expenditures. For example, some provisions that determine tax liability, such as business expense deductions, are not considered to be tax expenditures because costs of earning income are usually deducted in calculating taxable income for businesses. Generally, tax expenditures grant special tax relief for certain kinds of behavior by taxpayers or for taxpayers in special circumstances. Holding tax rates constant, tax expenditures result in forgone revenue, that is, revenue the government gives up by granting the relief. Many of these provisions may, in effect, be viewed as spending programs channeled through the tax system. Congress updated the statutory framework for performance management in the federal government, the Government Performance and Results Act of 1993 (GPRA), with the GPRA Modernization Act of 2010 (GPRAMA). Both acts require agencies to set goals and measure and report the performance of their programs. GPRAMA introduced a more integrated and crosscutting approach to performance measurement that cuts across organizational boundaries. The act requires that OMB, in coordination with agencies, develop long-term crosscutting priority goals to improve performance and management of the government. OMB is to coordinate annually with agencies to develop a federal government performance plan that establishes performance indicators for achieving these goals. Moreover, GPRAMA requires that this plan identify the tax expenditures that contribute to each crosscutting priority goal. As we noted in a recent report, sporadic progress has been made along these lines.
OMB Circular A-11 guidance directs agencies to list tax expenditures among the various programs and activities that contribute to the subset of performance goals that are designated as agency priority goals. A performance evaluation of a tax expenditure program would use largely the same concepts, methods, and types of data as an evaluation of an outlay program. In prior reports, we have described in some detail how such program evaluations would be conducted to measure progress toward achieving the program’s intended purpose. Even if a tax expenditure is meeting its intended purpose, broader questions can be asked about its effects beyond that purpose. Specifically, the long-standing criteria of fairness, economic efficiency, transparency, simplicity, and administrability can be used to evaluate whether a tax expenditure is good tax policy. Some agencies may be better positioned to collect tax expenditure information and make it available for analysis than others. As we said in our Guide for Evaluating Tax Expenditures, for a tax expenditure that is part of a crosscutting agency priority goal, the responsible agencies identified in the related performance plan may be the logical agencies responsible for evaluating the tax expenditure. Although IRS is the federal agency responsible for administering tax expenditures, it is not responsible for the program areas targeted by many tax expenditures. The information available at IRS is generally limited by the Paperwork Reduction Act to data used for tax administration, not for performance evaluation. Of the 163 tax expenditures identified by Treasury for tax year 2011, 102, or 63 percent, were not on a tax return, information return, or other tax form; or they were on these tax forms but did not have their own line item, as shown in table 1. For these tax expenditures, the tax forms do not capture information on who claimed the tax expenditures and how much they claimed.
An example of a tax expenditure not on a tax form is the exclusion of interest on life insurance savings, where the taxpayer is not asked to report the amount of the exclusion anywhere on a tax form; an example of a tax expenditure without its own line item is the credit for holding clean renewable energy bonds, where the credit is aggregated with other credits on a single line item. Nearly all deferrals or exclusions were either not on a tax form or did not have their own line item. For information on our classification by specific tax expenditure, see appendix II. If a tax expenditure has its own line item on a tax form, the IRS can identify the claimant and the amount of the claim. Tax expenditures with their own line items account for about half of the total. As shown in figure 1, they accounted for $501 billion of the almost $1 trillion of revenue estimated by Treasury in 2011 for the tax expenditures that we analyzed. The remaining $492 billion was associated with tax expenditures that were not on tax forms or did not have their own line items. Having such basic information about tax expenditures can facilitate certain kinds of analysis. Specifically, when a tax expenditure has its own line item, the claimant can be matched to his or her income, which is also reported on the tax return. This linkage facilitates analyses of the distributional effects of a tax expenditure by showing tax expenditure use by income category. The sum of tax expenditure revenue loss estimates that appear in figure 1 approximates the total revenue forgone through tax expenditure provisions. While sufficiently reliable as a gauge of general magnitude, the sum of the individual revenue loss estimates has important limitations in that any interactions between tax expenditures will not be reflected in the sum.
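The distributional analysis that a line item makes possible boils down to bucketing claimed amounts by the income reported on the same return. A minimal sketch, using hypothetical returns and income brackets (the record values and bracket bounds are invented for illustration):

```python
from bisect import bisect_right

# Hypothetical tax-return records: (adjusted gross income, amount claimed
# on the tax expenditure's line item), in dollars.
returns = [
    (25_000, 400), (48_000, 900), (95_000, 1_500),
    (160_000, 2_200), (310_000, 3_800),
]

# Bracket upper bounds; incomes above the last bound fall in the top bracket.
bounds = [50_000, 100_000, 200_000]
labels = ["under $50k", "$50k-$100k", "$100k-$200k", "over $200k"]

# Total amount claimed per income bracket.
use_by_income = {label: 0 for label in labels}
for agi, claimed in returns:
    use_by_income[labels[bisect_right(bounds, agi)]] += claimed
```

Without a dedicated line item, neither the claimant nor the amount is on the return, so no such table can be built from tax data alone.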
Data necessary to assess how often a tax expenditure is used and by whom generally would not be collected on tax returns unless IRS needs the information to know the correct amount of taxes owed or is legislatively mandated to collect or report the information. IRS is obligated under the Paperwork Reduction Act to keep the administrative burden on taxpayers as low as possible, while still fulfilling its mission. In prior reports, we identified tax expenditures that could not be evaluated because appropriate data were not available from any source, including sources other than IRS. One example is Indian reservation depreciation (IRD), where IRS did not collect information on the identity of claimants, amounts claimed, or the location of the qualified investment. In addition, we could not find reliable data at other agencies on which taxpayers use IRD, how much IRD investment was made, or whether the provision was having a positive effect on economic development. For some tax expenditures, IRS data limitations can be remedied to some extent by information available from other federal agencies. For example, for Empowerment Zone (EZ) employment tax credits, IRS cannot separate the total credits claimed to show how much was claimed for specific EZ communities. This limitation is partially remedied by the Department of Housing and Urban Development (HUD), which collects community level information for some EZ-related tax expenditures. However, as we have previously reported, HUD was unable to validate the information on the use of some of these tax expenditures and it tracks only a portion of the EZ employment credits. HUD and IRS have begun collaborating to produce better data on the use of EZ tax credits. For some tax expenditures, it may be possible to estimate missing IRS information using other sources such as public records, state agency records, and surveys. However, in general, such estimates cannot be expected to be as precise as data from tax returns. 
For example, in the case of an evaluation of the Research Tax Credit, a measure of spending that qualifies for the credit derived from research spending as reported on corporate annual reports will not be as accurate as a measure derived from corporate tax returns because of differences in the tax and accounting rules for reporting the spending. The less accurate data can lead to less reliable conclusions from the evaluation. After reviewing the GPRAMA-mandated cross-agency priority (CAP) goals established by OMB and federal agencies, we chose four outlay programs—three addressing energy efficiency and one addressing job training—that we considered to be comparable to certain tax expenditures based on their similar purposes. Table 2 provides descriptions of these tax expenditures and comparable outlay programs. As shown in table 3, the four comparable outlay programs and tax expenditures associated with DOE and DOL had broadly similar purposes in the areas of energy conservation and employment. As shown in table 4, DOE and DOL produced performance measures and goals for outlay programs in their annual reports but did not do so for the comparable tax expenditures. Agencies were not required by GPRAMA to produce these measures for the tax expenditures. Also, IRS collects the basic information about claimants and the amounts claimed for the four tax expenditures in our case studies. (See appendix II table 5). But, since IRS is not tasked with evaluating tax expenditures, it has not formulated performance measures or goals for these tax expenditures. The performance measures shown in table 4 can track the progress of the outlay programs, on an ongoing basis, toward specific goals (stated in terms of number of gallons produced, number of turbines installed, etc.). However, additional data may be needed for an assessment of broader purposes and the impact of the programs. 
For example, for the vehicle technologies program, the purpose of reducing petroleum consumption can be measured by the performance measure in table 4 (gallons of petroleum saved), but additional data are needed to measure the outlay program’s broader purpose of reducing environmental impacts. With so much spending going through the tax code in the form of tax expenditures, the need to determine whether this spending is achieving its purpose becomes more pressing. This report identifies gaps in the data required to evaluate tax expenditures but makes no recommendations on how to fill these gaps. A key step in collecting the data is first determining who should undertake this task. As we said in our guide for evaluating tax expenditures, the agency or agencies responsible for the program ought to determine what data should be collected to evaluate tax expenditures relevant to their goals. We recommended in our 2005 report that the Director of OMB, in consultation with the Secretary of the Treasury, determine which agencies will have leadership responsibilities to review tax expenditures and how to address the lack of credible performance information on tax expenditures. However, these agencies have not yet been identified. GPRAMA may make a start on answering the question of who should evaluate tax expenditures by requiring that the responsible agencies identify the various program activities that contribute to their goals, which we believe should include tax expenditures. IRS reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Acting Commissioner of Internal Revenue and other interested parties. This report will also be available at no charge on GAO’s website at http://www.gao.gov.
If you have any questions on this report, please contact me at (202) 512- 9110 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine what Internal Revenue Service (IRS) data are available for evaluating tax expenditures, we used 173 tax expenditures for fiscal year 2011 that were developed by the Department of the Treasury (Treasury) and reported by the Office of Management and Budget (OMB) in Analytical Perspectives, Budget of the United States Government, Fiscal Year 2012. We reviewed IRS tax returns, tax forms, information returns, and publications for tax year 2011 and categorized the tax expenditures based on whether they were (1) not listed on tax forms, (2) listed on tax forms but did not have their own line item, or (3) listed on tax forms and had their own line item so the claimants and the amount claimed could be identified. The tax returns we reviewed were primarily Form 1040 for individual taxpayers, Form 1120 for corporate taxpayers, and Form 990 for tax-exempt organizations. Although the tax expenditure concept can also be applied to other kinds of taxes, such as excise taxes, this report covers only tax expenditures for the federal income tax system. We sent a list of the tax expenditures that we initially identified as not appearing on a tax form to IRS for verification of our assignment of these tax expenditures to this category. (IRS was not able to verify our assignment of all of the tax expenditures to their categories due to time constraints encountered as the agency readied for the tax filing season.) In addition, when IRS verified those not listed on a tax form, it generally used information from tax returns but not from information returns. 
Our assignment of the tax expenditures to the three categories sometimes required that we make judgments about the adequacy of the information on the tax forms. For example, according to IRS, the amount of deferred income from installment sales can be obtained from Form 6252 (Installment Sale Income) by subtracting the installment sale income line item from the gross profit line item. However, according to OMB, this difference does not represent the amount of the tax expenditure. The tax expenditure is the deferred amount less than $5 million for which non-dealers are not required to pay interest on their deferred taxes. Therefore, since we could not identify the deferral amount for non-dealers, and the amount of the tax expenditure deferral does not have its own line item, we classified it as a tax expenditure that is on a tax form but does not have its own line item. During our matching, we identified 10 tax expenditures that we did not include in our analysis because (1) they were not available in tax year 2011, such as the Hope Tax Credit, which was temporarily replaced by the American Opportunity Tax Credit; (2) some but not all parts of the tax expenditure were on a tax form, such as the Exclusion of Benefits and Allowances to Armed Forces Personnel, where only the combat pay portion was reported on a tax form—Form W-2 (Wage and Tax Statement); or (3) reporting of the tax expenditure was optional, such as Employer Plans on Form W-2. Some tax expenditures use multiple tax forms, multiple line items, or both on the forms to account for all parts of the tax expenditure. When multiple forms or line items were used, we considered the tax expenditure as having its own line item only when all parts of the tax expenditure were on tax forms and had their own line items. For example, the Adoption Credit and Exclusion tax expenditure lists the credit and exclusion on different line items of Form 8839 (Qualified Adoption Expenses).
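The two judgments described above can be made concrete in a short sketch: the subtraction mirrors the Form 6252 arithmetic IRS described, and the multi-part rule is the all-parts test we applied. The field names and example figures below are our own labels for illustration, not official IRS line numbers:

```python
# Deferred income from an installment sale, per the Form 6252 arithmetic
# described in the text: gross profit minus the installment sale income
# recognized in the current year. Figures are illustrative only.
def deferred_income(gross_profit, installment_sale_income):
    return gross_profit - installment_sale_income

print(deferred_income(80_000, 30_000))  # 50000 deferred

# Classification rule for multi-part tax expenditures: a provision
# counts as having its own line item only if every part is on a tax
# form and has its own line item.
def has_own_line_item(parts):
    return all(p["on_form"] and p["own_line"] for p in parts)

# The Adoption Credit and Exclusion: both parts appear on Form 8839
# with their own line items, so the provision qualifies.
adoption = [
    {"part": "credit", "on_form": True, "own_line": True},
    {"part": "exclusion", "on_form": True, "own_line": True},
]
print(has_own_line_item(adoption))  # True
```

The installment-sale case fails a further test not shown here: even with the subtraction, the non-dealer share under $5 million cannot be isolated, so the provision lands in the "on a form, no own line item" category.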
Therefore, we considered this tax expenditure as having its own line item. We used the revised estimates for fiscal year 2011 reported in the fiscal year 2013 budget to identify the tax expenditure amounts. We chose the tax expenditure estimates reported in the budget for our analysis because Treasury develops revised estimates based on changes in tax policy and economic activity for the year prior to the reported fiscal budget year (i.e., retrospective estimates). Even though Treasury’s estimates are retrospective, the final reported numbers are still estimates and may not reflect additional policy changes. In addition, tax expenditure revenue loss estimates for specific provisions do not take into account potential behavioral responses by taxpayers to changes in these provisions. These revenue loss estimates do not represent the amount of revenue that would be gained if certain tax expenditures were repealed, since repeal would probably change taxpayer behavior in some way that would affect revenue. For tax expenditures that were listed on tax forms, we reviewed IRS’s Statistics of Income (SOI) Proposed Tax Year 2010 Forms and Schedules to determine whether SOI collected data for tax year 2010, the latest year available. We also reviewed SOI publications to identify the types of available information and whether they included tax expenditures. We reviewed our prior reports to identify instances where IRS data available for evaluating tax expenditures were limited and the limitations were not remedied by data from other sources. To analyze examples of data that agencies used to evaluate outlay programs that are comparable to tax expenditures, we visited the performance.gov website on October 19, 2012.
We reviewed the crosscutting goals, or what OMB calls cross-agency priority (CAP) goals, mandated by the Government Performance and Results Act Modernization Act of 2010 for contributing agencies and programs that explicitly included tax expenditures among their policy initiatives. As examples of tax expenditures, we chose five tax credits from the 16 tax expenditures listed under the CAP goal of energy efficiency and the one tax credit listed under the CAP goal of job training. As examples of what we considered to be comparable outlay programs, we chose three energy efficiency and renewable energy outlay programs (Weatherization, Vehicle Technologies, and Renewable Energy) and one Department of Labor outlay program (the Workforce Investment Act as it pertained to dislocated workers). We then reviewed the 2011 annual performance reports from the Department of Energy and the Department of Labor. We used fiscal year 2011 performance reports so the performance data would be comparable to the tax expenditure data we analyzed for tax year 2011. These performance reports were not available for fiscal year 2012. We identified their performance measures and goals, as well as the data they used to evaluate and assess these outlay programs. Lastly, we used our own criteria for performance measures and examples of data used to construct them. Table 2 provides a more detailed description of the tax expenditures and comparable outlay programs. To determine whether tax expenditures were included on tax forms and had their own line items, we matched the Department of the Treasury’s list of tax expenditures for fiscal year 2011 to Internal Revenue Service tax forms for tax year 2011. The relationships of the tax expenditures to tax forms are shown in table 5.
In addition to the contact named above, Kevin Daly (Assistant Director), Laurie King (Analyst-in-Charge), Jeff Arkin, Elizabeth Curda, Robert Gebhart, Lois Hanshaw, Benjamin Licht, Ed Nannenhorn, Karen O’Conor, Michael O’Neill, Robert Robinson, Alan Rozzi, MaryLynn Sergent, Stephanie Shipman, and Anne Stevens all made contributions to this report.
By one measure, tax expenditures resulted in an estimated $1 trillion of revenue forgone by the federal government in fiscal year 2011. GAO has recommended greater scrutiny of tax expenditures, as periodic reviews could help determine how well specific tax expenditures achieve their goals and how their benefits and costs compare to those of other programs with similar goals. To assist with this, GAO recently issued a guide (GAO-13-167SP) for evaluating the performance of tax expenditures. GAO was asked to identify the data needed for evaluating tax expenditures and their availability. This report (1) determines the information available from the Internal Revenue Service (IRS) for evaluating tax expenditures and (2) compares, for a few case studies, the information identified by federal agencies for evaluating outlay programs with purposes similar to tax expenditures. To address these objectives, GAO analyzed 173 tax expenditures and information from IRS tax forms, federal agency performance reports, and prior GAO reports. IRS data are not sufficient for identifying who claims a tax expenditure and how much they claim for $492 billion, or almost half the dollar value, of all tax expenditures that GAO examined. Such basic data are not available at IRS for these tax expenditures because they do not have their own line item on a tax form. These included $102 billion of tax expenditures that were not on tax forms, such as the exclusion of interest on life insurance savings, and $390 billion of tax expenditures that were on tax forms but did not have their own line items, such as the credit for holding clean renewable energy bonds, which is aggregated with other credits on a single line item. In four cases in which the Office of Management and Budget (OMB) identified outlay programs and comparable tax expenditure programs that shared similar purposes, the related agencies produced performance measures and goals only for the outlay programs and not for the comparable tax expenditures.
For example, OMB identified the Alternative Technology Vehicle Credit as having a comparable purpose to the Department of Energy (DOE) Vehicle Technologies outlay program--both are intended to create more fuel efficient modes of transportation. DOE produced a performance measure and goal for the outlay program--petroleum consumption reduced by 570 million gallons per year by 2011--as required under the provisions of the Government Performance and Results Act of 1993 and the Government Performance and Results Act Modernization Act of 2010. However, DOE did not produce measures and goals for the comparable tax expenditure as neither act requires DOE or other federal agencies to do so. Although IRS is responsible for administering these tax expenditures, it is required by law, unless otherwise directed by Congress, to collect only data which are required for administration of the tax code. GAO has recommended that the agencies responsible for tax expenditures be identified and the lack of credible performance data be addressed. GAO made no recommendations in this report. IRS provided technical comments that were incorporated as appropriate.
Congress created GSA in 1949 through the Federal Property and Administrative Services Act to serve as a centralized property management agency, with one of its responsibilities being to provide space to federal agencies as economically as possible. The GSA Administrator may delegate, and may authorize successive redelegations of, the real property authority vested in the Administrator to any federal agency. Federal agencies must exercise delegated real property authority and functions according to the parameters described in each delegation of authority document, and the agencies may exercise only the authority of the Administrator that is specifically provided within the delegation of authority. GSA officials told us the ability of the GSA Administrator to delegate real property authority is a tool provided by Congress to enable GSA to carry out its various real property responsibilities. The delegations are not managed as a single program; rather, the various delegations, once granted, are managed and administered in the appropriate PBS business line. According to federal regulations, GSA may delegate authority to federal agencies to conduct the following activities: Real estate leasing authority: This authority allows agencies to perform all functions necessary to acquire leased space, including procuring, administering, managing, and enforcing the leases. Agencies have the option to use one of three types of delegations of real estate leasing authority granted by the GSA Administrator: general purpose, categorical, and special purpose. See table 1 for a description of the types of real estate leasing authority delegations. In May 2005, GSA issued guidance that reemphasized and modified certain procedures associated with the use of the general purpose, categorical, and special purpose leasing delegations.
GSA requires agencies to meet several general conditions to use the real estate leasing authority delegations, including (1) the agency must receive written confirmation from the appropriate Assistant Regional Administrator that suitable government-controlled space is not available before relocating government employees from GSA-controlled space; (2) the average annual rent is below the prospectus level, as previously described; (3) agency staff using the authority must meet the relevant contracting experience and training requirements; (4) the agency must acquire and use the space in accordance with all applicable laws and regulations for federal space acquisition activities; (5) the agency must have the capacity to perform all delegated leasing activities; and (6) the agency must provide semi-annual reports to OGP on April 30 and October 31 that detail the leasing activities conducted under the delegations. GSA retains the right to review each lease and the capacity of the agency to perform the delegation and, if necessary, to revoke the delegation. Agencies using the general purpose leasing delegation are also required to provide the following information to the appropriate GSA regional office: upon award of the lease, notification of the award date and location of the property, including documentation that the negotiated rental rate is within the prevailing market rental rate for the class of building leased; and, if there is a continuing need for the space and the agency wishes to use the delegation again, 18 months’ advance notice of lease expiration.
To obtain this delegation, an agency must occupy at least 90 percent of the building’s GSA-controlled space or have the written concurrence of 100 percent of rent-paying occupants covered under the lease, and must have the technical capability to administer the lease. Agencies seeking a delegation must submit a written request to the regional headquarters where the building is located. If PBS staff at the region concur with the request, an agreement is drafted for the delegation of administrative contracting officer authority and sent to the PBS Commissioner and the GSA Administrator for approval. An administrative contracting officer delegation lasts until the lease expires or the space reverts to GSA, unless the agency or GSA agrees to terminate the delegation. Lease management authority: This authority allows agencies to manage the administration of one or more lease contracts awarded by GSA. To obtain this delegation, an agency must occupy at least 90 percent of the building’s GSA-controlled space or have the written concurrence of 100 percent of rent-paying occupants covered under the lease, and must have the technical capability to administer the lease. Agencies seeking a delegation must submit a written request to the regional headquarters where the building is located. If PBS staff at the region concur with the request, a memorandum of understanding is drafted and sent to the PBS Commissioner and the GSA Administrator for approval. The term of the delegation lasts until the lease expires, and either party is free to terminate the delegation at any time. In addition to this process, GSA’s contracting officers can delegate lease management authority to qualified individuals, upon request, for specific leases.
To obtain this type of delegation, an agency must (1) occupy at least 90 percent of the space in the GSA-controlled facility or have the concurrence of 100 percent of the rent-paying occupants to perform these functions, (2) demonstrate that it can perform the delegated responsibilities, and (3) document that the delegation will be cost-effective. Agencies seeking this authority must first notify the region where the space is located by submitting a formal request. After regional staff have reviewed the request, it is forwarded to the GSA Administrator for a final decision. The Administrator can then grant or decline the request, with concurrence from applicable program offices. A delegation of authority generally lasts until the space is returned to GSA or the space is no longer needed. Delegation agreements allow for either the agency or GSA to terminate a delegation in full or in part. Repair and alteration project authority: This authority allows agencies to perform repair and alteration projects. With respect to repair and alteration delegations, there is a statute relating to the delegation of repair and alteration projects of $100,000 or less. This statute provides that, in accordance with standards prescribed by the GSA Administrator, the Administrator shall delegate to a requesting agency authority for projects in public buildings when the estimated cost does not exceed $100,000. Under GSA’s general authority to delegate its real property activities at 40 U.S.C. § 121, in January 1997, the GSA Administrator granted blanket delegation authority in leased space for repair and alteration projects up to $100,000 for an indefinite term. According to GSA officials, the regions are responsible for managing these delegations. The statute further provides that the GSA Administrator may delegate to an agency projects that are estimated to cost more than $100,000 when the Administrator determines the delegation promotes efficiency and economy.
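Read as a checklist, the eligibility conditions for an operations and maintenance delegation amount to a simple boolean test. A sketch of that reading (the precedence shown is our interpretation of the text, not GSA's official decision logic):

```python
# Eligibility test for an operations and maintenance delegation, as we
# read the conditions in the text: (occupy at least 90 percent of the
# space OR 100 percent occupant concurrence), AND demonstrated
# capability, AND documented cost-effectiveness. The precedence is our
# interpretation, not GSA's official decision logic.
def om_delegation_eligible(occupancy_pct, concurrence_pct,
                           can_perform, cost_effective_documented):
    occupancy_ok = occupancy_pct >= 90 or concurrence_pct == 100
    return occupancy_ok and can_perform and cost_effective_documented

print(om_delegation_eligible(95, 0, True, True))    # True: occupancy path
print(om_delegation_eligible(60, 100, True, True))  # True: concurrence path
print(om_delegation_eligible(95, 0, True, False))   # False: no cost case
```

Encoding the rule this way makes the ambiguity in the prose visible: whichever path an agency qualifies under, capability and cost-effectiveness still have to be shown.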
According to the Federal Management Regulation, GSA can delegate individual alteration projects greater than $100,000 when the agency demonstrates the ability to perform the delegated repair and alteration project responsibilities and when such a delegation promotes efficiency and economy. According to GSA officials, individual requests for delegations of repair and alteration project authority greater than $100,000, which GSA rarely receives, are granted only by the GSA Administrator. The scope of the intended project must be included in an agency’s request for a delegation and is reviewed first by the relevant GSA region and then by PBS central office staff before being submitted to the Administrator with a recommendation to either grant or deny the delegation. The term of delegation is for the duration of the project, and either party can terminate the delegation at any time. If a delegated repair and alteration project is expected to exceed the prospectus level, GSA will submit the proposed project to its authorizing committees for review and approval. Utility services authority: This authority allows agencies to negotiate and execute utility services contracts for periods of more than 1 year but not exceeding 10 years for their use and benefit. Agencies also have the authority to intervene in utility rate proceedings to represent the consumer interests of the federal government, if so provided in the delegation of authority. Agencies seeking utility delegations are required to submit their request to PBS’s Energy Center of Expertise, which procures utility services for GSA’s customer agencies. The requests must include a certification from the acquiring agency’s senior procurement executive that the agency has an established acquisition program, personnel technically qualified to deal with specialized utilities problems, and the ability to accomplish certain contracting requirements.
The Energy Center reviews the request for compliance with the requirements and conducts an internal analysis of federal utility needs in the specified area. Upon approval of the agency’s qualifications to perform the delegation, and a determination that there are minimal, if any, additional federal utility needs in the service area, a formal delegation of authority letter for the utility acquisition is prepared for the GSA Administrator’s signature. GSA delegated authority for operations and maintenance, utility services, lease management, administrative contracting officer activities, repair and alterations, and real estate leasing to its tenant agencies. However, as shown in table 2, GSA did not have complete or consistent data on the number of delegations of repair and alteration project authority up to $100,000 and real estate leasing authority. GSA generally requires agencies to seek its approval before using delegations of real property authority. However, GSA is required by law to issue delegations to requesting agencies for repair and alteration projects in public buildings that are not expected to exceed $100,000; in leased space, GSA has issued a standing delegation under its general authority for these types of projects. Additionally, GSA issued standing delegations to allow agencies to enter into certain types of leases without having to first obtain its approval. GSA officials told us that they believe the lack of complete data for delegations of repair and alteration project authority up to $100,000 was not problematic. GSA said these delegations are for small projects with limited program risk, and, according to GSA officials, the regional offices would identify and report any potential problems to the central office. However, real estate lease delegations involve more risk, and without accurate data on the number of leases awarded using these delegations, GSA is missing an important management control to assess their impact.
Operations and maintenance authority: GSA reported that its tenant agencies exercised 43 delegations of operations and maintenance authority from fiscal years 1996 to 2006, representing 203 buildings. As shown in figure 1, the Department of Defense had the most delegations, representing 84 buildings and approximately 4 million square feet. PBS officials did not have exact issuance dates for these delegations; however, they estimated that the majority of the delegations were originally issued on or before 1989 for a term of 5 years and subsequently redelegated without defined terms. PBS officials said they rarely receive requests for new operations and maintenance delegations. Since 2000, GSA has issued only one new delegation of this type to the agencies included in our review. PBS officials also said that they rarely decline requests for delegations of operations and maintenance authority because PBS works with the agencies to determine that they meet the requirements before they formally submit the request. Utility services authority: GSA also reported 52 utility services delegations from fiscal years 1996 to 2006 to agencies that had custody or control of their facilities. As shown in figure 2, the Department of the Interior had the most delegations, all of which were for remote sites of the Bureau of Indian Affairs and the Bureau of Reclamation. Lease management authority: GSA reported that, for fiscal years 1996 to 2006, it delegated lease management authority for 16 leases to its tenant agencies. GSA does not require the regional offices to report these delegations to the central office, and the central office does not routinely request or monitor information on these activities. GSA officials said these delegations are self-correcting, meaning the limited authority provided under these delegations is controlled by the GSA contracting officer, which minimizes the risk that the agency could exceed the authority of the delegation.
Additionally, GSA officials said they had not seen a pattern of problems that would indicate a need for more oversight of these delegations. Administrative contracting officer authority: GSA reported that, for fiscal years 2000 to 2006, it delegated administrative contracting officer authority for 136 leases to its tenant agencies. But GSA did not have data for fiscal years 1996 to 1999 because the database used to track these delegations did not have historical data before fiscal year 2000. As shown in figure 3, all of the delegations went to the Department of Commerce, the Department of Defense, and the EPA. In addition, most of the delegated activity was in the National Capital Region. Repair and alteration project authority: GSA reported granting one individual repair and alteration delegation above $100,000 to the EPA, but did not have data for its blanket delegations of repair and alteration authority up to $100,000 as these delegations are managed at the regional level, and GSA does not require the regional offices to report these delegations to the central office. GSA officials said that regional staff would report to the central office any significant issues or problems resulting from the blanket delegations, and based on anecdotal evidence, they have not seen a pattern of problems that would indicate a need for more oversight of these delegations. Real estate leasing authority: Two separate offices in GSA collect disparate sets of data on delegations of real estate leasing authority. PBS requires the regional offices to report how many general purpose leasing delegation requests are received and how many are issued, which may or may not ultimately result in the requesting agency actually awarding a lease. PBS reported that, for fiscal years 2001 to 2006, it issued 190 lease delegations to its tenant agencies. 
However, PBS did not have data from fiscal years 1996 through 2000 because, according to PBS officials, the data were misplaced through various internal reorganizations. In addition, PBS did not collect data on categorical and special purpose delegations. Agencies are not required to notify GSA prior to using the categorical lease delegation, except for leases above the prospectus threshold as previously described and leases for parking. Special purpose delegations also do not require GSA approval unless the space exceeds 2,500 square feet. PBS officials stated that they focus their management efforts on the general purpose lease delegations because they have the authority to approve or disapprove use of this delegation type, whereas agencies can generally use the categorical and special purpose lease delegations without GSA approval. Lastly, previous reviews of the general purpose lease delegation program by OGP found, among other things, several instances where federal agencies did not notify the relevant PBS office of its intent to exercise the delegation of authority, making it difficult for PBS to track these delegations. The data that PBS has on real estate lease delegations are inconsistent with the data that OGP collects. In 1996, OGP was asked to provide an oversight role, serving as an “honest broker” between PBS and the federal agencies. PBS officials told us the Office of Management and Budget (OMB) and Congress, at the time, wanted independent oversight of the delegations because they were concerned that agencies may not have the expertise to obtain the best deal for the government. They also viewed PBS as having an inherent conflict of interest when deciding delegations. In other words, OMB and Congress believed that PBS could stand to lose a significant amount of its leasing business due to the delegations and therefore did not view PBS as an independent overseer of the delegation program. 
According to GSA’s guidance on delegations of real estate leasing authority, federal agencies are to report to OGP every 6 months on their delegated leasing activity for all three types of lease delegations. OGP reported that GSA’s tenant agencies entered into 594 leases using the three different leasing authorities from fiscal year 1996 through fiscal year 2006. However, OGP said the data likely undercount the number of exercised lease delegations because the guidance did not define whether agencies were to report all current delegations or only those awarded during a given 6-month reporting period. As a result, some agencies reported only the delegations issued during the 6-month period, while others reported all current delegations. Both PBS and OGP acknowledged that their data were inconsistent, as shown in figure 4. OGP and PBS did not review each other’s lease delegation data to determine an accurate count of the number of leases awarded using the real estate leasing delegations. According to PBS guidance for delegations of real estate leasing authority, OGP compares the information that the agencies report against delegation information provided by the PBS regions to determine any underreporting by agencies. However, an OGP official told us that OGP is not required to follow PBS guidance and in fact does not compare its data with that provided by PBS. PBS officials acknowledged that OGP is not bound by PBS’s guidance for delegations of real estate leasing authority, but they noted that OGP was involved in drafting the guidance. GSA is implementing several changes to improve its data collection for lease delegations. First, the Federal Real Property Council accepted OGP’s recommendation to add a data field to the governmentwide Federal Real Property Profile inventory system to track the leasing authority used for space acquisition.
Agencies are now required to report real property assets by building and to specify whether each asset is owned or leased. If an asset is leased, the agency must now identify the authority under which it is leased. According to GSA officials, this requirement is effective for fiscal year 2007 and was included in guidance issued in June 2007. The information will allow the Federal Real Property Council and GSA to better understand the level of delegated leasing that occurs in the federal government using the categorical, special purpose, and general purpose leasing delegations. Additionally, according to GSA’s draft leasing guidance, which is scheduled to be issued in September 2007, GSA will no longer require the biannual reporting to OGP of general purpose, categorical, and special purpose lease delegations. In its place, OGP will accept the agency submissions for the Federal Real Property Profile inventory, which, according to GSA officials, should eliminate agency confusion about the reporting period. GSA has also committed to implementing recommendations from the August 2007 Inspector General report on the lease delegation program. PBS is also drafting separate oversight procedures for delegations of real estate leasing authority. According to PBS officials, the procedures will include a requirement to reconcile the two sources of lease delegation data: the Federal Real Property Inventory Report and PBS. OGP will annually provide a listing of all delegation activity from the Federal Real Property Profile database, and PBS will compare that information with its centralized records. GSA officials said these oversight procedures would be issued in September 2007. According to GAO’s Standards for Internal Controls, managers need program data to determine whether they are meeting their agencies’ goals for accountability for effective and efficient use of resources.
GSA officials told us that they believed the lack of complete data for delegations of repair and alteration project authority up to $100,000 was not problematic. Repair and alteration delegations that do not exceed $100,000 involve what GSA considers to be small projects with limited program risk, and any potential problems would be identified and reported to the central office by the regions. Additionally, GSA officials said they had not seen a pattern of problems with these delegations that would indicate a need for more oversight. However, based on the data provided, agencies use the lease delegations more often than other types of delegations. Federal agencies using these delegations may lack experience in acquiring office space, which could result in offices being housed in substandard buildings and the government not receiving the best deal. Without accurate data on the number of leases awarded using the real estate leasing delegations, GSA is missing an important management control for evaluating whether the delegation of real estate leasing authority is operating as intended. Although GSA had written policies and procedures for managing all types of delegations we reviewed, the policies and procedures in certain documents were not always current. In addition, GSA did not always use mandated criteria stated in the Federal Management Regulation—namely, determining whether a delegation would be cost effective for the government—when deciding to delegate real property activities. GSA said it used mandated criteria when delegating utility services and for the most recent delegations of individual repair and alteration authority above $100,000 and operations and maintenance authority. However, GSA did not use the criteria when delegating real estate leasing and administrative contracting officer authority and could not determine if it used mandated criteria when delegating lease management authority.
GSA’s procedures for assessing cost-effectiveness were not always documented in GSA’s written guidance, which could limit GSA’s ability to determine if the delegations are in the best interests of the government in certain cases. We found that GSA had written policies and procedures for managing all types of delegations, but the policies and procedures were not always current. GSA’s policies and procedures for issuing and managing delegations are described in the following documents:

Federal regulations and internal GSA policy letters and memorandums;

GSA’s “Desk Guide — Delegations of Authority for Real Property Management and Operating and Leasing,” which states that it “is a reference guide on policies, procedures, and practices for individuals engaged in implementing the terms and conditions of the General Services Administration delegation program and delegation agreements for real property management authorities in federally owned and operated space”;

Chapter 8 of GSA’s “Customer Guide to Real Property,” which, according to GSA officials, serves as formal guidance to explain the general procedures for issuing the different types of delegations; and

GSA’s “Standard Operating Procedures for Operation and Maintenance of Delegated Real Property,” which describes the agency’s responsibilities under a delegation of operations and maintenance authority.

Table 3 identifies the delegation types and the applicable documents that outline the policies and procedures for the delegation. Our review of the policies and procedures found that the desk guide has not been updated to include current guidance for delegations of real estate leasing, lease management, repair and alteration, and utility services authority.
For example, the section on delegations of real estate leasing authority did not include the procedures associated with the use of the real estate leasing delegations; procedures for requesting lease management authority were not explained; and delegations of repair and alteration project authority and utility services authority were not discussed. In addition, the customer guide did not distinguish between blanket repair and alteration authority, which can be used for projects up to $100,000, and authority for individual repair and alteration projects above $100,000. As discussed earlier, the approval process for each differs. GSA officials acknowledged the need to update the delegations desk guide and the customer guide and said the updates are in process. According to GAO’s Standards for Internal Controls, written policies and procedures are control activities that help ensure management’s directives are carried out and actions are taken to control risks. The lack of updated guidance could limit GSA’s ability to manage its delegations effectively. GSA did not always use mandated cost-effectiveness criteria when delegating activities, as shown in table 4. The Federal Management Regulation states that delegations are to be in the government’s best interest and specifies that GSA must evaluate such factors as whether a delegation would be cost effective for the government in the delivery of space. GSA used these mandated criteria when delegating utility services and for the most recent delegations of individual repair and alteration authority above $100,000 and operations and maintenance authority. However, the procedures used for assessing the cost-effectiveness of delegations of individual repair and alteration authority above $100,000 and operations and maintenance authority were not documented in any of GSA’s written guidance.
Because repair and alteration delegations for projects that do not exceed $100,000 are either required by law or covered by GSA’s blanket delegation, GSA does not apply cost-effectiveness criteria to these delegations. Further, GSA officials told us these delegations have limited financial risk and, based on anecdotal evidence, they had not seen a pattern of problems with these delegations. GSA did not use the criteria when delegating real estate leasing authority and administrative contracting officer authority, and it could not determine if mandated criteria were used when delegating lease management authority. GSA officials told us they are updating their guidance for delegations of general purpose real estate leasing authority to include procedures for assessing cost-effectiveness. The officials also said they are limiting the use of delegations of administrative contracting officer authority and that delegations of lease management authority have limited financial risk, and thus it may not be the best use of resources to develop procedures to determine whether these delegations are cost effective. To determine whether a delegation of utility services authority would be cost effective, GSA identifies the federal presence—that is, the number of federal agencies in a given area—within the utility service area where the requesting facility resides. According to the Director of the Energy Center, most delegation requests are for buildings in areas where there is no other federal need for utility services. Because it takes substantial resources for the center to negotiate public utility contracts with a serving utility, the center generally does not negotiate contracts for individual agency needs. Therefore, the center determined it was cost effective to grant the delegations when there were no additional federal needs requiring an area-wide utility contract in the areas of the requested delegations.
PBS officials said they assessed cost-effectiveness prior to granting the individual repair and alteration delegation to EPA. PBS required EPA to submit a justification that demonstrated the delegation was in the government’s best interest and was cost effective. The justification included a cost analysis as part of its management plan, which GSA used to compare against its costs for similar work and other data. Although PBS officials said they used these procedures, they acknowledged that the procedures have not been formalized in any written guidance. GSA recently began assessing the cost-effectiveness of operations and maintenance delegations. Although GSA reviewed the operating costs of agencies with operations and maintenance delegations in the early years of the delegations, it did not assess the cost-effectiveness of the delegations. Agencies paid GSA rent for delegated buildings and, until 1997, GSA transferred back to the agencies an amount that GSA estimated it would have spent in the absence of the delegation to provide standard-level building service. To oversee how agencies used this funding, GSA required agencies to submit an annual building operations cost report. However, at GSA’s direction, agencies did not include funding they spent to provide night and weekend building services because GSA considered these costs above standard level. In 1990, we reported that GSA could not determine whether the delegations were cost effective because it lacked the cost and performance data needed to oversee the operations and maintenance delegations, and the data it did require were frequently inaccurate or sometimes never received. Since our previous review, GSA has issued additional guidance for delegations of operations and maintenance. The customer guide and desk guide state that overall operating costs must be reasonable and not exceed those that GSA would incur. Both guides add that facility operating costs should be included in the delegation.
Further, the desk guide states that the operating costs should be derived from and supported by the facility management plan. But the section on the facility management plan in the standard operating procedures does not address submission of building or facility operating costs. According to GSA officials, the financial information provided by the agencies will be compared with industry benchmarks to determine cost-effectiveness. However, GSA has not formalized the benchmark comparison procedure in any of its guidance. GSA officials told us they have issued only one new delegation of operations and maintenance authority since 2000, and they performed an economic evaluation of that request. In contrast, GSA did not consider cost-effectiveness prior to issuing delegations of real estate leasing authority. GSA officials told us that staffing and financial constraints limited their ability to assess the cost-effectiveness of real estate leasing delegations and that initially (at the beginning of the general purpose delegations in 1996) they had no reason to believe the delegations were not cost effective. GSA does require agencies that use the general purpose lease delegation to provide documentation to the relevant GSA regional office that the negotiated rental rate is within the prevailing market rental rate for the class of building leased. If the negotiated rental rate exceeds the market range, the agency is to provide information as to why the market rate was exceeded. However, these procedures do not apply to special purpose delegations over 2,500 square feet, for which GSA has the discretion to issue delegations. In addition, GSA officials acknowledged that they did not use the information or know the degree to which agencies were in compliance with this requirement and did not validate the market ranges used by agencies.
Under GSA’s draft leasing guidance, which is scheduled to be issued in September 2007, agencies using the general purpose leasing delegation will be required to provide a narrative explaining why the granting of the request is in the best interests of the government and a plan for meeting or exceeding GSA’s performance measures (lease cost). GSA will use this and other information to determine whether the requesting agency’s exercise of the delegation is in the government’s best interest. Additionally, GSA will analyze each general purpose lease awarded against the same lease cost performance measure used for GSA leases. GSA also did not consider cost-effectiveness for delegations of administrative contracting officer authority. According to GSA officials, cost-effectiveness was not considered because the intent of the delegation was not to reduce costs, but to improve service delivery. GSA allows agencies with delegations of administrative contracting authority to pay rent directly to the landlords instead of GSA paying the rent to the landlords. In 2004, GSA reviewed these delegations and found that the program was not revenue neutral, but rather had a negative financial impact. This delegation type increased GSA’s administrative costs because of the staff time needed to reconcile the funds paid to the agencies with the amount the agencies paid to the landlord. In addition, GSA officials said that in certain cases, agencies were changing the terms of lease agreements to make revisions to the space without GSA’s knowledge, which resulted in increased costs and financial liability to the government. For example, as a part of the 2004 review, GSA found instances of space alterations that were added to the lease agreements by the agency. The space alterations required the government to restore the space to its original condition; however, there was no explanation of the costs or any indication of how the costs would be paid at lease expiration.
The review recommended discontinuation of the program by June 2005. GSA decided to offer lease management authority delegations in place of new administrative contracting officer authority delegations if desired by the tenant agencies. Finally, PBS central office officials could not determine if they used mandated criteria when delegating lease management authority because these delegations are managed at the regional level, and GSA does not require the regional offices to report on these delegations to the central office. Further, the central office does not routinely request information on these activities. GSA officials told us there is limited risk associated with delegations of lease management authority because these delegations are structured to prevent the agencies from exceeding the terms of the delegation. Therefore, it may not be the best use of resources to develop and implement procedures to determine whether these delegations are cost effective. According to GAO’s Standards for Internal Controls, written policies and procedures are control activities that help ensure management’s directives are carried out and actions are taken to control risks. The absence of written guidance for all procedures used to assess cost-effectiveness for (1) delegations of individual repair and alteration authority above $100,000, (2) operations and maintenance authority, (3) general purpose leasing delegations, and (4) special purpose leasing delegations that exceed 2,500 square feet could limit GSA’s ability to determine if delegations are in the best interests of the government. Of the six tenant agencies we contacted with delegated real property authority, five cited timeliness and control as the main reasons they sought delegations. Officials from all of the agencies we contacted told us their decisions were not based on a lack of satisfaction with GSA’s performance.
All of the agencies said they will continue to seek delegations of real property authority in the future. We also contacted the Judiciary, which was terminating its delegations, and agency officials told us it was doing so because the delegations were no longer cost effective. Officials at five of the tenant agencies we contacted cited improved timeliness as the main reason they sought their delegations. In particular, these officials told us their respective delegations provided them with the ability to complete their delegated real property activities in a more timely fashion. For example, officials from the Department of Commerce told us they are able to procure leased space faster than GSA because they believe GSA’s competing demands prevent GSA officials from urgently locating space for the agency. Similarly, Department of Interior officials stated that timeliness was a major benefit of their delegations of real estate authority because GSA, given its large workload, cannot always make the department’s needs a priority. Officials from both the Departments of Commerce and Defense said their administrative contracting officer authority delegations provided them with increased control and direct access to the landlord, which resulted in faster service. Officials from the Department of Defense and the Social Security Administration said their delegations of operations and maintenance activities provided them with the flexibility to work closely with their own personnel to plan and prioritize service requests in order to fulfill agency needs in a timelier manner. Finally, EPA officials said they requested repair and alteration delegations because they needed to expedite the installation of blast mitigation material on windows in GSA-controlled space to comply with increased security requirements. An EPA official said these delegations allowed the agency to complete its projects in a more timely fashion because it had direct access to its contractors.
In contrast, officials at the Departments of Interior and Justice said they sought delegations of utility service to obtain stabilized pricing for utilities to protect them from market fluctuations. All of the agencies we interviewed that received real property delegations said their decisions to seek delegations were not based on a lack of satisfaction with GSA’s performance in the given service but rather on the delegations’ usefulness in certain circumstances where GSA’s knowledge and expertise were less critical. For example, the Departments of Commerce and Interior said they primarily use lease delegations in remote areas where GSA has a minimal presence. In these instances, both agencies said it made sense for them to conduct the lease transactions because they generally had more knowledge of these isolated real estate markets. Both agencies added that their decisions to seek lease delegations were not related to any dissatisfaction with GSA’s leasing program. Further, they said that it was sensible for them to use GSA to acquire leased space in urban areas because of GSA’s expertise with these real estate markets. Officials from the Departments of Commerce and Defense said their use of delegations of administrative contracting officer authority was driven by their desire to leverage direct payments to the landlord to help ensure better service. An official from EPA told us that the agency’s decision to seek a repair and alteration delegation was not a result of any problems with GSA’s services. Finally, the Departments of Interior and Justice told us they request delegations of utility authority when they have a need to contract for utility services and GSA determines that it is more appropriate for the agency to obtain the contract. GSA’s view is that the various types of delegations allow for greater efficiency in the use of federal contracting officer authority.
Additionally, if utility connection and service is required in an expedited fashion for operations and maintenance purposes, a delegation of utility acquisition authority can allow for timelier services. GSA officials acknowledged that the delegations of utility services allowed for stabilized pricing and that delegations of administrative contracting officer authority and operations and maintenance authority allowed for timelier services. However, GSA officials questioned whether the individual repair and alteration delegation above $100,000 allowed for faster service. GSA told us that EPA used a contractor from GSA’s approved list of contractors to perform the delegated work, and GSA officials did not think the work would be performed faster because of the delegation. Similarly, GSA officials questioned whether agencies with lease delegations had more market knowledge in remote areas or were able to complete lease transactions in a more timely fashion. GSA added that agencies do not always provide their space needs in a timely fashion, which affects GSA’s ability to provide timely leasing services. Officials from all six agencies that we interviewed said they planned to seek delegations in the future. Officials from the Department of Commerce told us they do not necessarily want to increase their delegations of real estate leasing authority, but they will continue to seek lease delegations when the conditions are conducive. However, department officials said they would like to increase the number of administrative contracting officer authority delegations, but GSA has been reluctant to issue additional delegations. Department of Interior officials said they will continue to seek delegations for space acquisition in remote areas and utility services when needed.
Officials from the Department of Defense said they do not plan to aggressively pursue additional operations and maintenance delegations because many of the delegated lease facilities are subject to base realignment and closure, but they will continue to request delegations of administrative contracting officer authority as needed. Social Security Administration officials said they plan to maintain their current level of operations and maintenance delegations. EPA said it plans to continue to seek delegations of repair and alteration project authority as needed. Finally, the Department of Justice told us it will continue to seek delegations of utility services as needed. While these six agencies told us they would likely continue to seek delegations, one agency is terminating its delegations with GSA. According to PBS officials, the Judiciary terminated authority for one delegation of operations and maintenance and is currently terminating three more. Judiciary officials from the Administrative Office of the U.S. Courts stated that prior to terminating the delegations of operations and maintenance authority, the tenant satisfaction level in their delegated buildings was higher than in nondelegated buildings because the courts were aware of their tenants’ operating needs and could respond to repairs and service requirements faster than GSA. However, their decision to terminate the delegation authority was driven by two primary considerations. First, according to the officials, in 2004, GSA shifted responsibility for all repairs, regardless of cost, to delegated agencies with no adjustment to the rent GSA charges. According to the officials, the office was required to repair and maintain systems that were aging and significantly beyond their useful life. The officials stated the shift of more responsibility to perform costly repairs reduced the Judiciary’s funds for preventive maintenance on those aging systems.
Second, for buildings without an operations and maintenance delegation, GSA charges an appraised rate for operating expenses based on local comparable buildings. However, according to officials from the Administrative Office of the U.S. Courts, the actual cost of managing the delegated buildings was, in some locations, higher than what GSA was charging for nondelegated buildings under its appraisal system. Therefore, the Judiciary did not consider it cost effective to continue the delegations because the operating costs were higher than what GSA charges. GSA acknowledged that the Judiciary’s delegated buildings were aging and that the rising expenses of operations and maintenance delegations prompted the Judiciary’s decision to terminate the delegations. However, GSA told us it did not shift responsibility for all repairs to the Judiciary. GSA officials explained that repair activity for delegated buildings is divided between the tenant agency and GSA. According to GSA, the Judiciary was responsible for routine repairs, defined by GSA as items typically expensed to tenants by private sector landlords. GSA was responsible for making necessary replacements to the structure and building systems, which it considered capital replacements. In managing its delegations, GSA lacked basic management controls, such as complete and consistent data and current written policies and procedures. In particular, GSA had inconsistent data on delegations of real estate leasing authority. GSA is currently implementing several changes to improve its data collection for lease delegations and will issue separate oversight procedures that include a requirement to reconcile the two sources of lease delegation data. However, it is unclear when the oversight procedures will be issued. GSA has written policies and procedures for managing its delegations, but some of this guidance is out of date.
While GSA officials said updates to some of the guidance are in process, it is unclear when these updates will be finalized. Further, GSA did not use mandated cost-effectiveness criteria when deciding to delegate certain real property authority. These basic management controls are the first line of defense in safeguarding assets and providing effective stewardship of public resources. In the absence of (1) accurate program data on the numbers and types of key authorities delegated, (2) current policies and procedures to help guide decisions to delegate, and (3) complete cost-effectiveness analyses, GSA cannot ensure that delegations are an efficient use of federal dollars or in the best interests of the government. To improve GSA’s ability to oversee the various delegated authorities, we recommend that the Administrator of GSA take the following two actions: (1) develop written procedures for reviewing the different sources of its lease delegation data to determine an accurate count of the leases awarded using all three types of leasing delegations, and (2) update the guidance for managing delegations, including procedures for assessing the cost-effectiveness of individual repair and alteration delegations above $100,000, operations and maintenance delegations, general purpose leasing delegations, and special purpose leasing delegations that exceed 2,500 square feet. We provided GSA a draft of this report for its review and comment. GSA agreed with the report’s findings and recommendations and stated it will use them to improve its delegation programs. More specifically, with respect to our first recommendation, GSA stated it is in the process of implementing several changes to improve its data collection for lease delegations. GSA noted that it has asked the Federal Real Property Council to provide PBS the data on delegated leases listed in the Federal Real Property Profile database.
GSA agreed to compare annually the agencies’ data sent to GSA with the Federal Real Property Profile database, as agencies will now be required to identify the authority under which they acquired their leased assets. GSA further agreed to develop written guidance for these new procedures. Regarding our second recommendation, GSA said it would review and update, as necessary, the guidance for managing its real property delegations. GSA further noted that it has various types of delegations of authority with unique policies and procedures to administer the specific requirements of the delegation programs. GSA also provided written technical comments, which we have incorporated in this report as appropriate. GSA’s letter is contained in appendix II without the enclosure that contained the technical comments. As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will provide copies to interested congressional committees and the GSA Administrator. We will make copies available to others upon request. The report is available at no charge on GAO’s Web site at http://www.gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Key contributors to this report were Sally Moino, Assistant Director; Derrick Collins; Susan Michal-Smith; Michael Mgebroff; Courtney Reid; Janay Sam; and Sandra Sokol. Given your interest in the General Services Administration’s (GSA) use of real property delegations of authority, we determined (1) what real property authority GSA has delegated to its tenant agencies, (2) what policies and procedures GSA uses to manage delegated real property authority, and (3) reasons the tenant agencies requested delegated authority. 
To determine what real property authority GSA delegated to its tenant agencies and the criteria GSA used when deciding to delegate those activities, we reviewed, among other materials, the law, the Federal Management Regulation, and the Federal Acquisition Regulation related to GSA’s authority to delegate real property functions; existing policies and procedures for managing the delegations, including internal Public Buildings Service (PBS) guidance on delegations of real estate leasing authority and PBS’s “Standard Operating Procedures for Operation and Maintenance of Delegated Real Property;” previous Office of Governmentwide Policy reviews of delegations of real estate leasing authority; PBS summaries of the lease delegation program; and data on the number of delegations by type and agency from fiscal years 1996 to 2006. We limited our review to delegations for the following agencies: Departments of Commerce, Defense, Health and Human Services, Homeland Security, Interior, Justice, the Treasury, Environmental Protection Agency, the Judiciary, and the Social Security Administration. These agencies were GSA’s top 10 customers in annual rent for fiscal year 2005 as reported in GSA’s State of the Portfolio. For the purposes of our review, we did not include real property disposal authority or delegations of security authority in our analysis. To assess the reliability of the delegations data we (1) reviewed related documentation, (2) conducted manual testing of certain source databases, and (3) interviewed knowledgeable agency officials about the quality of the data. As a result, we determined that the data were sufficiently reliable for the purposes of this report. Finally, to obtain information on the views and experiences of agencies with delegations we interviewed officials from six agencies. 
To select the agencies, we reviewed GSA’s data on delegations to its tenant agencies from fiscal years 1996 to 2006 and selected the two agencies with the most delegations in each category for which we had data: real estate leasing authority, administrative contracting officer authority, operations and maintenance authority, repair and alteration project authority, and utility service authority. Because we used a sample selection method, our results are not generalizable to all agencies that received delegations of real property authority. Table 5 provides a listing of the agencies we selected and interviewed. We also interviewed Judiciary officials from the Administrative Office of the U.S. Courts to discuss its decision to return delegations of operations and maintenance authority. We performed our review from August 2006 through June 2007 in accordance with generally accepted government auditing standards.
The General Services Administration (GSA) issues different types of delegations, whereby agencies may request authority to perform certain real property activities, such as leasing space and maintaining property. Effective management of the program is critical to ensuring that federal dollars are well spent and adequate workspace is provided. GAO was asked to determine (1) what real property authority GSA has delegated to its tenant agencies, (2) what policies GSA used to manage delegated authority, and (3) reasons the tenant agencies requested delegated authority. GAO reviewed the law, federal regulations, and GSA policies relating to six types of delegated authority and interviewed GSA officials and officials from six select tenant agencies. GAO analyzed GSA data on delegations issued from fiscal years 1996 to 2006. GSA delegated authority for operations and maintenance, utility services, lease management, administrative contracting officer, repair and alteration activities, and real estate leasing to its tenant agencies. However, GSA did not have complete or consistent data for key delegations. GSA officials believe the lack of complete data for repair and alteration delegations up to $100,000 was not problematic because they involve relatively small projects with limited program risk, and GSA has not noticed a pattern of problems that would warrant increased oversight. Regarding delegations of authority for real estate leasing, two offices within GSA collected separate sets of data. One office collected data on the number of general purpose lease delegations issued while another collected data on the number of lease delegations exercised for three different types of lease delegations (including general purpose, categorical, and special purpose). One office said its data are likely an undercount, and the different sets of data have not been reconciled. 
GSA is currently implementing several changes to improve its data collection for lease delegations and will issue separate oversight procedures that include a requirement to reconcile the two sources of lease delegation data. However, it is unclear when the oversight procedures will be issued. It is important to have accurate data on lease delegations because these delegations appear to be used more frequently than other delegation types. Federal agencies using these delegations may lack experience in acquiring office space, which could result in the government not receiving the best deal. We found that GSA had written policies and procedures for managing the six types of delegations we reviewed, but the guidance was not always current. GSA officials acknowledged the need to update some of its guidance and said the updates are in process, but it is unclear when these updates will be finalized. Further, GSA officials stated they did not always use mandated cost-effectiveness criteria when deciding to delegate authority for certain delegations due, in part, to staffing constraints. In addition, the procedures used for assessing cost-effectiveness were not always included in written guidance. The lack of updated guidance and limited use of mandated criteria inhibit GSA's ability to manage its delegations and determine if they are in the best interests of the government. According to the six tenant agencies we interviewed, the main reasons agencies sought delegations were the ability to complete their delegated real property activities in a timely manner and prioritize their own service requests, particularly in those cases where GSA's knowledge and expertise were less critical. Most of the six agencies we contacted plan to seek delegations in the future.
The National Aeronautics and Space Administration Authorization Act of 2010 directed NASA to, among other things, develop a Space Launch System as a follow-on to the Space Shuttle and prepare infrastructure at Kennedy Space Center to enable processing and launch of the Space Launch System as a key component in expanding human presence beyond low-Earth orbit. To fulfill this direction, NASA formally established the SLS program in 2011. The agency plans to develop three progressively more capable SLS launch vehicles, complemented by Orion, to transport humans and cargo into space. The first version of the SLS that NASA is developing is a 70-metric ton (mt) launch vehicle known as Block I. In accordance with direction contained in the NASA Authorization Act of 2010, NASA’s acquisition approach for building the initial variant of the SLS is predicated on the use of legacy systems, designs, and contracts from the Space Shuttle and its intended successor Constellation program, which was terminated in 2010 due to factors that included cost and schedule growth. Figure 1 provides details about the heritage of each SLS hardware element and its source, as well as identifies the major portions of the Orion crew vehicle.
NASA plans to use heritage hardware and new designs as follows: RS-25 engines remaining from the Space Shuttle program to provide power for up to four flights of the SLS, five-segment solid rocket boosters that were developed under the now-canceled Constellation program to provide thrust during the initial minutes of SLS flight, a cryogenic rocket stage used on United Launch Alliance’s Delta IV launch vehicle modified to operate as the Interim Cryogenic Propulsion Stage (ICPS) to provide in-space power for SLS during EM-1, a new core stage, which functions as the SLS’s fuel tank and structural backbone, derived from the Shuttle’s external tank and Ares I upper stage from the Constellation program, a new launch vehicle stage adaptor to attach and integrate the ICPS to the core stage; and a new multi-purpose crew vehicle stage adaptor to attach and integrate the SLS with the Orion vehicle. NASA has committed to be ready to conduct one test flight, EM-1, of the Block I vehicle no later than November 2018. During EM-1, the Block I vehicle is scheduled to launch an uncrewed Orion to a distant orbit some 70,000 kilometers beyond the moon. All three programs—SLS, Orion, and EGS—must be ready on or before this launch readiness date to support this test flight. NASA also intends to build 105- and 130-mt launch vehicles, known respectively as Block IB and Block II, which it expects to use as the backbone of manned spaceflight for decades. NASA anticipates using the Block IB vehicles for destinations such as near-Earth asteroids and Lagrange points and the Block II vehicles for eventual Mars missions. When complete, the 130-mt vehicle is expected to have more launch capability than the Saturn V vehicle, which was used for Apollo missions, and be significantly more capable than any recent or current launch vehicle. To enable processing and launch of the SLS and Orion, NASA established the Ground Systems Development and Operations program in 2012 at Kennedy Space Center. 
The Ground Systems Development and Operations program consists of the 21st Century Space Launch Complex Initiative and the EGS program. NASA created the 21st Century Space Launch Complex Initiative prior to the establishment of the SLS and Orion programs as a way for Kennedy Space Center to continue to make infrastructure improvements to benefit multiple users in the absence of an ongoing major human exploration program. The EGS program was established to renovate parts of Kennedy Space Center to prepare for SLS and Orion. The program consists of nine major components: the Vehicle Assembly Building, Mobile Launcher, Software, Launch Pad 39B, Crawler-Transporter, Launch Equipment Test Facility, Spacecraft Offline Processing, Launch Vehicle Offline Processing, and Landing and Recovery. See figure 2 for pictures of the Mobile Launcher, Vehicle Assembly Building, Launch Pad 39B, and Crawler-Transporter, and appendix III for a description of the nine EGS components. As the SLS and Orion programs began development, NASA shifted focus away from the 21st Century Space Launch Complex Initiative to the EGS program. For example, in fiscal year 2011, Congress appropriated NASA $142.8 million for the 21st Century Space Launch Complex Initiative and this declined to $39 million in fiscal year 2013, which was a year after EGS began receiving funding. Further, in the fiscal year 2017 president’s budget request, NASA requested $12 million to support the 21st Century Space Launch Complex Initiative. Space launch vehicle development efforts are high risk from technical and programmatic perspectives. The technical risk is inherent for a variety of reasons, including the environment in which launch vehicles operate, complexity of technologies and designs, and limited room for error in the fabrication and integration process. Managing the development process is complex for reasons that go well beyond technology and design. 
For instance, at the strategic level, because launch vehicle programs can span many years and be very costly, programs can face difficulties securing and sustaining funding commitments and support. At the program level, if the lines of communication between engineers, managers, and senior leaders are not clear, risks that pose significant threats could go unrecognized and unmitigated. If there are pressures to deliver a capability within a short period of time, programs may be incentivized to overlap development and production activities or delete tests, which could result in late discovery of significant technical problems that require more money and ultimately much more time to address. For these reasons, it is imperative that launch vehicle development efforts adopt disciplined practices and lessons learned from past programs. Best practices for acquisition programs indicate that establishing baselines that match cost and schedule resources to requirements and rationally balancing cost, schedule, and performance are key steps in establishing a successful acquisition program. Our work has also shown that validating this match before committing resources to development helps to mitigate the risks inherent in complex acquisition programs such as SLS and EGS. We have reported that within NASA’s acquisition life cycle, resources should be matched to requirements at key decision point (KDP)-C, the review that commits the program to formal cost and schedule baselines and marks the transition from the formulation phase into the implementation phase. Best practices for acquisition programs also indicate that about midway through development, the product’s design should be stable and demonstrate that it is capable of meeting performance requirements. The critical design review is the vehicle for making this determination. These programmatic milestones are called out relative to NASA’s acquisition life-cycle in figure 3 below. 
NASA approved EM-1 cost and schedule baselines for the SLS program in August 2014 and the EGS program in September 2014, following the completion of each program’s respective KDP-C review. The agency baseline commitment for the SLS program is at the 70 percent confidence level and the agency baseline commitment for the EGS program is at the 80 percent confidence level, which are both in line with NASA’s acquisition policies (see table 1). The confidence level is a probabilistic analysis that provides assurance to stakeholders that programs will meet cost and schedule targets. In addition to the committed cost and launch readiness dates, both programs are working towards internal goals of earlier launch readiness dates and lower costs. NASA considers the time between the programs’ internal goals and their committed launch readiness dates as funded schedule reserve, which is extra time, with the money to pay for it, in the program’s overall schedule in the event that there are delays or unforeseen problems. In July 2015, we found that the SLS program’s internal goal for launch readiness for EM-1 had slipped from December 2017 to July 2018. This reduced the program’s schedule reserve from eleven months to four months. In May 2016, the SLS program further delayed its internal goal for launch readiness from July 2018 to September 2018, reducing program schedule reserve to two months. EGS’s internal goal for launch readiness for EM-1 is September 2018, meaning the program currently has two months of funded schedule reserve. The SLS program has made solid progress in resolving some technical issues and maturing the SLS design, but the program’s management of known risks as well as the program’s upcoming integration and test phase puts pressure on the program’s reduced cost and schedule reserves. This pressure threatens the program’s committed November 2018 launch readiness goal. 
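The confidence level described above is, at bottom, a probabilistic question: given uncertainty in when work will finish, what is the likelihood of meeting the committed date? The following minimal Monte Carlo sketch illustrates the idea; the triangular outcome distribution and its parameters are assumptions chosen for illustration, not NASA's or GAO's actual analysis.

```python
import random

def schedule_confidence(internal_goal, committed, trials=100_000, seed=1):
    """Estimate the probability that completion occurs on or before the
    committed date. The triangular spread of possible outcomes around
    the internal goal is an assumption for illustration only."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        # Possible completion dates, in months relative to the internal goal:
        # best case 2 months early, worst case 8 months late, mode 1 month late.
        outcome = random.triangular(internal_goal - 2,
                                    internal_goal + 8,
                                    internal_goal + 1)
        if outcome <= committed:
            hits += 1
    return hits / trials

# Two months of funded schedule reserve between the internal goal
# (month 0) and the committed launch readiness date (month 2).
level = schedule_confidence(internal_goal=0, committed=2)
```

Under any such model, widening the gap between the internal goal and the committed date raises the confidence level, which is one way to see why the erosion of the SLS program's funded schedule reserve from eleven months to two months matters.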
The SLS program has made progress in resolving some technical issues that we previously reported on. For example, prime contractor officials for the core stage stated that they had implemented all corrective actions necessary to repair a problem with the stage’s tooling. Further, the program met its design goals by demonstrating the program’s design was stable enough to warrant continuation. As the program continues with final design and fabrication, the program faces known risks. Such risks are not unusual for large-scale programs, especially human exploration programs, but the program’s management of these risks may increase pressure on reduced cost and schedule reserves. For example, the SLS program has not positioned itself well to provide accurate assessments of progress with the core stage—including forecasting impending schedule delays, cost overruns, and estimates of anticipated costs at completion—because, at the time of our review, NASA did not have a performance measurement baseline necessary to support full earned value management reporting on the core stage contract. Finally, unforeseen technical challenges are likely to arise once the program reaches its next phase, final integration for SLS and integration of SLS with its related Orion and EGS human spaceflight programs that will likely place further pressure on cost and schedule reserves. The SLS program has made solid technical progress developing its primary elements, but at times, the progress has had associated cost increases or schedule delays. Examples of this development progress— and the unexpected difficulties encountered achieving that progress— include the following: Core stage. In November 2015, prime contractor officials for the core stage stated that they had implemented all corrective actions necessary to repair a subcontractor’s improper installation of the welding tool used to manufacture the 212-foot-tall stage. 
These actions were necessary because, as we reported in July 2015, NASA officials told us that the tooling problems, if left unresolved, would have prevented production of the core stage. As we reported in March 2016, identifying and implementing the corrective actions was the major contributor to a decrease in the program’s schedule reserves from 11 months to 4 months. In addition to resolving the tooling’s misalignment, the SLS program is making progress with fabricating test articles for core stage component testing, constructing new test stands where those components will be subjected to structural testing, and modifying an existing test stand to support hot-fire testing of the assembled core stage. SLS program officials stated that they have also made progress fabricating the EM-1 flight engine section. RS-25 engines. In 2015, the program successfully tested RS-25 developmental engines and in March 2016 performed hot-fire testing of a flight engine. According to NASA officials, these tests demonstrated the engine could be operated under the conditions it will encounter when integrated into SLS. The program also began production of the new engine controller, which directs the RS-25 engines during flight. The contractor, however, is forecasting a potential cost overrun of $113 million on the engine contract, largely due to overruns stemming from developing the controller. According to NASA officials, however, the potential overrun has not affected the overall program cost or schedule. The factors that contributed to the overrun include higher than expected parts costs, resolving anomalies discovered in developmental testing, and increasing staffing levels at the subcontractor to meet schedule demands. NASA officials indicated that the controller design has been tested in development and the controller’s qualification testing is front-loaded to drive out problems early in the test sequence; however, the new controller will not complete all testing before engine deliveries begin.
According to NASA, if that testing uncovers the need for modifications to the controller, engines already delivered may have to be brought back from the flight line so that modifications can be implemented. Solid Rocket Boosters. The program completed the first qualification test of a fully assembled booster in March 2015. Prior X-ray examination of a booster segment had revealed the presence of unexpected unbonds between the solid rocket propellant, the propellant liner, and the new asbestos-free insulation of the solid rocket boosters that could have potentially caused an explosion. Resolving the unbond issue contributed to a delay of 20 months in full-scale qualification testing and, according to NASA officials, the contractor’s forecast of a potential $129 million cost variance on the contract did not affect the overall program cost or schedule. The program is planning to complete a second qualification test of a fully assembled booster sometime between May and July 2016, which NASA officials anticipate will further confirm resolution of the unbond issue. ICPS. In October 2015, the SLS program completed work on the test version of the ICPS. Additionally, in December 2015 the SLS program began construction of the ICPS liquid oxygen tank, which will provide liquid oxygen to help power the ICPS. In addition, the program as a whole met best acquisition practices design goals by releasing approximately 92 percent of design drawings for the program-level Critical Design Review (CDR) in July 2015. Because the CDR is the time in a project’s life cycle when the integrity of a project’s design and its ability to meet mission requirements are assessed, it is important that a project’s design is stable enough to warrant continuation with design and fabrication, which is evidenced by release of 90 percent of design drawings at CDR. A stable design allows projects to “freeze” the design and minimize changes prior to beginning the fabrication of hardware. 
It also helps to avoid re-engineering and rework efforts due to design changes that can be costly to the project in terms of time and funding. As the program continues with final design and fabrication, the program faces known risks. Such risks are not unusual for large-scale programs, especially human exploration programs which are inherently complex and difficult. The program’s management of these risks, however, may increase pressure on already reduced cost and schedule reserves. Although the program is making progress resolving some technical challenges with the core stage, the core stage development schedule remains aggressive and any additional delays will threaten the SLS program’s readiness for its internal goal of launch readiness by September 2018. As of May 2016, the core stage development effort had approximately 50 days of schedule margin—or time within the schedule where activities can be delayed before affecting a key milestone, which for the core stage is delivery to Kennedy Space Center to begin integrated operations with the Orion and EGS programs. Figure 4 shows the approximately 50 days of core stage schedule margin as well as the 2 months of SLS program schedule reserve. In addition, because the core stage is the SLS program’s critical path— the path of longest duration through the sequence of activities that determines the program’s earliest completion date—any delay in its development reduces schedule reserve for the whole program. And with only 2 months of schedule reserve remaining between the program’s internal goal and committed launch readiness date of November 2018, any reduction in program reserves threatens the committed launch readiness date. As of April 2016, the SLS program was tracking core stage risks, including late component delivery and concerns about application of the thermal protection system that provides heat shielding, which could require the program to use some of the core stage’s margin. 
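The critical path defined above can be computed as the longest-duration path through the network of activities and their dependencies: the earliest finish of any activity is its own duration plus the latest earliest-finish among its predecessors. A small sketch of that computation follows; the activity names and durations are invented for illustration and are not the SLS program's actual schedule.

```python
# Hypothetical activity network: name -> (duration in months, predecessors).
# All activities and durations here are illustrative assumptions.
activities = {
    "engines":       (10, []),
    "boosters":      (12, []),
    "core_stage":    (24, []),
    "green_run":     (4,  ["core_stage", "engines"]),
    "stack_vehicle": (3,  ["green_run", "boosters"]),
}

def earliest_finish(act, memo={}):
    """Earliest finish = duration + latest earliest-finish among
    predecessors, i.e., the longest path into this activity."""
    if act not in memo:
        dur, preds = activities[act]
        memo[act] = dur + max((earliest_finish(p) for p in preds), default=0)
    return memo[act]

# The earliest completion of the whole effort is set by the longest
# (critical) path: here, core_stage -> green_run -> stack_vehicle.
total = max(earliest_finish(a) for a in activities)
```

Any delay to an activity on that longest path pushes out the completion date one-for-one, which is why a slip in core stage development consumes reserve for the whole SLS program.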
Further, the SLS Standing Review Board—an independent NASA team responsible for reviewing SLS at each major program milestone—found in a 2015 report that it was unlikely the core stage would be able to support the SLS program’s committed date for launch readiness. The Board cited several factors for its finding, including a steep learning curve for the handling and alignment of such a large stage, the potential for human access issues to avionics and propulsion plumbing once the stage is assembled, and that the green run test—the culminating test of core stage development where the actual EM-1 core stage flight article will be integrated with the cluster of four RS-25 engines and fired for 500 seconds under simulated flight conditions—carries risks because it is the first time the cluster of four RS-25 engines will be fired, the first time the integrated engine and core stage auxiliary power units will be tested in flight-like conditions, and the first time flight and ground software will be used in an integrated flight vehicle. Green run test activities are currently scheduled to begin in October 2017. Boeing and SLS program officials stated that they are working to establish additional margin within the core stage schedule, but whether the core stage stays on schedule is largely dependent on the success of the green run test. Boeing officials told us that they originally had margin in their schedule for a second green run test if needed, but that it was removed due to the tight schedule. NASA officials acknowledged that this schedule existed; however, they also stated that the contingency test was considered “unauthorized work” for the contractor and the program baseline only calls for one test. Further, NASA officials stated that if the test is not successful, then a re-test may have to occur. Additionally, they stated that under current plans, any time required to conduct a re-test would have to come from program schedule margin or reserve.
As a result, if the program uncovers unexpected performance issues during green run testing, maintaining the core stage schedule—and thus the program schedule—may prove difficult. The SLS program has also not positioned itself well to provide accurate assessments of progress with the core stage because it has never had a performance measurement baseline for the core stage that is necessary to support full earned value management reporting. Earned value, or the planned cost of completed work and work in progress, can provide accurate assessments of project progress, produce early warning signs of impending schedule delays and cost overruns, and provide unbiased estimates of anticipated costs at completion. The use of earned value management, which integrates the project scope of work with cost, schedule, and performance elements for optimum project planning and control, is advocated by both GAO’s best practices for cost estimating and NASA’s own guidance. According to a SLS program official, when the program and contractor conducted its integrated baseline review—a joint assessment of the performance measurement baseline by the government and contractor—the program realized the contractor’s plans assumed synergies between the core stage and exploration upper stage efforts that would produce cost savings for the contractor but NASA did not have the funding to begin this work under the same time frames identified by the contractor. A SLS program official told us that NASA asked Boeing to start replanning activities with a proposal that removed the exploration upper stage development from this contract action. In May 2016, NASA and Boeing signed the contract replan—with a cost increase of approximately $1 billion, from about $4.2 billion to about $5.2 billion. 
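The earned value measures described above reduce to simple ratios of planned value, earned value, and actual cost, from which cost and schedule efficiency and an estimate at completion can be derived. A hedged sketch of those standard indices follows; the dollar figures are invented for illustration and are not actual SLS contract data.

```python
def earned_value_metrics(planned_value, earned_value, actual_cost,
                         budget_at_completion):
    """Standard earned value management indices. All inputs here are
    hypothetical figures, not data from the core stage contract."""
    cpi = earned_value / actual_cost    # cost performance index: <1 means cost overrun
    spi = earned_value / planned_value  # schedule performance index: <1 means behind schedule
    # Estimate at completion, assuming current cost efficiency persists.
    eac = budget_at_completion / cpi
    return cpi, spi, eac

# Hypothetical snapshot: $1.8B of work planned, $1.5B worth completed,
# at an actual cost of $1.7B, against a $5.2B budget at completion.
cpi, spi, eac = earned_value_metrics(1.8, 1.5, 1.7, 5.2)
```

With indices like these, a program can detect early that completed work is costing more and arriving later than planned; without a performance measurement baseline to anchor them, as was the case for the core stage contract, those warning signs are unavailable.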
However, according to program officials it will probably be summer 2016 before the program receives contractor earned value management data derived from the new performance measurement baseline—some 4.5 years after contract award. Without this information, the program has been in a poor position to understand the extent to which technical challenges with the core stage are having schedule implications or the extent to which they may require reaching into the program’s cost reserves. The latter is of concern because as we found in July 2015, NASA maintains low cost reserves for this program—about $50 million per year—because program officials stated it has been necessary to sustain a flat funding profile for SLS as compared to other programs. Further, at SLS’s KDP-C, NASA approved the program to proceed with cost reserves of less than 2 percent leading to launch readiness, even though requirements for Marshall Space Flight Center—the NASA center with responsibility for the SLS program—indicate that standard cost reserves for launch vehicle programs should be 20 percent at KDP-C. In addition to cost and schedule pressures stemming from the core stage, development of the flight software—the software that controls the first phase of SLS flight from liftoff through booster separation and up to main engine cut off—may require more time than the SLS program anticipates because the program made a decision to defer its most rigorous testing until software development nears completion. SLS software developers have been testing flight software at the end of each of the first five primary SLS software releases, with the scope of testing in each release isolated to the set of requirements for that respective release. 
They plan to perform the most rigorous testing of the software when the development reaches the release that will be used for flight qualification testing, beginning in March 2016, which will include testing against the most comprehensive set of requirements at that point. The deferral of the most rigorous testing until the flight qualification release, however, means that the program’s understanding of the defects to this point may not be as complete as it believes. This may, in turn, delay completion of software development while the program takes the time necessary to resolve defects. As we found in a September 2015 report assessing a Veterans Benefits Administration software-based processing system, successful system testing includes appropriately identifying and handling defects that are discovered during testing. In addition, we found that outstanding defects can delay the release of functionality to end users, denying them the benefit of features. The program has allotted one future contingency release at the end of the software effort for defect repairs, but delaying the discovery of defects increases the risk that potential problems will remain undiscovered until the point when few cost or schedule reserves are available to correct deficiencies. Even if the development phase does not consume any additional cost and schedule reserves, the SLS program’s EM-1 integration and test phase may require those resources. Our prior work has shown that this period often reveals unforeseen challenges leading to cost growth and schedule delays. Likewise, although superseded through revision, NASA program management guidance from 2010 states that integration and testing are among the periods of peak spending, when schedule delays are most costly, and that programs should maintain sufficient reserves to address issues encountered during that time, and unknown risks can be managed only by maintaining sufficient reserves. 
Compounding this already risky time period is that the threat to SLS program reserves is two-fold because SLS EM-1 launch readiness involves in essence two integration efforts. The first integration effort is to assemble SLS as a launch vehicle and the second is a cross-program integration effort, which means integrating SLS, Orion, and EGS to achieve launch readiness in 2018. Integrated launch readiness for EM-1 is dependent on the success of the individual SLS, Orion, and EGS integration efforts. If delays materialize during individual systems integration and testing, for example, there could be a cascading effect of cross-program problems. Booster component shelf life provides a good illustration of this point. According to program officials, there is a limit on the amount of time the SLS boosters may remain in a stacked configuration that, if exceeded, would necessitate destacking and replacement of limited-life items. Program officials told us that NASA will review these limited-life items prior to stacking the integrated vehicle, but if launch is delayed longer than limited-life time frames allow, NASA would have to disassemble SLS from Orion back in the Vehicle Assembly Building. Such an effort could have broad cost and schedule impacts across the three programs. NASA’s Human Exploration and Operations Mission Directorate, which oversees development of the SLS, EGS, and Orion programs, plans to conduct a “build-to-synchronization” review in summer 2016 to demonstrate that the integrated launch vehicle, crew vehicle, and ground systems will perform as expected to meet EM-1 objectives. Human Exploration and Operations Mission Directorate officials told us that there is no existing NASA guidance to direct what the build-to-synchronization review should entail, but that they are tailoring requirements, with agency leadership concurrence, from NASA program management guidance for critical design review. 
According to these officials, the review will serve essentially as an EM-1 integration critical design review for the programs. According to NASA program management requirements, a critical design review for a NASA program would not only evaluate the integrated design, but also evaluate whether it meets mission requirements with appropriate margins and acceptable risk within cost and schedule constraints. As of March 2016, officials leading the planning efforts for the build-to-synchronization review told us that they were working on developing the terms of reference—which include review objectives and success criteria—but that they anticipate only limited discussion of cost and schedule because the review will focus first and foremost on the hardware and software design maturity of the three programs. Understanding the technical scope required for EM-1 integrated readiness, however, goes hand-in-hand with knowledge about how much money and time the individual programs will require to achieve that readiness. By forgoing a re-evaluation of cost and schedule reserves at the time it assesses technical scope for EM-1, especially in light of known pressures on the SLS program’s reserves, NASA risks missing an opportunity to re-evaluate whether sufficient resources are available to respond to unforeseen challenges during the integration and testing phase. Beyond EM-1, the SLS program continues to face technical as well as cost and schedule risks. For example, for Exploration Mission 2 (EM-2), the program will be transitioning from the ICPS in-space propulsion element to an exploration upper stage providing both ascent performance and in-space capability. NASA had intended to use the ICPS for EM-2, which is planned to launch a crewed Orion vehicle beyond the moon to further test performance. However, the ICPS is not certified to support crewed flight, and NASA estimated it would have to spend at least $150 million on that effort to fly a crewed mission.
The Explanatory Statement to the Consolidated Appropriations Act, 2016, while not law, prohibited the use of NASA funds to human-rate the ICPS. In addition, as part of the fiscal year 2016 NASA Exploration appropriation, Congress provided that no less than $85 million of the appropriations should be used for the development of a new exploration upper stage necessary to build the Block IB vehicle for deployment on EM-2. NASA officials told us that the agency intends to have the exploration upper stage complete for EM-2. They also stated that they are currently developing a test plan, which includes examining the risk of performing only ground testing of the exploration upper stage because current plans do not allow for a separate flight test of the stage prior to EM-2. The EGS program is maturing selected systems, but the program is encountering technical challenges that require both time and money to fix. Further, the program has limited cost and schedule reserves remaining to address risks should they materialize. This pressure threatens the program’s committed November 2018 launch readiness goal. Program management has identified the Vehicle Assembly Building and Mobile Launcher as projects along the critical path and software as a high-risk component of the EGS program. All three of these projects have experienced delays, and the Vehicle Assembly Building and Mobile Launcher have no schedule margin remaining to overcome any future technical challenges. As a result, any future delays would have to be accommodated by using the overall program’s schedule reserve. The program’s schedule reserve, however, has been reduced over time and now stands at 2 months. Further, the program is operating with reduced cost reserves to address any future construction and software risks. These reserves will likely be tested further once the program begins integration with SLS and Orion, as delays in any one program can have a cascading effect.
The Vehicle Assembly Building was built in 1966 as a facility to assemble the Apollo program’s Saturn V moon rocket, and part of the building is being refurbished by the EGS program to accommodate SLS and Orion. Updating the building is a large undertaking as it includes removing about 150 miles of Apollo-era cabling, improving the elevators, upgrading cranes, and incorporating fire safety improvements. EGS officials stated that the age of the building adds even more challenges, such as dealing with outdated building drawings and uncertain field conditions. The most significant of the Vehicle Assembly Building projects is the fabrication and installation of 10 new platforms, which will allow access to the integrated SLS and Orion vehicles during final assembly. See figure 5 for a photograph of the Vehicle Assembly Building and an illustration of the building’s platforms. Complications with the Vehicle Assembly Building’s platform design and installation have required an additional $16 million to resolve—funding that the EGS program drew from program reserves and from the launch pad project, which still has development work remaining. Additionally, the project has exhausted its schedule margin, and any additional delays would have to be addressed through the use of program-level schedule reserves. During testing, NASA observed that the test platform could not roll out properly, and the program was forced to modify the design of the platforms midway through construction. Resolution of these design issues involved modifying key mechanical components and installing shims to properly align the platform during rollout. Program officials said that this issue has been addressed and the fix is being implemented on all nine subsequent platforms. NASA is prepared to install additional shimming during platform installation if necessary.
NASA’s interim assessment of the design contractor for the platforms highlighted numerous quality issues during design; however, NASA officials ultimately found the design product to be of acceptable quality and determined that cost and schedule requirements had been met. Additionally, in December 2015, the first platform was installed in the Vehicle Assembly Building, but was removed shortly thereafter because of an installation issue. According to agency officials, the platform “flexed” slightly when it was lifted via crane for installation due to the weight of the platform in relation to the lifting points. This flexing kept the platform from fitting as designed on the bracket that allows the platforms to be moved to different elevations. The program has designed and fabricated an installation tool to prevent the platform from flexing when it is lifted for installation. EGS officials estimate that, if platform design challenges continue, they could delay the completion of the Vehicle Assembly Building by up to 3 months, which would affect the EGS program’s schedule overall. For example, construction on the building’s platforms is slated to end immediately before the Mobile Launcher is moved into the Vehicle Assembly Building; officials said there is no margin for additional delays on the building if it is to be ready for the Mobile Launcher in time. If additional delays materialize with the Vehicle Assembly Building, the program would need to reduce its overall schedule reserve. The Mobile Launcher was originally developed as part of the Constellation program, but was never used because of the program’s cancellation in 2010. After the cancellation, EGS began modifying the Mobile Launcher to support what is now SLS. The EGS program is modifying the Mobile Launcher to support the assembly, testing, prelaunch check-out and servicing of the SLS rocket, as well as to transfer SLS and Orion to the launch pad and provide the platform from which they will launch.
According to EGS officials, the Mobile Launcher is the most complex EGS component because it contains more than 900 pieces of ground support equipment needed to support SLS and Orion. Ground support equipment includes subsystems for propellant and gases, electronic control systems, communication systems, and access platforms. Figure 6 is a photograph of the Mobile Launcher. The EGS program has experienced delays and design challenges with the Mobile Launcher and has no project-level schedule margin remaining to meet the program’s internal goals for operations and launch readiness. Any additional delays would have to be addressed through the use of program-level schedule reserves. The EGS program has completed all major structural changes to the Mobile Launcher, such as adding reinforcements to the Mobile Launcher’s structure to accommodate SLS height and weight, but the program must still complete the design and installation of the ground support equipment and the nine umbilicals that connect the Mobile Launcher directly to the SLS and Orion. The program has experienced design challenges and late hardware deliveries with two of these umbilicals: the ICPS umbilical, which supplies power, fuel, and cooling between the SLS upper stage and the Mobile Launcher, and the tail service mast umbilical, which provides liquid hydrogen and oxygen to SLS during launch. Further, there have been ground support equipment and umbilical design changes both during and after the Mobile Launcher’s design phase because of vehicle requirement changes from SLS and Orion. EGS used nearly 22 percent of its schedule margin to accommodate these changes. Additionally, requirement changes during and after ground support equipment subsystems’ design have led to the Mobile Launcher’s ground support equipment being designed concurrently with its installation.
The program has identified a program risk that conducting these activities concurrently could lead to a potential cost increase of up to $10 million and schedule delays of up to 8 months. The Mobile Launcher project plans to begin its project-level verification and validation before installation of the ground support equipment and umbilicals is complete because the project has no schedule margin remaining. Officials acknowledged that conducting the Mobile Launcher’s verification and validation concurrent with ground support equipment systems and umbilicals installation increases risk because of uncertainties regarding how systems not yet installed may affect the systems already installed. EGS officials indicated that this concurrent approach to installation and verification and validation meets all program test objectives and enables the Mobile Launcher effort to stay on schedule to support the program’s internal launch readiness date. EGS’s software development efforts—Spaceport Command and Control System (SCCS) and Ground Flight Application Software (GFAS)—are behind schedule compared with program plans. The development efforts face challenges that include the need for requirements-related information from the SLS and Orion programs. EGS is developing these two software systems concurrently—SCCS is to operate and monitor ground equipment needed to launch and communicate with the integrated SLS and Orion vehicles, and GFAS is to interface with flight systems and ground crews. EGS software was immature at the program’s critical design review, and EGS’s Standing Review Board considers the program’s software development effort the highest risk area. The Standing Review Board found in February 2016 that the SCCS and GFAS developments are currently underperforming, are understaffed, and are waiting on requirements definition from the two flight element programs.
Completion of SCCS and GFAS depends on the SLS and Orion programs also finishing work on schedule; however, because SCCS and GFAS are among the last EGS activities scheduled to finish prior to integrated operations, delays in their completion could force a delay in the program’s committed November 2018 launch readiness date. SCCS Architecture: SCCS development is behind its planned software release schedule. Program officials attributed the delays, in part, to requirements maturing late from SLS and Orion. For example, according to EGS officials, there were initially supposed to be two content drops, wherein additional functionality is added, for the last two versions of SCCS. However, as of the program’s critical design review in late 2015, the two drops had evolved into six content drops. Program officials stated that the evolution from two drops to six enables content to be released in an as-needed phased approach to meet stakeholders’ needs and use resources more efficiently. In March 2016, the NASA Office of the Inspector General reported that SCCS is more than a year behind schedule and significantly over cost, and that, because of cost and timing pressures, several planned software capabilities have been deferred, including the ability to automatically detect the root cause of specific equipment and system failures. The Office of Inspector General concluded that these issues largely result from unanticipated complexity in the way NASA has approached SCCS’s development. Likewise, program officials told us that developers initially expected ground systems, Orion, and SLS to require a total of 300,000 compact unique identifiers, or information fields. However, these officials said that because EGS is developing software as Orion and SLS are developed, complete information on how many information fields were necessary for each program was unavailable at the beginning of the development effort.
SCCS officials have identified a risk that there may be a need for more than 300,000 total information fields, which could degrade the software system’s performance and result in cost and schedule overruns. As the program’s Standing Review Board concluded, much of EGS’s software development work is heavily dependent on the final requirements of the SLS and Orion programs, both of which are still in development. Program officials indicated that, as development of all three programs has progressed and EGS has received more information about the requirements of SLS, Orion, and ground systems, SCCS’s complexity has increased. To address the added complexity, the EGS program increased its workforce, but the overall schedule is challenged by hiring difficulties in a highly competitive environment. The EGS program is using the same developers to develop content for multiple phased deliveries, and the next content drop has been threatened by the developers’ delayed transition from prior drops. GFAS Application Software: GFAS development is facing challenges because necessary operational requirements from SLS and Orion are not yet available. GFAS officials told us that they were optimistic in their planning regarding the availability of requirements from SLS and Orion to support software development. For example, EGS officials said that they expected more mature information about operational requirements to come out of the Orion and SLS critical design reviews than what they received. In September 2015, after EGS officials did not receive early information as they had anticipated, the program conducted a schedule replan and said they planned to hire more staff to reduce the risk to the program. The GFAS effort currently plans to deliver its last content drop in February 2018, after the program has begun integrated operations with SLS and Orion. Figure 7 illustrates the GFAS content drop schedule with EGS’s schedule milestones.
The EGS program has identified two program risks indicating that development of GFAS could be delayed by up to a combined 9 months and that costs could increase by up to a combined $3.2 million because GFAS is dependent in part on SCCS development progress. According to the program’s Standing Review Board, the risk is that the necessary software will not be available when needed to meet EGS critical milestones and could affect the agency’s November 2018 launch readiness commitment date. Overall, the EGS program is operating with limited cost reserves to address any future construction and software risks. The EGS program is operating in fiscal year 2016 with cost reserves of about $13 million, or about 3 percent of its fiscal year 2016 budget. Program budget documents indicate that the program expects its cost reserve posture to improve to 13 percent and 9 percent in fiscal years 2017 and 2018, respectively, and to level out at around 5 percent in subsequent years. Kennedy Space Center, which manages the EGS program, does not have guidance for cost reserves. However, other NASA centers, such as the Goddard Space Flight Center—the NASA center with responsibility for managing other complex NASA programs such as the James Webb Space Telescope—have requirements for the level of both cost and schedule reserves that projects must have in place at KDP-C. At KDP-C, Goddard flight projects are required to have cost reserves of 25 percent or more through operational readiness. At EGS’s KDP-C, however, the program had cost reserves of only 4 percent leading to launch readiness. According to EGS’s Standing Review Board in 2016, the remaining cost risks to EGS are greater than the program’s current reserve balance. Our analysis of the maximum potential impact of Mobile Launcher and Vehicle Assembly Building cost risks on EGS cost reserves supports this assessment.
For example, based on the program’s February 2016 risk assessment, the EGS program could see maximum cost increases of $10 million for the Mobile Launcher and $11 million for the Vehicle Assembly Building, a combined $21 million that substantially exceeds the program’s roughly $13 million fiscal year 2016 reserve. Although these cost increases may not occur in only one fiscal year and could be less than the maximum value, they could still impact the program if the planned reserves are either not available as expected or are not sufficient to cover needs. The EGS program is also operating with reduced schedule reserves to address future construction and software issues. At the time NASA established EGS’s agency baseline commitment, the program had 5 months of funded schedule reserve between its internal planning date (June 2018) and its committed launch readiness date (November 2018). The program is now internally planning to a launch readiness date of September 2018, which reduces the program’s schedule reserve to 2 months. However, the EGS program must be ready well in advance of this launch readiness date in order to integrate SLS and Orion at the Kennedy Space Center, and the program plans to be ready to begin integrated operations with SLS in January 2018. EGS has 3 months of margin before the start of integrated operations, and 1 additional month of margin before its internal goal for launch. See figure 8 for a timeline of EGS’s lifecycle relative to SLS for EM-1. Moving forward, relying on the critical path to determine available reserves may prove problematic because the program’s scheduling practices are fairly limited. The EGS program identifies its critical path as including the Vehicle Assembly Building and the Mobile Launcher in program quarterly management reports, but we were not able to replicate this critical path.
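The reserve-versus-risk comparison described above reduces to simple arithmetic. The sketch below restates the report’s fiscal year 2016 figures (in millions of dollars) to make the potential shortfall explicit; the calculation is illustrative only, not a NASA budgeting tool, and assumes the worst case in which both maximum risk impacts occur against the fiscal year 2016 reserve.

```python
# Illustrative comparison of EGS maximum cost-risk exposure against the
# program's fiscal year 2016 cost reserve (figures from the report, $ millions).
fy2016_reserve = 13.0  # about 3 percent of the FY2016 budget

max_risk_impacts = {
    "Mobile Launcher": 10.0,           # maximum potential cost increase
    "Vehicle Assembly Building": 11.0,  # maximum potential cost increase
}

combined_exposure = sum(max_risk_impacts.values())
shortfall = combined_exposure - fy2016_reserve

print(f"combined exposure: ${combined_exposure:.1f}M")  # combined exposure: $21.0M
print(f"worst-case shortfall: ${shortfall:.1f}M")       # worst-case shortfall: $8.0M
```

As the report notes, the impacts may be spread across fiscal years and may fall short of the maximums, so the worst-case figure overstates the likely single-year gap.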
Our analysis of the EGS critical path identified inconsistencies between the critical path identified by the software used to create and maintain the program’s integrated master schedule and the critical path called out in the program’s quarterly management reports. Our best practices for scheduling indicate that a program’s integrated master schedule should identify the program’s critical path rather than critical activities being selectively chosen based on what management has determined to be important. Establishing a valid critical path is necessary for examining the effects of any activities slipping along this path. Based on our limited review, the two critical paths do not match. EGS program management acknowledged that the two paths did not match, and indicated that they intentionally do not rely solely on the scheduling software’s generated critical path because it includes non-EGS development activities, such as SLS and Orion flight hardware deliveries. We plan to further research the inconsistencies we identified as part of planned future work on NASA’s human exploration systems. Integration of EGS with the SLS and Orion programs will be reviewed by the Human Exploration and Operations Mission Directorate at the build-to-synchronization review in summer 2016. As with the SLS program, EGS’s integrated flight readiness for EM-1 depends on the technical and programmatic stability of all three human spaceflight programs—EGS, SLS, and Orion. Further, threats to the margin and schedule reserve for EGS can occur from delays within the program or delays within the Orion and SLS programs.
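The scheduling principle at issue, that the critical path should be derived from the logic of the integrated master schedule rather than selected by management judgment, can be illustrated with a toy activity network. The activities, durations, and dependencies below are hypothetical and are not taken from the EGS schedule; the sketch simply shows how the longest-duration chain through the network determines the critical path.

```python
# Derive the critical (longest-duration) path from a small activity network.
# Activities must be listed so that each appears after its predecessors.
def critical_path(durations, predecessors):
    finish = {}  # earliest finish time for each activity
    chain = {}   # longest chain of activities ending at each activity
    for act, dur in durations.items():
        preds = predecessors.get(act, [])
        if preds:
            best = max(preds, key=lambda p: finish[p])  # latest-finishing predecessor
            finish[act] = finish[best] + dur
            chain[act] = chain[best] + [act]
        else:
            finish[act] = dur
            chain[act] = [act]
    end = max(finish, key=finish.get)  # activity whose chain drives total duration
    return chain[end], finish[end]

# Hypothetical activities with durations in months.
durations = {"A": 4, "B": 2, "C": 6, "D": 3}
predecessors = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
path, total = critical_path(durations, predecessors)
print(path, total)  # ['A', 'C', 'D'] 13
```

Any slip in an activity on the derived path delays the end date by the same amount, which is why selecting critical activities by judgment alone, as the quarterly reports appear to do, can misstate where schedule reserve is actually consumed.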
In particular, if the SLS core stage delivery date to Kennedy Space Center slips beyond the March 2018 date depicted in the above figure, NASA will have less time for integrated operations, which will ultimately threaten the launch readiness date of November 2018. NASA established the SLS and EGS programs to support deep space exploration by humans, but the ability to launch its first exploration mission with these programs by the committed date of November 2018 is threatened. In some cases, the threat comes from technical challenges that are not unusual for large-scale projects, but may take more time and money than the program has reserves to address. In other cases, NASA’s approach to dealing with the known risks is exacerbating the challenges. For example, in some cases the SLS program has not positioned itself well to accurately forecast and proactively manage potential schedule delays and cost overruns, which, in turn, may ultimately lead to cost and schedule growth that could stretch the program beyond its committed baseline. An opportunity is nearing in NASA’s upcoming build-to-synchronization process to not only determine whether the integrated launch vehicle, crew vehicle, and ground systems will perform as expected to meet EM-1 objectives, but to also revisit whether cost and schedule reserves are sufficient. Given the mission of the EM-1 test flight, NASA does not have to meet a specific schedule window for its launch date as it often does with planetary missions. As a result, NASA is in the position of being able to make an informed decision about what is a realistic launch readiness date. Without a re-evaluation of cost and schedule reserves, both programs may continue to make decisions that sacrifice knowledge in pursuit of a schedule that is not realistic.
Until such a re-evaluation occurs, the American public and Congress, who are the beneficiaries of NASA’s technological advances, will not have a clear picture of the time and money needed to support these efforts. In order to ensure available cost and schedule margins are sufficient to meet the synchronized goals for launch readiness and related activities, we recommend the NASA administrator direct the Human Exploration and Operations Mission Directorate, as it finalizes its schedule and plans for EM-1 during the planned build-to-synchronization review, to re-evaluate SLS and EGS cost and schedule reserves based on results of the integrated design review in order to take advantage of all available time resources and maximize the benefit of available cost reserves, and to verify that the November 2018 launch readiness date remains feasible. We provided a draft of this report to NASA for review and comment. Its written comments are reprinted in appendix IV of this report. NASA concurred with our recommendation to re-evaluate SLS and EGS cost and schedule reserves based on results of the build-to-synchronization review, but stated that further direction from the Administrator to the program is not necessary, as this activity is already underway. We are encouraged that since our discussions with the program regarding the scope of the build-to-synchronization review, and providing NASA with a draft of this report for comment, the agency has incorporated plans to address the processes and capabilities in place to continue managing the enterprise within cost and schedule constraints, including available margins, as part of the build-to-synchronization review and that the SLS management agreement to EM-1 is being updated to align with program and enterprise execution plans.
To further satisfy our recommendation’s intent, we anticipate that NASA’s actions could encompass a full examination of the integrated schedule for the programs, to help ensure that an individual program does not plan on using limited reserves to meet the planned launch readiness date if the November 2018 date is not feasible for all the programs. NASA stated that the results of its build-to-synchronization review will be reported to the NASA Program Management Council by November 30, 2016. NASA also provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to NASA’s Administrator and to appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. According to Exploration Ground Systems (EGS) officials, the program does not track how EGS investments could benefit users beyond the Space Launch System (SLS) and Orion, but we found that the majority of EGS funds are being used to develop major components that will be exclusively used by SLS and Orion or require some modification to be used by another user. We found that EGS components fall into three categories: components that could be used by users beyond SLS and Orion with no modifications, providing they are not in use by SLS and Orion; components that could be used with some modification; and components that are solely for SLS and Orion. For example, the Mobile Launcher has nine umbilicals and, according to EGS officials, over 900 pieces of ground support equipment to support SLS and Orion.
According to National Aeronautics and Space Administration (NASA) officials, while the steel structure and platform of the Mobile Launcher could be used by another user, that user would have to meet weight limits of the structure and would need to design and install entirely new specialized equipment. Five components—among them, the Vehicle Assembly Building, Crawler-Transporter, and the Multi-Payload Processing Facility (part of Spacecraft Offline Processing)—have received funding from the 21st Century Space Launch Complex Initiative, which focuses on modernizing the infrastructure to support multiple users at Kennedy, in addition to EGS funding. These components can, with some or no modifications, be used by other users. The Crawler-Transporter, for example, has been upgraded by EGS in order to support the combined weight of the Mobile Launcher, SLS, and Orion, but according to EGS officials could be used by any user to transport equipment as long as the equipment was within the Crawler-Transporter’s carrying capacity. The majority of EGS funds obligated to date are to develop components that require some modifications or will be exclusively used by SLS and Orion. See table 2 for allocation of Ground Systems Development and Operations (GSDO) funding between EGS and the 21st Century Space Launch Complex Initiative. As seen in the above table and its accompanying notes, from fiscal year 2012, when the EGS program started, to fiscal year 2015, the EGS program obligated $1,495.4 million to develop components for SLS and Orion. In the same years, $49.8 million from the 21st Century Space Launch Complex Initiative was used for some EGS components that may benefit users beyond SLS and Orion.
To assess the extent to which the Space Launch System (SLS) program made progress in meeting cost and schedule commitments, we compared current program status with the National Aeronautics and Space Administration’s (NASA) cost and schedule baselines for executing Exploration Mission 1 (EM-1) in 2018. We reviewed top SLS program and element-level risks as identified by NASA; analyzed the results of the SLS July 2015 critical design review to determine what software and hardware efforts present the highest risk to program cost and schedule; and reviewed monthly earned value management reports to identify the largest impacts on cost and schedule. In addition, we assessed SLS design production maturity against established knowledge-based best practice standards. We compared the status of flight software development efforts and progress against NASA’s planned release schedule and reviewed the metrics NASA is using to assess software development status. During the course of our review, we examined other program documents, including program plans; quarterly program status review reports; assessments of SLS preliminary and critical design reviews by the NASA Standing Review Board, which reviewed the program’s status at those reviews independently of the program; and an assessment of the flight software development by a NASA Independent Verification and Validation team, which reviewed software development status independently of the SLS program. We met with SLS program, element-level, and flight software officials at Marshall Space Flight Center in Huntsville, Ala.; representatives from the core stage contractor, Boeing, in Huntsville, Ala.; and officials from the Standing Review Board and the Independent Verification and Validation teams, which are composed of members from various NASA locations.
To assess the extent to which the Exploration Ground Systems (EGS) program has made progress in completing modifications to key components and ground support equipment at Kennedy Space Center, we identified EGS’s major components by reviewing program plans, critical design review documents, quarterly program status review documents, and budget materials. We identified the Vehicle Assembly Building, Mobile Launcher, and software as key construction and development efforts for our review because they are among the top program risks or the most expensive EGS projects. We observed EGS components during a site visit to Kennedy Space Center and discussed modification of the components with NASA officials. To evaluate the progress made in preparing these components and software to support the EM-1 test flight, we reviewed program plans and compared them to program status to assess whether EGS components and software were progressing as expected; critical design review documents to determine design maturity; quarterly program status reviews to identify risks; budget information to assess development costs; and contractor progress reports to identify any issues contractors faced that could impact cost and schedule. We also reviewed NASA’s Standing Review Board assessments from EGS’s preliminary and critical design reviews. In addition, we evaluated the program’s integrated master schedule against GAO’s best practices for scheduling in order to assess the validity of the EGS program’s critical path. Additionally, to determine the extent to which major ground system components at Kennedy Space Center directly support the SLS and Orion programs, we reviewed NASA budget and accounting data and interviewed agency officials. We conducted this performance audit from September 2015 to July 2016 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Cristina T. Chaplain (202) 512-4841 or [email protected]. In addition to the contact named above, Molly W. Traci (Assistant Director), Michael Armes, Matt Bader, Nabajyoti Barkakati, John Bauckman, Erin Cohen, Juana Collymore, Tana Davis, Juli Digate, Jennifer Echard, Laura Greifner, Jason Lee, Sylvia Schatz, Roxanna Sun, and John S. Warren, Jr. made key contributions to this report.
NASA is in the midst of developing systems needed to support deep-space exploration by humans. SLS will be NASA's first exploration-class launch vehicle in over 40 years to propel astronauts and cargo beyond low-Earth orbit. The EGS program is developing systems and infrastructure to support both SLS and the crew capsule, known as Orion. Together, the first planned SLS flight, the ground systems for that effort, and the first two Orion flights are estimated to cost almost $23 billion. In July 2015, GAO found that SLS's limited cost and schedule reserves were placing the program at increased risk of being unable to deliver the launch vehicle on time and within budget. The House Committee on Appropriations report accompanying H.R. 2578 included a provision for GAO to assess the acquisition progress of the SLS, EGS, and Orion programs. This report assesses the extent to which (1) SLS has made progress meeting cost and schedule commitments, and (2) EGS has made progress in completing modifications to key facilities and equipment. To do this work, GAO examined the results of design reviews, contractor data, and other relevant program documentation, and interviewed relevant officials. GAO plans to report separately on the Orion program in July 2016.

The National Aeronautics and Space Administration's (NASA) new launch vehicle, the Space Launch System (SLS), has resolved some technical issues and matured its design since GAO's July 2015 report, but pressure remains on the program's limited cost and schedule reserves. This pressure, in turn, threatens its committed November 2018 launch readiness goal. The program has made progress in resolving some technical issues—for example, a major alignment problem with the welding tool for the core stage (SLS's structural backbone and fuel tank) was corrected. Nonetheless, SLS development faces known risks moving forward. 
While such risks are not unusual for large-scale programs, the program's approach to managing them may increase pressure on the limited reserves. For example, the SLS program has not positioned itself well to provide accurate assessments of core stage progress—including forecasting impending schedule delays, cost overruns, and anticipated costs at completion—because at the time of our review it did not anticipate having the baseline to support full reporting on the core stage contract until summer 2016—some 4.5 years after NASA awarded the contract. Further, unforeseen technical challenges are likely to arise once the program reaches its next phase, final integration for SLS and integration of SLS with its related Orion and Exploration Ground Systems (EGS) human spaceflight programs. Any such unexpected challenges are likely to place further pressure on SLS cost and schedule reserves. The figure below shows key events in SLS and EGS launch readiness schedules.

The EGS program is making progress in modifying selected facilities and equipment to support SLS and Orion, but is encountering technical challenges that require time and money to address. Like SLS, the program has reduced cost and schedule reserves, which threatens its committed November 2018 launch readiness goal. Modifications to two main components—the Vehicle Assembly Building, where the SLS is assembled, and the Mobile Launcher, the vehicle used to bring SLS to the launch pad—have already cost more and taken longer than expected, as has development of EGS software. In June 2016, after all the systems necessary to support the first flight test are expected to have a stable design, NASA plans to start an integrated design review to demonstrate that the integrated systems will perform as expected. NASA guidance indicates that this type of review should also evaluate whether mission requirements are being met with acceptable risk within cost and schedule constraints. 
NASA officials stated that this review will have limited discussion of cost and schedule. Proceeding ahead without reassessing resources, however, could result in the EGS or SLS program exhausting limited resources to maintain pace toward an optimistic November 2018 launch readiness date. GAO recommends that NASA reevaluate cost and schedule reserves as part of its integrated design review for the first flight test in order to make the best use of all remaining cost and schedule reserves. NASA concurred with GAO's recommendation.
Temporary limited appointments are appropriate for meeting a range of staffing requirements when an agency expects there will be no permanent need for an employee. Temporary employees can work on a full-time, part-time, seasonal, or intermittent basis. Federal employers are prohibited from using temporary employees to avoid the costs of employee benefits or ceilings on permanent employment levels. Federal employers also cannot use temporary employment as a “tryout” or trial period prior to permanent employment. In addition, federal employers cannot circumvent the competitive examining process by appointing an individual on a temporary basis when that individual is not among the list of qualified applicants certified for permanent appointment. Finally, under OPM regulations, federal employers generally cannot use a temporary appointment to refill positions that were previously filled with such an appointment for an aggregate of 24 months over the preceding 3 years. OPM states that although agencies have the basic authority to make temporary limited appointments, agencies must document the reason for each such appointment in an employee’s official personnel folder. Agencies can use the appointing authority to: (1) fill a short-term position that is not expected to last longer than 1 year; (2) meet an employment need that is scheduled to be terminated within 24 months for such reasons as abolition, reorganization, contracting of the function, anticipated reduction in funding, or completion of a specific project or peak workload; or (3) fill positions temporarily when the positions are expected to be needed for the eventual placement of permanent employees who would otherwise be displaced from other parts of the organization. Various changes in regulation have been made over the years restricting the length of service of temporary limited employees. 
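The refill restriction described above amounts to simple bookkeeping arithmetic. A minimal sketch follows (the function name and inputs are hypothetical illustrations, not part of any actual OPM or agency system):

```python
# Illustrative sketch of the refill restriction described above: a position
# generally cannot be refilled by temporary appointment if it was previously
# filled that way for an aggregate of 24 months over the preceding 3 years.
# (Hypothetical function; not an actual OPM or agency system.)
def may_refill_with_temporary(temp_months_in_preceding_36):
    """True if the position may generally be refilled by temporary appointment."""
    return temp_months_in_preceding_36 < 24

print(may_refill_with_temporary(18))  # True: under the 24-month aggregate
print(may_refill_with_temporary(24))  # False: the 24-month aggregate is reached
```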
Beginning in 1938, temporary employees generally could not continue past 30 days unless OPM’s predecessor, the Civil Service Commission, approved the extension. In 1960, the time a temporary appointment could remain in effect was extended, so that appointments could be made for as long as 1 year. In 1984, OPM increased the length of time an appointment could remain in effect, so that agencies could extend a temporary employee’s service for a total of 4 years from the date of initial appointment without OPM’s approval. In response to this change in policy, MSPB reported in 1987 that although the expanded authority was a positive addition to the management tools available to federal managers, such flexibility could lead to poor management practices that result in continuing staffing needs being met with temporary employees because they were easier to hire administratively. Beginning in 1991, several hearings were held before subcommittees of the House Committee on Post Office and Civil Service to hear complaints from temporary employees. The hearings confirmed that federal agencies were retaining employees in an ongoing series of temporary appointments for long periods (8 to 10 years) without benefits or tenure. In a tragic example, a National Park Service employee, James A. Hudson, who had worked in an ongoing series of temporary limited appointments for 8 years, suffered a fatal heart attack and died on July 5, 1993, after working three shifts over a 2-day period during the July 4 weekend. Mr. Hudson, who was a decorated Vietnam War veteran, was a full-time temporary worker whose survivors were not entitled to a pension or government-subsidized health or life insurance benefits. In response to his death, the Congress, as part of the Department of the Interior and Related Agencies Appropriations Act of 1994, gave Mr. 
Hudson’s widow a lump-sum payment of $38,400, the amount his family would have received as life insurance benefits had he been a permanent federal employee. In 1994, responding to these hearings and information from other sources, OPM revised its regulations governing agencies’ use of temporary appointments by reducing the time limit from a maximum of 4 years to 2 years and made the requirements uniform for temporary appointments in both the competitive and excepted services. For an extension beyond 2 years, agency officials must request and obtain approval from OPM. In fiscal year 2000, 10 agencies—the departments of Agriculture, Commerce, Defense, HHS, the Interior, Justice, State, the Treasury, and VA, as well as FEMA—were the predominant users of temporary limited employees. These agencies also employed 84 percent of all federal employees in that year. Figure 1 shows the percentage of temporary limited employees hired in fiscal year 2000 by the 10 agencies and all other agencies. Over the 10-year period, these 10 agencies accounted for slightly over 90 percent of all temporary limited employees hired governmentwide. Table 1 shows the number and percentage of temporary limited hires that the 10 agencies used each year. The number of temporary limited employees hired governmentwide declined by about 47 percent from fiscal year 1991 to fiscal year 2000. By comparison, CPDF data show that for permanent federal employees, the decline was about 19 percent over the same 10 years. Except for small year-to-year increases in fiscal years 1995, 1997, and 1998, the hiring of temporary limited employees declined annually over these 10 years. Over the 10-year period, the majority of temporary limited employees were full-time hires in white-collar occupations. These employees received some benefits, including annual pay adjustments, overtime pay, and premium pay. Temporary limited employees can work a full-time, part-time, seasonal, or intermittent work schedule. 
From fiscal years 1991 through 2000, the majority of temporary limited employees were full-time hires. Figure 2 contains data on temporary limited employees hired governmentwide for fiscal years 1991 through 2000 by work schedule. Over the 10-year period, the majority of temporary limited employees hired were in white-collar occupations. White-collar occupations include professional, administrative, technical, and clerical occupations. The remaining occupations were blue collar, comprising the trades, crafts, and manual labor. Blue-collar occupations include foreman and supervisory positions entailing trade, craft, or laboring experience and knowledge as the paramount requirement. For fiscal year 2000, about 65 percent of temporary limited employees belonged to 10 occupational series: (1) the miscellaneous clerk and assistant series; (2) fabric and leather, instrument, machine tool, metalwork, audio visual/television/video, etc. series; (3) miscellaneous administrative and program series; (4) forestry technician series; (5) office automation clerical and assistant series; (6) general education and training series; (7) biological science and technician series; (8) educational and vocational training series; (9) education and training technician series; and (10) park ranger series. Figure 3 shows the percentage of temporary limited employees hired governmentwide by occupation category for the 10-year period. Most of the white-collar occupations were in the general schedule (GS) pay plan, which consists of 15 grades of annual rates of basic pay. Table 2 shows the numbers of white-collar GS temporary limited employees hired governmentwide by grade level. Temporary limited employees receive some rights and benefits but are not entitled to many of the rights and benefits available to permanent federal employees. 
Temporary limited employees, like permanent employees, receive full salary based on the grade and step of the position they occupy, annual pay adjustments, and overtime and premium pay. They also generally earn annual and sick leave if they work a full-time or part-time schedule. Part-time employees earn annual and sick leave on a prorated basis. Seasonal employees can work full time or part time. Because intermittent employees have no fixed work schedule, they do not earn annual and sick leave. Temporary limited employees are not eligible for military leave or family and medical leave. Retirement and life insurance benefits are not provided to temporary limited employees. These employees cannot participate in the Thrift Savings Plan. To be eligible for health insurance benefits, they must complete 1 year of current continuous employment, excluding any break in service of 5 days or less. Once eligible, they must pay the entire cost of the insurance premium. The government does not contribute toward the cost of health insurance for temporary limited employees as it does for permanent federal employees. Temporary limited employees in the GS pay plan also do not receive within-grade pay increases. However, some blue-collar temporary limited employees are eligible for within-grade pay increases. Temporary limited employees cannot be converted to permanent positions, and the time served in a temporary limited position is not creditable service for federal retirement. According to the results of our survey of the 10 agencies that were the predominant users of temporary limited employees, seasonal work was the primary reason agency officials gave for using such employees. Agency officials’ responses indicate that 37 percent of the temporary limited employees hired in fiscal year 2000 in their agencies were for seasonal work. Those officials’ responses indicate that 20 percent of temporary limited employees were hired in fiscal year 2000 because of peak workload. 
Overall, 18 percent of the temporary limited employees hired in fiscal year 2000 were students, including students in associate, graduate, or professional degree programs. Figure 4 shows the percentage of temporary limited employees hired in fiscal year 2000 for each reason provided. Figure 5 shows, by reason, the most prevalent occupations identified by our survey of the 10 agencies that were the predominant users of temporary limited employees. The most often reported occupational series for fiscal year 2000 was the office automation clerical and assistance series. We reviewed reports and studies published over the past 15 years that discussed aspects of temporary employment in the public and private sectors. Although studies indicate that some differences exist between the federal government’s use of temporary limited employees and that of the private sector, they also indicate that the reasons federal agencies and private sector firms use temporary employees are generally similar. For both sectors, the primary uses concern scheduling flexibility for staffing: employers could use temporary employees to fill in for absent regular employees, to fill seasonal needs, and to provide needed assistance at times of unexpected increases in business or to meet fluctuations in workload. Differences between the sectors include reasons that are acceptable uses of temporary employees in the private sector (e.g., to screen/recruit for filling permanent positions and to save on wage and benefit costs) but not allowed under the regulations governing temporary employees in the federal government. Other differences include reasons that are associated with aspects of federal hiring, such as temporarily employing candidates awaiting final security clearances and using temporary help in continuing positions that could not be filled permanently due to budget cuts. 
In June 1997, an employer study was published based on a survey designed to be representative of employment in private sector establishments with five or more employees in the United States. This study directly addressed why private sector employers used temporary employees and divided the reasons into two categories. The first category consisted of reasons concerning staffing levels, including filling vacancies until regular employees are hired; filling in for absent regular employees who are sick, on vacation, or on leave; filling seasonal needs; providing needed assistance during peak-time hours of the day or week; providing needed assistance at times of unexpected increases in business; staffing special projects; and providing needed assistance during hours not covered by full-time shifts. The second category consisted of varied reasons, including screening job candidates for regular jobs, saving on wage and/or benefit costs, providing needed assistance during company restructuring or merger, filling positions with temporary workers for more than a year, saving on training costs, gaining special expertise possessed by this type of worker, accommodating employees’ wishes for part-time hours, and hiring part-time workers because of an inability to find qualified full-time workers. No comparable study has been done recently for the federal sector. However, in 1987, MSPB issued a report on temporary appointments in the federal government in which MSPB included the responses of 21 departments and independent agencies that, among other things, included a discussion of the reasons agencies cited for using temporary appointments. According to the MSPB report, most agencies expressed their responses in general terms concerning positions not expected to last more than 1 year; seasonal positions; part-time and intermittent positions that are not clearly of a continuing nature; and continuing positions when temporarily vacated for periods of less than 1 year. 
In addition, some agencies provided specific examples, including hiring postgraduate students to work on research projects of limited duration; temporarily placing candidates in less sensitive positions while they wait for final security clearances; placing workers in continuing positions that could not be filled permanently due to budget cuts; filling shortage category and hard-to-fill positions pending certification; and preventing a loss of candidates to private industry in occupations like computer specialist by having the ability to hire such candidates in 2 or 3 weeks with conversion to permanent employment at a later date. According to the MSPB report, the last two reasons, in particular, indicated possible merit system concerns. Other studies that provide reasons agencies cited for using temporary employees concern specific agencies or agency components. These reasons included using temporary employees to meet fluctuations in workload, to address uncertain funding, and to screen candidates before hiring them permanently. According to OPM, the last two reasons are not appropriate uses of temporary limited employees in the federal government. As with its other regulations, OPM is responsible for ensuring that agencies adhere to its regulations concerning temporary employees. In 1994, OPM revised its regulations governing temporary appointments. OPM stated that its intention in revising the regulations was to ensure that temporary limited employees were used to meet truly short-term needs and were not serving for years under a series of temporary appointments without many of the benefits afforded other long-term employees. The 10 agencies that are the predominant users of temporary limited employees stated that they have been ensuring the need for individual temporary appointments and monitoring the time limits imposed on such appointments. 
According to OPM and agency officials, however, neither OPM nor any of the 10 agencies have been monitoring the total years of continuous temporary employment by these individuals. CPDF data, the best available information, show that of those temporary limited employees hired governmentwide in fiscal year 2000, about 16,000, or about 11 percent, had 5 or more years of federal service. However, limitations in the CPDF data prevent a determination of the number of individuals who spend long periods of continuous federal service in temporary limited positions without many of the benefits afforded other long-term employees. In 1994, OPM revised its regulations governing the use of temporary limited appointments to help ensure that such employees are used to meet truly short-term needs. Congressional hearings and information from other sources prompted OPM to act because some temporary limited employees were serving for years under a series of temporary limited appointments without many of the benefits afforded other long-term employees. For example, a 1992 OPM study reported that many nonpermanent seasonal employees in the land management agencies were “making a career” out of temporary work. The revised regulations reduced the time limit for individual temporary limited appointments from 4 to 2 continuous years of temporary service in a position. OPM’s revised regulations governing temporary appointments also generally prohibit agencies from refilling any position or its successor (i.e., a position that replaces and absorbs the original position) with a temporary appointment if that position had been filled by a temporary appointment in either the competitive or excepted service for a total of 2 years during the preceding 3 years. Positions involving seasonal work (i.e., work that involves annual recurring periods of less than 6 months) or intermittent work (i.e., work that involves sporadic or irregular intervals) are exceptions to such limits and restrictions. 
Under its regulations governing temporary limited appointments in the competitive service, OPM requires the supervisor of each position filled by temporary limited appointment to certify that the employment need is truly temporary and that the proposed appointment meets the regulatory time limits. The regulations do not require such certification for excepted service temporary limited appointments. We contacted the 10 agencies that we identified as the predominant users of temporary limited employees to identify steps that they were taking to ensure the appropriate use of such employees. Officials from the 10 agencies generally stated that they monitor the time limits on individual temporary limited appointments to ensure that such appointments adhere to the regulatory time limits and that they rely on the supervisors of such employees to ensure that the employment needs are truly temporary. According to an OPM official, holding agencies accountable for compliance with OPM’s temporary limited employment regulations is necessary for sound human resources administration. The official stated that OPM monitors agencies’ compliance with temporary limited employment requirements during the evaluation visits conducted by OPM’s Office of Merit Systems Oversight and Effectiveness (OMSOE), which assesses agencies’ effectiveness in ensuring compliance with personnel laws and regulations. According to OMSOE, each of the departmental agencies and independent agencies with larger numbers of employees is subject to review every 4 years, and each of the smaller independent agencies is reviewed every 5 years. An OPM official said that OMSOE routinely includes some individual temporary appointments in its periodic oversight reviews but generally does not look at the work history of temporary limited employees serving in those appointments. The official said that unless OMSOE knew in advance or saw a problem based on prior audit reports or other sources, it would not focus on temporary limited employees. 
Because of the typically limited nature of its reviews of temporary limited appointments, OMSOE’s reviews of agencies are unlikely to uncover instances of long-term temporary limited employment. In reviewing authorities used by an agency, OMSOE follows a standard audit procedure of selecting a judgmental sample of appointments for review. OMSOE uses “problem-oriented” sampling to select appointments. That means that if OMSOE officials have identified problems with a specific type of appointment through such sources as employee complaints and periodic employee attitude surveys, the audit team will include some of those appointments in the sample of appointments it reviews. If temporary limited appointments were suspected of being a problem, the review might involve more work in this area. For example, because of an indication of possible inappropriate use of the appointing authority for temporary limited employees at the Department of the Interior, OMSOE did an assessment of seasonal employment at the National Park Service. In 1998, after completing its review, OMSOE reported that a number of parks in the Department of the Interior’s National Park Service with seasons lasting longer than 6 months were improperly filling seasonal positions with temporary limited appointments. However, according to an OMSOE official, OMSOE normally looked at the appropriateness of individual appointments and other aspects of compliance with OPM regulations. There are several ways that temporary limited employees can work for more than the 2-year limit on individual temporary appointments. In its regulations, OPM recognizes circumstances when agencies may require the service of temporary limited employees beyond the allowed 2 years. To extend a temporary limited appointment in the same position beyond the maximum of 2 years, agency officials must request and obtain approval from OPM. 
According to OPM, in fiscal year 1998, it approved 110 requests covering 332 employees; in fiscal year 1999, 165 requests covering 426 employees; and in fiscal year 2000, 180 requests covering 418 employees. Moreover, temporary limited employees can serve for continuous years under different temporary appointments or in the same appointment without an extension from OPM. If it involves a break in service of 3 days or less, an agency can reappoint or convert a temporary limited employee from one temporary appointment to another temporary appointment many times over a period of years and not conflict with OPM’s regulations. In addition, after 3 days have elapsed after a temporary appointment ends, an agency can rehire the employee using a new temporary limited appointment as long as it does not involve the same basic duties, the same major subdivision of the agency, and the same local commuting area as the original appointment. However, the CPDF does not contain the necessary information to identify whether individuals receiving new temporary appointments were formerly temporary limited employees. As shown in table 3, CPDF data show that from fiscal years 1991 through 2000, between 30 and 46 percent of the temporary limited employees hired annually governmentwide were conversions within an agency. According to an OPM official, there is no limit on the number of times that an agency can convert a temporary limited employee to another temporary limited appointment within the same agency as long as two conditions are met. First, conversions to competitive temporary appointments must be accomplished using a competitive selection method or must be based on noncompetitive eligibility. Second, the regulatory provisions limiting appointments to successor positions must be met. Finally, agencies can also exceed the general time limits of some temporary limited employees. 
Under OPM regulations, an agency can appoint and extend an employee in a seasonal or intermittent temporary limited position without regard to the 2-year general time limit as long as the time the employee worked annually was less than 6 months, or 1,040 hours. It is also permissible for different subunits of an agency to hire the same person for more than one seasonal appointment lasting for up to 6 months. Thus, a seasonal temporary limited employee could work full time for two subunits in an agency under two different 6-month temporary limited appointments in the same year. CPDF data show that from fiscal years 1991 through 2000, between 25 and 36 percent of temporary limited employees hired had a seasonal or intermittent work schedule. These scenarios could, as OPM reported in 1992, result in nonpermanent employment—which is intended for short-term staffing needs—becoming quasipermanent. In that report, OPM focused on seasonal temporary employment at land management agencies and stated that many nonpermanent employees were making a career out of temporary work in these agencies. OPM reported that thousands of park rangers on temporary seasonal appointments work the summer season in one park and the winter season in another, working virtually full time on temporary appointments. An OPM official said that OPM’s oversight policy is to look at each park as a separate employer. This would permit such situations to continue today. In addition, OPM reported that more than 20 percent of the temporary workforce at land management agencies had held 10 or more temporary appointments or extensions to existing appointments. According to a September 1994 MSPB report, retaining temporary employees for extended periods (8 or 10 years or more) through the use of temporary appointments is directly contrary to the explicit intent of the temporary employment authority and denies employees involved many of the benefits available to other long-term employees. 
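The two-park scenario described above can be sketched with a small illustration (the data and function names are hypothetical, not an actual OPM or agency system). Because each park tracks only its own appointment against the less-than-1,040-hour seasonal threshold, an employee working two 6-month seasons at different parks can appear compliant at each employer while working nearly a full-time year in total:

```python
# Illustrative sketch of per-employer vs. combined tracking of seasonal
# hours against the 1,040-hour threshold in OPM's regulations.
# (Hypothetical data and function; not an actual OPM or agency system.)
SEASONAL_HOUR_LIMIT = 1040  # roughly 6 months of full-time work per year

def under_seasonal_limit(hours_by_appointment):
    """True if every individual appointment stays under the hour threshold."""
    return all(hours < SEASONAL_HOUR_LIMIT for hours in hours_by_appointment.values())

# A ranger works the summer season at one park and the winter season at
# another; each park sees an appointment just under the threshold.
appointments = {"Park A (summer)": 1032, "Park B (winter)": 1032}

print(under_seasonal_limit(appointments))  # True: each park sees compliance
print(sum(appointments.values()))          # 2064: nearly a full-time year (2,080 hours)
```

The point of the sketch is that per-employer bookkeeping, which OPM's oversight policy treats each park as, cannot surface the combined total.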
We analyzed CPDF data to estimate the extent to which individuals may be spending long periods in federal service as temporary limited employees without many of the benefits afforded other long-term employees. Our analysis showed that of the temporary limited employees hired in fiscal year 2000, 78 percent had federal service of 2 years or less. However, CPDF data also showed about 16,000, or about 11 percent, had 5 or more years of federal service. Table 4 shows a breakdown by type of temporary limited employee. The information in table 4, however, is imprecise because of limitations in the data available in the CPDF, specifically in service computation dates. A service computation date allows OPM and agencies to track an employee’s creditable years of federal service (civilian and military) toward retirement and other benefits. For each federal employee, this date is adjusted with every transfer, separation, or reinstatement over the course of the employee’s career. The purpose of having a service computation date is to provide, at any point in time, a reasonably accurate measure of an employee’s length of service. The service computation date includes permanent federal employment as well as temporary service, without regard to when such service was performed. For example, a current temporary limited employee’s service computation date indicating 10 years of service could include years of prior military service, permanent federal civilian service, and temporary limited employment over an extended period with substantial gaps between appointments. Although CPDF data are the best available information and show that some temporary limited employees had been working for long periods in federal service, it is not possible to determine how many temporary limited employees actually worked for continuous extended periods in temporary limited appointments. 
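The ambiguity in service computation dates can be illustrated with a minimal sketch (the records, dates, and function below are hypothetical): two very different careers produce the same total years of creditable service, so the total alone cannot reveal continuous temporary employment.

```python
from datetime import date

# Hypothetical service records; each tuple is (start, end, type of service).
# A service computation date reflects total creditable service only, so
# these two very different histories look identical by that measure.
career_a = [  # 10 continuous years of temporary limited appointments
    (date(1990, 1, 1), date(2000, 1, 1), "temporary"),
]
career_b = [  # mixed military, permanent, and temporary service with gaps
    (date(1980, 1, 1), date(1984, 1, 1), "military"),
    (date(1990, 1, 1), date(1994, 1, 1), "permanent"),
    (date(1998, 1, 1), date(2000, 1, 1), "temporary"),
]

def total_service_years(records):
    """Total creditable years, regardless of service type or gaps."""
    return sum((end - start).days for start, end, _ in records) / 365.25

# Both careers show roughly 10 years of total service...
print(round(total_service_years(career_a)), round(total_service_years(career_b)))
# ...but only career_a represents continuous temporary employment.
```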
Most of the 16,232 temporary limited employees hired in fiscal year 2000 who had 5 or more years in federal service were hired under seasonal appointments. As mentioned earlier, as long as employees hired under seasonal appointments work less than 1,040 hours per appointment, OPM’s regulations allow agencies to hire and extend such employees for years. Because agencies reported to us that they were monitoring only individual appointments, they would not necessarily know whether seasonal or other temporary limited employees might have been working for 5 or more years. They also might not know whether employees serving in seasonal appointments could have been hired for more than one seasonal appointment in any given year. As was the case with the park ranger example cited earlier, such employees could be working on two separate 6-month seasonal appointments—virtually full time on temporary appointments—without an agency or OPM being aware of it and without many of the benefits afforded other long-term employees. According to officials from OPM and the 10 agencies that we identified as the predominant users of temporary limited employees, neither OPM nor the agencies monitor the total length of service for temporary limited employees. According to OPM officials, identifying the total length of continuous service of temporary limited employees would require doing a “longitudinal,” or historical study tracing the service of individual employees back in time. From fiscal years 1991 through 2000, the majority of temporary limited employees were full-time hires in white-collar occupations, eligible to receive annual pay adjustments, overtime, and premium pay and generally earning annual and sick leave. These employees did not receive retirement and life insurance benefits but could buy health insurance after they worked for more than 1 continuous year if they were willing to pay the full cost of the insurance. 
In 1994, OPM revised its regulations governing temporary limited employees, generally creating a 2-year limit for each temporary appointment. OPM stated that its intention in revising the regulations was to help ensure that temporary limited employees would be used to meet truly short-term needs and not serve for years without many of the benefits afforded other long-term employees. However, the regulations do not preclude agencies from hiring temporary limited employees to work in a series of extensions, reappointments, and appointments. Thus, there seems to be an inconsistency between OPM’s stated intent and what is permissible under the provisions of its regulations. The regulations allow agencies to continue a pattern of repetitive temporary appointments that result in long-term temporary limited employees not receiving many of the benefits available to other long-term employees. CPDF data on the total years of service of temporary limited employees show that of such employees hired governmentwide in fiscal year 2000, about 16,000, or 11 percent, had 5 or more years of federal service. However, the limitations of these data, combined with the fact that neither OPM nor agencies monitor the total years of temporary employment by temporary limited employees, raise a concern that temporary limited employees could be serving for many years under a series of appointments. In addition, reviews of agencies by OPM’s Office of Merit Systems Oversight and Effectiveness (OMSOE) are unlikely to uncover incidents of long-term temporary limited employment because they typically look only at individual appointments but not at the work histories of the temporary limited employees serving in those appointments. The CPDF data available to OPM and agencies for determining the time federal employees spend in federal service include all federal service, both temporary and permanent federal employment, without regard to the total length of time over which such service was performed.
Neither OPM nor agencies collect the necessary information that would identify whether, in fact, temporary limited employees were working continuously for years. There is no way to tell from the CPDF whether employees might be serving in temporary limited appointments for continuous extended periods or how many may be receiving benefits, for example, as a result of retiring from prior federal service. Identifying the total length of continuous service of temporary limited employees would require a “longitudinal,” or historical, study tracing the service of individual employees back in time. We recommend that the director of OPM direct OMSOE to conduct a study to identify the number of temporary limited employees who have been working for continuous extended periods in temporary limited appointments and the reasons and conditions that permitted such cases to occur. The director should use the results of this study to modify the regulations governing temporary limited employees to address any problem areas found. In addition, the director should require OMSOE to include a sample of temporary limited employees and their work histories as part of its periodic oversight reviews of agencies. We sent a draft report to OPM in which we proposed that the director of OPM clarify regulations on temporary limited employees so that they address the amount of time such employees may serve in a series of temporary appointments and better track compliance with the revised regulations. We discussed this draft with OPM officials, who did not believe that enough information was available to determine the nature of any problems related to temporary limited employees working for extended periods and how to revise the regulations.
Therefore, we revised the draft report to recommend that the director of OPM direct the agency to conduct a study to identify the number of temporary limited employees who have been working for continuous extended periods in temporary limited appointments and the reasons and conditions that permitted such cases to occur. We also recommended that the director require OMSOE to include a sample of temporary limited employees and their work histories as part of its periodic oversight reviews of agencies. In a letter dated February 19, 2002 (see app. III), the director of OPM provided comments on the revised draft of this report. OPM agreed with both of these recommendations and said that they would be implemented. Our other recommendation was that the director should use the results of the recommended study to modify the regulations governing temporary limited employees to address any problem areas found. OPM did not specify precisely what it would do in response to this recommendation but said that any problems identified would be addressed through recommended or required corrective actions. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the chairman and ranking member, Senate Committee on Governmental Affairs; the chairman and ranking member, House Committee on Government Reform; and the director of the Office of Personnel Management. We will also send copies of this report to the heads of the 10 agencies that participated in our survey and other interested parties. We will also make copies available to others on request. Please contact me at (202) 512-6806 if you or your staff have questions. Key contributors to this report are listed in appendix IV.
The General Accounting Office has been asked by Senators Barbara Mikulski and Paul Sarbanes to obtain information on the federal government’s use of temporary employees. Specifically, the requestors are interested in the use of temporary limited employees, as defined in the Code of Federal Regulations (5 C.F.R. 316.401), excluding those with provisional appointments (5 C.F.R. 316.403). Federal agencies may also hire temporary limited employees under agency-specific hiring authority. Temporary limited employees are used to fill short-term needs (that is, the initial appointment may not exceed 1 year and generally may be extended up to a maximum of 1 additional year), although temporary limited appointments that involve intermittent and seasonal work may exceed the 2-year limit. We are sending this questionnaire to 10 federal agencies whose selected components are the major users of temporary limited employees. The information that we are requesting is not available from either the Office of Personnel Management (OPM) or its Central Personnel Data File (CPDF). Please provide this questionnaire to the agency component indicated below on this page and have the component complete the questionnaire. Then, return the completed questionnaires for all of the components in a single group to us, along with any additional requested information, within 15 working days of receipt to the address listed below. You may fax your response to us at (202) 512-4516, to the attention of Kiki Theodoropoulos, and follow up with copies of any additional information by mail or courier. The return address is:

U.S. General Accounting Office
Attention: Kiki Theodoropoulos
441 G Street, N.W., Room 2908
Washington, D.C. 20548

If you have any questions, please contact Kiki Theodoropoulos at (202) 512-4579 or at [email protected] or Molly Gleeson at (202) 512-4940 or at [email protected]. Thank you for your cooperation.
Please provide the following information:

Name of person completing questionnaire: __________________________________
Title of person completing questionnaire: __________________________________
Telephone number: (_____)_____________
Fax number: (_____)_____________
E-mail address: ____________________________

According to OPM’s Central Personnel Data File (CPDF), _____________ hired _________ temporary limited employees, as defined in 5 C.F.R. 316.401, during fiscal year 2000. Not included are temporary limited employees with provisional appointments under 5 C.F.R. 316.403. For each of the reasons listed below, please provide: the approximate number of temporary limited employees who were hired by your agency component during fiscal year 2000 for each of the reasons listed below and the six most prevalent (in terms of number hired) occupations of temporary limited employees hired for each reason. (Enter occupational series code and title. Enter a maximum of six occupations.)

According to the CPDF, ______________ hired the following numbers of temporary limited employees, as defined in 5 C.F.R. 316.401, for fiscal years 1995 through 2000. Temporary limited employees with provisional appointments under 5 C.F.R. 316.403 were excluded.

In addition to the temporary hires identified in question 1, did your agency hire under authorities other than 5 C.F.R. 316.401 temporary employees whose initial appointment may not exceed 1 year and generally may be extended up to a maximum of 1 additional year (excluding provisional appointments under 5 C.F.R. 316.403)? These other authorities could include excepted service and agency-specific authorities. (Check one.)

1. ❒ Yes ➜ Continue with questions 4 through 6.
2. ❒ No ➜ Questionnaire is now complete.

Please provide the following information for the temporary employees hired in fiscal year 2000 under authorities other than 5 C.F.R. 316.401: the approximate number of temporary employees who were hired by your agency during fiscal year 2000 for each of the reasons listed below and the six most prevalent (in terms of number hired) occupations of temporary limited employees hired for each reason. (Enter occupational series code and title. Enter a maximum of six occupations.)

Please provide the following information for the temporary employees hired in fiscal year 2000 under authorities other than 5 C.F.R. 316.401: the nature of action code, the legal authority code, the title of the legal authority, the number of employees hired under this legal authority, and the benefits (if any) that are available to these employees.

Benefits available (Check all that apply. If none, check “Other” and specify “none”.)
❒ Retirement ❒ Health Insurance ❒ Life insurance ❒ Other - specify below: _____________________________________
(This benefits checklist was repeated once for each legal authority reported.)

In addition to the authorities in question 5, does your agency have additional agency-specific regulations, instructions, or guidance for hiring and providing benefits for such temporary employees?

1. ❒ Yes ➜ Please return a copy of the instructions or guidance with the completed questionnaire.

Thank you very much for your assistance.
Senators Barbara A. Mikulski and Paul S. Sarbanes asked us to gather information on federal civilian temporary employees, specifically temporary limited employees. Our objectives were to (1) identify the federal agencies that are the predominant users of temporary limited employees and the job characteristics of such employees (including work schedules, occupations, grade levels, and benefits); (2) discuss the primary reasons agencies give for using temporary limited employees; and (3) compare the federal government’s use of temporary limited employees with that of the private sector. In addition, we agreed to identify steps OPM has taken to ensure the appropriate use of temporary limited employees and whether long-term use of temporary limited employees still exists. To identify the federal agencies that are the predominant users of temporary limited employees and the job characteristics of such employees, OPM initially provided us with summary data listing temporary limited employment by agency on a quarterly basis from March 1999 through March 2000. On the basis of OPM’s list, we identified agencies as predominant users of temporary limited employees if they had 1,000 or more temporary limited employees on-board as of March 30, 2000. Ten agencies met our criterion for being predominant users: the departments of Agriculture, Commerce, Defense, HHS, the Interior, Justice, State, the Treasury, and VA, as well as FEMA. These 10 agencies accounted for 94 percent of the executive branch’s temporary limited workforce on-board as of March 30, 2000, according to data provided by an OPM official. We then reviewed temporary limited employment data contained in OPM’s CPDF, a database that contains personnel data for most of the executive branch agencies, including all of the cabinet departments, independent agencies, commissions, and councils. To analyze CPDF data, we used an approach that an OPM official said would extract data from the CPDF on temporary limited employees.
During our analysis, we found that OPM’s approach extracted data on other types of temporary employees (who can be appointed for more than 1 year and are entitled to the same benefits as permanent employees) as well as temporary limited employees. The OPM official later confirmed that OPM’s approach captured other types of temporary employees. Because OPM’s approach captured more than just temporary limited employee data, we had to use another approach and criteria to select data from the CPDF on competitive service temporary limited employees and excepted service employees who meet the temporary limited criteria. As there is no code in the CPDF to identify which current federal employees are temporary limited employees, we reviewed temporary limited appointments and conversions, which are identifiable. For the competitive service, we reviewed the nature of action codes (NOAC) and legal authorities for temporary limited appointments defined in OPM’s Guide to Processing Personnel Actions and were able to clearly identify the applicable NOACs and legal authorities for these employees. We checked these codes and authorities in our later discussions with OPM and agency officials and agencies’ responses to our questionnaires. For the excepted service employees who met the temporary limited criteria, we identified the most likely NOACs and legal authorities from information we obtained from (1) our contacts with OPM and agency officials and (2) agency responses to our questionnaire in which we asked the agencies to provide us with NOACs and legal authorities for excepted service temporary limited employees for fiscal year 2000. We did not verify the reliability of the nature of action and legal authority data in the CPDF used to identify temporary limited employees. For the excepted service, we used NOACs and legal authorities reported to us in the questionnaires except where they appeared to be in error. This occurred in a very few instances. 
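The record-selection logic described above, keeping only appointment and conversion actions whose nature of action code (NOAC) and legal authority match the identified temporary limited codes and that carry not-to-exceed (NTE) dates, can be sketched as follows. The field names and code values here (such as "T-APPT" and "TMP-AUTH") are hypothetical placeholders, not actual CPDF codes.

```python
# Sketch of the CPDF record-selection logic described above. All field
# names and code values are hypothetical placeholders, not real NOACs or
# legal authorities.
TEMP_NOACS = {"T-APPT", "T-CONV"}     # appointment and conversion actions
TEMP_AUTHORITIES = {"TMP-AUTH"}       # identified temporary authorities

def is_temporary_limited_hire(record):
    """Keep only actions matching an identified NOAC and legal authority
    that also carry a not-to-exceed (NTE) date."""
    return (record["noac"] in TEMP_NOACS
            and record["authority"] in TEMP_AUTHORITIES
            and record["nte_date"] is not None)

records = [
    {"noac": "T-APPT", "authority": "TMP-AUTH", "nte_date": "2001-09-30"},
    {"noac": "P-APPT", "authority": "PERM", "nte_date": None},  # permanent
]
print(sum(is_temporary_limited_hire(r) for r in records))  # 1
```
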
We also reviewed OPM’s Guide to Processing Personnel Actions to identify any additional NOACs or legal authorities to include. We identified no other NOACs. We included only those legal authorities with not-to-exceed (NTE) dates, and we excluded those legal authorities where we could not determine if they were for permanent or temporary appointments and the authorities were not listed in the questionnaire responses. We analyzed the temporary limited employment data contained in the CPDF from fiscal years 1991 through 2000 and included those employees hired throughout the year. We defined hires to include appointments (i.e., when the person is not already an employee of an agency) and conversions (i.e., appointments when a person is already employed by an agency in a different position or under a different hiring authority). We did not analyze employees on-board as of a specific date (e.g., September 30) because such employees may work for short periods of time, and the end of the fiscal year would only capture a moment in time, according to agency human resources officials we interviewed. To identify the job characteristics of temporary limited employees governmentwide, we reviewed data available in the CPDF on work schedules and grade levels for fiscal years 1991 through 2000 and occupations for fiscal year 2000. To identify the benefits available to these employees, we interviewed OPM officials and reviewed studies from OPM and MSPB and applicable laws and regulations. To identify the primary reasons agencies give for using temporary limited employees, we designed a questionnaire, pretested it with 2 agencies, and then sent it to the 10 agencies that we identified as the predominant users of such employees based on data provided by an OPM official. (See app. I for a copy of the questionnaire.) In designing the questionnaire, we discussed the questionnaire contents with OPM officials and reviewed reports and studies on temporary employees in the federal government.
For 7 of the 10 agencies, we asked their five components that were the largest users of temporary limited employees to respond to the questionnaire. For the remaining three agencies (with Defense responding through its four components), the questionnaire responses covered the entire agency. Table 5 lists the 41 agencies and components to which we sent the questionnaire. During the pretests, agency human resource officials told us that they could not provide the information we were requesting on an agencywide basis. We identified the components that were the largest users of temporary limited employees based on information provided by OPM officials and CPDF data as of September 30, 1999. We received completed questionnaires from all 41 agencies and components. The information in this review applies only to those agencies and agency components to which we sent questionnaires. For three agencies (the departments of Defense and State, and FEMA), the information applies to the entire agency. For seven agencies, their five components comprised 73 to 98 percent of total temporary limited employees hired in fiscal year 2000. We used this information to represent the 10 agencies surveyed, but the information cannot be projected to these seven agencies or governmentwide. We did not report on the reasons for increases, decreases, or fluctuations in temporary limited employees hired from fiscal years 1995 through 2000 because the reasons were so varied that an analysis would not be meaningful. Only one agency and one agency component reported having additional agency-specific instructions for their excepted service temporary limited employees. We did not verify the accuracy of the data provided by the agencies.
To ensure that we identified all relevant studies, we also contacted OPM, MSPB, and CRS officials, because their agencies had previously conducted studies concerning temporary limited employees. We analyzed the reasons cited in the studies as to why employers use temporary employees and compared similarities and differences for both sectors. To identify steps OPM has taken to ensure the appropriate use of temporary limited employees, we analyzed CPDF data, reviewed OPM and MSPB studies on temporary employment, interviewed OPM officials, and reviewed current and prior OPM regulations and guidance on temporary employees. We also asked the 10 agencies that we identified as being predominant users of temporary limited employees about the steps they were taking to ensure the appropriate use of such employees. For 8 of the 10 agencies, we asked the component that was the largest user of temporary limited employees in fiscal year 2000 to respond to our inquiries. The eight agencies’ components were the Forest Service, Bureau of Census, Defense components other than the military services, National Institutes of Health, National Park Service, Executive Office for U.S. Attorneys, Internal Revenue Service, and Veterans Health Administration. The two agencies were the Department of State and FEMA. All responded to our inquiries. To identify whether long-term use of temporary limited employees still exists, we first had to determine how many employees appointed to temporary limited positions had more than 2 years of prior creditable service; to do so, we used the service computation dates in the CPDF. We subtracted military service from the total creditable service. 
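The adjustment described in the last sentence above can be sketched in a few lines: civilian creditable service equals total creditable service (as implied by the service computation date) minus military service. The figures below are hypothetical.

```python
# A minimal sketch of the adjustment described above: civilian creditable
# service equals total creditable service (implied by the service
# computation date) minus military service. All figures are hypothetical.
def civilian_service_years(total_creditable_years, military_years):
    # Never report negative service if the inputs are inconsistent.
    return max(total_creditable_years - military_years, 0.0)

print(civilian_service_years(10.0, 4.0))  # 6.0
```
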
Because creditable civilian service includes all prior civilian federal service, including any permanent federal employment, and because there may be gaps between periods of federal service, creditable service as reflected in the service computation date for temporary limited employees cannot by itself be used to identify continuous years of service under a series of appointments. We did our work in Washington, D.C., from October 2000 through February 2002, in accordance with generally accepted government auditing standards. In addition to the individual named above, Richard W. Caradine, Ronald J. Cormier, Thomas G. Dowdal, V. Bruce Goddard, Robert J. Heitzman, Stuart M. Kaufman, Michael J. O’Donnell, Molly K. Gleeson, Rebecca Shea, Kiki Theodoropoulos, and Gregory H. Wilmoth made key contributions to this report.
In the early 1990s, concerns arose that federal agencies were retaining employees in an ongoing series of temporary appointments without benefits or tenure. For fiscal years 1991 through 2000, 10 agencies accounted for 90 percent of all temporary limited employees hired governmentwide. During this period, the number of temporary limited employees hired governmentwide declined by 47 percent--from 282,135 in fiscal year 1991 to 150,395 in fiscal year 2000. Most temporary limited employees were full-time hires in white-collar jobs who received some benefits, including annual pay adjustments and premium pay. A survey done at the 10 agencies indicated that seasonal work was the primary reason for using such employees, followed by peak workloads. The office automation clerical and assistance series was the most often reported occupational series for fiscal year 2000. Recent studies suggest that federal agencies and private sector firms use temporary employees for similar reasons--often staffing flexibility. Because temporary limited employees were serving for years under temporary appointments without the benefits afforded other long-term employees, the Office of Personnel Management (OPM) revised its regulations in 1994 to ensure that temporary employees were "used to meet truly short-term needs." The revised regulations created a two-year limit for individual temporary appointments in both the competitive and excepted service. OPM officials said that the Office of Merit Systems Oversight and Effectiveness, when assessing agencies' compliance with personnel laws and regulations, routinely included some individual temporary appointments in its periodic oversight reviews, but generally did not look at the work history of temporary limited employees in those appointments. OPM data show that many temporary limited employees hired in fiscal year 2000 had worked for the federal government for at least five years.
According to the American Financial Services Association (AFSA), some of its members have been issuing live loan checks since the 1980s. Live loan checks are delivered in the mail and are preapproved offers of credit. Consumers are selected to receive the loan offers if they meet certain credit criteria. These preapproved offers of credit are based, in part, on a consumer’s credit score. Credit bureaus develop scores by assessing various types of information collected from a large pool of borrowers, including borrowers with good payment histories and others with poor payment histories, to estimate the credit risk associated with different types of loans. Credit scoring systems use statistical analysis to identify and weigh the characteristics of borrowers who have been most likely to make loan payments. For example, borrowers with little or no history of delinquent payments receive higher credit scores than borrowers with many delinquent payments. Most widely used credit scoring systems have a range of scores from 350 to 900. Borrowers with higher scores are considered more creditworthy because they are more likely to repay the loan on time and in full than are borrowers with lower credit scores. While comprehensive data on live loan checks are not available, data provided by one lender depict its loans as amortizing loans with interest rates below credit card rates. According to this lender, the recipients of its live loan checks had high credit scores and good credit histories. Chase and Fleet officials provided us with the materials they sent to the recipients of live loan checks. The materials include information disclosing that the check represents a loan and presenting the terms and conditions of the loan. Voluntary industry standards also call for such disclosure. Comprehensive industry data on the average live loan check and the borrower using this product are not available. 
Fleet and Chase officials, however, provided us with information on their live loan check profile. According to Fleet, borrowers receive live loan checks ranging from $3,000 to $10,000, based on the lender’s estimate of the recipient’s predicted ability to repay the loan. Prior to selection, recipients had demonstrated their ability to manage debt by having satisfactory payment histories. According to Fleet and Chase officials, interest rates on loans resulting from live loan checks have ranged from 12.9 percent to 15.9 percent. The repayment terms for these loans ranged from 48 to 60 months, and the loans were amortized. In addition, Fleet’s live loan checks generally were only valid for 6 weeks from the date of issuance; this provision is intended to lessen the risk of using outdated credit data as a basis for assessing a potential borrower’s creditworthiness. Borrowers’ credit scores were used as the primary factor in determining whether to offer a live loan check, and credit criteria were conservative in the lender’s view. Fleet officials told us that their borrowers had an average credit score of 730, with a minimum cut-off of 690. A credit score of 730, for example, implies odds of 125 to 1 against defaulting on an unsecured loan—that is, the estimated probability of default is less than 1 percent. The officials said that Fleet borrowers primarily resided within the established franchise area where the bank offers retail banking services. Borrowers had a median household income of $44,000. In addition to having to meet a minimum credit score, borrowers also were to meet minimum requirements set by proprietary risk and bankruptcy models, according to Fleet officials. The borrowers’ average debt utilization—that is, the proportion of available credit limits actually used in unsecured debt on current revolving credit sources—was 29 percent, which the bank believes estimates a borrower’s propensity to use credit.
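The 125-to-1 odds figure quoted above converts to a default probability by a standard calculation: odds of 125 to 1 against an event mean about 1 occurrence per 126 trials.

```python
# Converting the quoted odds into a probability: "odds of 125 to 1
# against defaulting" means about 1 default per 126 loans.
odds_against = 125
p_default = 1 / (odds_against + 1)
print(round(p_default * 100, 2))  # 0.79 -- i.e., under 1 percent
```
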
Also, borrowers had no prior record of bankruptcy, foreclosure, tax liens, or garnishments. According to AFSA, its members who offer live loan check programs reported that borrowers who are extended live loan check offers are generally between 35 and 50 years of age, with income levels between $35,000 and $55,000. Interviews with lenders, bank regulators, and the Federal Trade Commission (FTC), which is responsible for, among other things, fostering free and fair business competition and preventing monopolies and activities in restraint of trade, revealed few complaints that live loan check terms were not disclosed to borrowers. According to lenders, disclosure requirements are intended to protect the borrower and the lender. Office of the Comptroller of the Currency (OCC) officials consider live loan checks to be like any other small consumer loans in needing to meet Truth in Lending Act requirements ensuring that creditors disclose credit terms and the cost of credit as an annual percentage rate (APR). We spoke with lenders about the disclosure features of their live loan check programs. Fleet and Chase officials provided us with copies of their disclosure materials, which contained information that identified the loan check as a loan and clearly specified the interest rate and the terms and conditions of the loan. In the lenders’ solicitation materials, for example, there were several statements such as, “this is a check for a loan” or “loan check.” The interest rate, repayment terms, and other terms were displayed. The live loan checks were labeled “non-transferable” and “for deposit only” to help ensure that the customers would take the checks directly to their own banks for deposit. Chase officials told us that, under their policy, a customer is to be called by a Chase bank official when the check is presented by the depository bank to Chase for payment, to ensure that the intended person actually deposited the check.
AFSA issued voluntary standards for live loan checks on September 17, 1997, and expanded them on October 29, 1997. According to an AFSA official, the voluntary standards for live loan checks are intended to provide extra protection for consumers. Bank officials told us that they abide by the voluntary standards to avoid the risk of creating a negative image of the live loan check program. AFSA’s voluntary standards are as follows:

- Live loan checks sent by mail or other similar instruments offered by AFSA members are to be negotiable up to 6 months after receipt.
- A lender’s printed material accompanying the offer must advise the consumer to void and destroy the instrument if it is not going to be negotiated.
- Live loan checks sent by mail must include the following disclosure: “This is a solicitation for a loan—read the enclosed disclosures before signing and cashing this check.”
- Solicitations are to be mailed in envelopes with no indication that a negotiable instrument is inside.
- Envelopes are to be marked with instructions informing the Postal Service not to forward the item if the intended recipient is no longer at the address on the envelope.
- In the event a live loan check-by-mail offer is stolen or fraudulently cashed, the intended recipient is to have no liability for the loan obligation.
- In order to deter theft or forgery, a consumer is to be asked to complete a confirmation statement provided by the creditor.

Public and private sector officials told us that, while there was no comprehensive list of institutions with live loan check programs, several institutions were known to have offered such programs. Banks included Fleet in Boston, Massachusetts; Chase Manhattan Bank in New York, New York; Signet Bank in Richmond, Virginia; First USA in Wilmington, Delaware; and BancOne Corporation in Columbus, Ohio. Nonbanks included Capital One in Falls Church, Virginia, and Beneficial Corporation in Wilmington, Delaware.
First Chicago NBD had conducted test marketing of live loan checks; a First Chicago official told us that the bank discontinued the program because the level of loss in a pilot program was not acceptable. Regulators and industry officials we interviewed also told us that no comprehensive data show the volume of live loan check activity. These officials also believed that it would be difficult for nonregulators to compile such industrywide information because individual financial institutions might be reluctant to release their proprietary data. Although comprehensive industry data were not available, Fleet officials provided us with information on Fleet’s live loan check program history. (See table 1.) Although a similar number of checks were mailed in 1997 as in 1996, Fleet experienced far fewer acceptances in 1997 compared with 1996. Fleet officials said that the decline in acceptances occurred because in 1997 the potential borrowers were primarily non-Fleet customers, who were less likely to recognize Fleet’s name. Public and private sector officials identified some benefits and risks associated with live loan checks for both borrowers and lenders. In general, the benefit for borrowers was the ease of obtaining the loan; the risks to a borrower were comparable to those for other unsecured loans. The Consumer Federation of America (CFA) told us that these loans could compound problems caused by high consumer debt. For lenders, the loans were often seen as profitable, with manageable risks. However, limited data exist on the losses associated with live loan checks. Fraud did not appear to be a widespread problem, although there was some concern among industry officials about how a potential borrower might be inconvenienced by fraud. 
First Chicago, however, discontinued making the loans because the losses during a pilot program were “not acceptable.” In the view of lenders, borrowers enjoyed benefits and risks comparable to those associated with conventionally marketed unsecured loans. Borrowers accepted unsecured live loan checks at interest rates identical to or lower than those the recipient would receive at a local loan office of the lender. These loans had predictable, fixed monthly repayment terms of 48 to 60 months. According to lenders, borrowers experience little risk beyond that normally associated with a loan because they are protected against all liability from fraud or misuse. Some public and private sector officials said that live loan checks could potentially increase the possibility of default and bankruptcy if the borrower misused credit by running up credit card balances. The executive director of CFA said that live loan checks would only compound the problems created by the abundance of unsecured, high-cost credit card debt. Two lenders, however, said that there was no evidence to show that borrowers would file for bankruptcy more quickly as a result of accepting live loan checks instead of using credit cards. To date, it does not appear that many potential borrowers have been exposed to the risk of fraudulently cashed loan checks. Lenders we spoke with told us that the bank does not hold a consumer responsible if the check mailed to that consumer is deposited or forged by another individual. For example, Chase officials told us that, in the event that a live loan check were stolen, the intended recipient would not be charged if he or she signed an affidavit stating that the check had not been cashed by him or her. AFSA officials said that state and federal laws shield consumers from liability related to live loan checks and that lenders’ credit selection practices help reduce the rate of fraud. 
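The loans described above are standard amortized installment loans with fixed monthly payments. As a minimal arithmetic sketch of what those terms imply (the $5,000 principal, 14.9 percent APR, and 60-month term are illustrative values drawn from the ranges reported in this report, not actual terms offered by any particular lender):

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortized-loan payment: P*r / (1 - (1+r)**-n),
    where r is the monthly periodic rate and n the number of payments."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Illustrative figures only: $5,000 at 14.9 percent APR over 60 months,
# within the loan-amount, rate, and term ranges reported in this report.
payment = monthly_payment(5_000, 0.149, 60)
total_paid = payment * 60
```

At these illustrative terms, the payment works out to roughly $119 a month, or about $7,100 repaid over the life of the loan.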
AFSA reported that the actual fraud on live loan checks has been extremely low, less than one-tenth of one percent of total mailings. AFSA believes that its voluntary standards ensure minimum inconvenience to the consumer in the event that a check is not cashed by the intended consumer. Public and private sector officials have not seen large levels of fraud involving live loan checks. OCC had no reported cases of fraudulent acts of cashing a live loan check. Federal Reserve officials said they do not believe that there is a significant problem with losses associated with live loan checks. Federal Reserve officials noted that the primary reason for the low rate of fraud is that rules governing check cashing practices act to deter fraud. The recipient’s rights in the case of a forged endorsement are generally governed by state law. Articles 3 and 4 of the Uniform Commercial Code have been adopted in almost every state and determine check negotiation procedures and liability for invalid checks. FTC and Federal Reserve officials said that they had not received many complaints about live loan checks that involved theft and fraud issues over a 2-year period. The executive director of CFA testified, however, that the consumer may experience considerable inconvenience if the live loan check is cashed by someone other than the intended recipient, and believed that a consumer should not experience any inconvenience if fraud occurs. Fleet and Chase officials told us that their live loan check programs met corporate profitability requirements and expanded their lines of credit and their loan business. According to bank officials, live loan check programs are attractive because they enable lenders to provide a broader range of consumer loan products. Lenders viewed live loan checks as a convenient means of delivering a fixed-rate, closed-end, unsecured loan product to a consumer. Fleet officials said that live loan checks were moderately profitable loans. 
They said that the results of these loans were provided monthly to senior management to assess the results against expectations. Chase officials said that a benefit of the bank’s loan check program was that the net interest margin for live loan checks was higher than that for mortgage lending. Chase officials told us that prepayment rates for live loan checks are lower than those for mortgages. When interest rates decline, lower payments help cash flows remain more stable, which helps Chase to better manage its loan portfolio. In contrast, a decline in interest rates generally results in a rise in prepayments of some other loans. Chase officials also said that, by using good underwriting practices, they were able to manage credit risk. With regard to cases involving fraud, both lenders and bank regulatory agency officials said that lenders are to absorb all losses. With 155,000 loans accepted between 1995 and 1997, for example, Fleet reported 68 confirmed cases of fraud. Generally, in these cases, an unauthorized household member cashed the check. In order to prevent fraud, Fleet required that the borrower access funds only by depositing a check into a personal bank account. Once the live loan check was cleared, Fleet created an installment loan for the borrower. To reduce the risk of fraud, Fleet’s live loan check offers were only valid for 6 weeks. Federal bank regulators do not have any special supervisory programs for live loan checks. As noted earlier, OCC officials said that they review these loans in the same way as they do other small consumer loans. Fleet officials told us that monthly reports on these loans, which are distributed to senior Fleet management, are also provided to OCC, Fleet’s regulator. Federal Reserve examiners do not specifically monitor live loan check activities at Federal Reserve-regulated institutions. 
As part of their safety and soundness examinations, Federal Reserve examiners are to review risk models or other risk management systems to assess whether banks practice prudent behavior in their lending. Federal regulatory officials told us that industrywide live loan check activities are not tracked specifically. While Chase officials believed it was too soon to estimate their losses on live loan checks, we received data from Fleet concerning loss rates for its live loan check program. In 1996 and 1997, according to Fleet officials, the bank’s loss rates on live loan checks were lower than the credit card industry national averages. Using year-end balances, in 1996, Fleet said, it experienced a 1 percent loss rate compared to 5.96 percent in the credit card industry. In 1997, Fleet experienced a loss rate of 4.20 percent compared to 6.04 percent in the credit card industry. Fleet projected its 1998 live loan check losses to be similar to the credit card industry’s at 5 percent. Fleet officials said that they had set aside adequate reserves to cover anticipated losses. Fleet officials explained that the reason for the reported loss increase for live loan checks from 1996 to 1997 is that, typically, there are not many losses in the early years with a new loan product, and that Fleet was more cautious in marketing live loan checks. In the first year, 1995, Fleet marketed all of its live loan checks to its bank customers. According to Fleet officials, as the loans resulting from their live loan checks begin to mature, losses could increase. A First Chicago official told us that the bank discontinued its live loan check program because the level of loss was not acceptable. First Chicago conducted a live loan check pilot program in the summer of 1995 to determine whether offering immediate access to funds via checks would increase the likelihood that consumers would borrow money. The actual loss rate was not disclosed to us. 
To determine the characteristics of live loan checks, we gathered information on various aspects of individual loans, as well as on the average live loan check profile and the average borrower’s profile. To do this, we interviewed officials representing three live loan check lending institutions, an industry association, and a rating agency. We also reviewed publicly available information, including published articles that reported such characteristics. Although we did not independently verify these—or any—industry data, we corroborated evidence with other independent sources whenever possible. To identify the major organizations that mail live loan checks, we interviewed public and private sector officials. We selected officials to talk to, in part, on the basis of information obtained from other industry sources. For example, we talked with officials at Fleet and Chase. In addition, we spoke with First Chicago officials about whether a live loan check program existed at that institution because officials of other banks had informed us that this institution had cancelled its live loan check program. Moreover, we conducted a literature search and reviewed selected articles that reported on live loan check lenders and their activities. We also spoke to officials representing federal banking and thrift regulatory agencies. We obtained Fleet’s volume of live loan check lending in 1995, 1996, and 1997 and the expected volume in 1998 by interviewing Fleet officials; other lenders were not willing to provide volume data. We attempted to identify comprehensive, industrywide data for the volume of live loan checks by talking with officials representing an industry association, a consumer advocacy group, a rating agency, federal banking and thrift regulatory agencies, and two investment banks. In addition, we contacted officials representing another lender to corroborate information and to obtain additional volume data. 
To identify the benefits and risks of live loan checks for borrowers and lenders, we interviewed officials representing federal regulatory agencies and representatives from lending institutions, industry associations, and one rating agency. We reviewed articles and studies that reported benefits and risks associated with live loan check lending. We interviewed public and private sector officials, and reviewed selected federal and state regulations and laws, to gain an understanding of lender protection laws relevant to live loan checks. We also spoke with banking officials about losses associated with live loan checks. As agreed with your office, unless you announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Ranking Minority Member of your Subcommittee, the Chairmen and Ranking Minority Members of other congressional committees with jurisdiction over financial issues, the Chairman of the Board of Governors of the Federal Reserve System, the Comptroller of the Currency, the Director of the Office of Thrift Supervision, and other interested parties. We will also make copies available to others upon request. This report was prepared under the direction of James M. McDermott, Assistant Director, Financial Institutions and Markets Issues. Major contributors include Edwin J. Lane, Evaluator-in-Charge; Mitchell B. Rachlis, Senior Economist; and Becky K. Kennedy, Senior Evaluator. If you have any questions about this report, please call me on (202) 512-8678. Susan S. Westin Associate Director, Financial Institutions and Markets Issues The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on live loan checks, focusing on the: (1) characteristics of live loan checks and the major organizations that provide unsolicited loan checks; (2) volume of live loan checks in 1995, 1996, and 1997 and the expected volume in 1998; and (3) benefits and risks of live loan checks for the borrowers and lenders. GAO noted that: (1) once cashed, live loan checks result in unsecured consumer loans; (2) bank officials GAO interviewed told it that live loan checks are aimed at the most creditworthy customers--that is, those least likely to be delinquent or in default in making loan payments; (3) according to bank officials, such loans are made at interest rates ranging from 12.9 percent to 15.9 percent, compared to an average 16 percent for credit cards; (4) Fleet Bank officials told GAO that it has sent potential borrowers live loan checks ranging from $3,000 to $10,000 based on its estimate of the borrower's ability to repay the loan; (5) the repayment terms for these loans ranged from 48 months to 60 months, and the loans were amortized; (6) Fleet officials stated that borrowers generally have used the loan amounts for expenses such as home improvements, debt consolidation, and school expenses; (7) according to bank officials GAO interviewed, at least eight financial institutions have offered live loan checks; (8) of these eight financial institutions, six were banks: Chase Manhattan, Fleet, First USA Bank, Signet Bank, BancOne Corporation, and First Chicago NBD; (9) two were nonbanks: Capital One and Beneficial Corporation; (10) First Chicago stopped offering these loans after suffering a level of losses that it considered not acceptable during a pilot program; (11) public- and private-sector officials told GAO that comprehensive data on the volume of data were not available; (12) Fleet provided GAO with quantitative data on its live loan check program; (13) between 1995 and 1997, Fleet mailed 4.35 million 
live loan checks; (14) of these, approximately 155,000 borrowers cashed the checks and accepted the loans; (15) Fleet made over $680 million in loans through this program; (16) Fleet officials told GAO that it experienced 68 confirmed cases of fraud, which generally involved someone other than the intended recipient cashing the check; (17) public- and private-sector officials identified benefits and risks associated with live loan checks; (18) borrowers benefit from live loan checks because these checks meet their needs for immediate access to funds at interest rates competitive with those offered by credit cards; (19) risks to the borrowers include the potential for these loans to compound problems associated with high levels of consumer borrowing; and (20) Fleet and Chase informed GAO that, while loans initiated from cashing live loan checks were a small percentage of their bank assets, the programs thus far have been profitable, with manageable risks.
According to a 2010 Interior study, 97 percent of oil and gas production in federal waters occurs along the U.S. outer continental shelf of the Gulf of Mexico. The outer continental shelf is the submerged lands outside the territorial jurisdiction of all 50 states but within U.S. jurisdiction and control. The outer continental shelf contains an estimated 85 billion barrels of oil, and over half of this oil is located in the Gulf of Mexico. Significant reserves also exist in the outer continental shelf off Alaska. Interior is responsible for the oversight of oil and gas activities on the U.S. outer continental shelf, which includes submerged lands in federal waters off the coast of Alaska, in the Gulf of Mexico, and off the Atlantic and Pacific coasts. As part of its responsibilities, Interior leases blocks of land in the outer continental shelf for mineral development, including oil and gas exploration and production. The lease holder may operate the well or may hire other companies to perform drilling operations and other related services. Operators submit a series of documents to Interior for approval to drill, including an application for permit to drill and an oil spill response plan. In October 2010, Interior promulgated certain new requirements for the application for permit to drill process designed to prevent a blowout including, among other things, requiring independent third-party verification that the subsea blowout preventer is compatible with the specific well location and well design. The oil spill response plan is to include an operator’s proposed methods for ensuring that oil spill containment and recovery equipment and response personnel are mobilized and deployed in the event of a spill. This plan is to be implemented immediately following a spill. 
The plan is also to include an inventory of spill response resources such as materials and supplies, services, equipment, and response vessels available locally and regionally, as well as a description of the operator’s procedures for conducting monthly inspections and necessary maintenance of recovery equipment. As part of its oversight responsibilities, Interior is required to conduct scheduled and unscheduled inspections of offshore facilities, such as drilling rigs and production platforms. Equipment scheduled for inspections includes equipment designed to prevent or alleviate blowouts, fires, spillages, or other major accidents. Also under its oversight responsibilities, Interior issues guidance called a Notice to Lessees and Operators to clarify, supplement, or provide more detail about certain requirements, including requirements for applications for permit to drill. In response to the Deepwater Horizon incident, Interior issued a number of these notices, which, among other things, notified operators that Interior would be evaluating whether they had submitted adequate information about their well containment capabilities with their oil spill response plans. Specifically, in a November 2010 notice, Interior informed operators that it would evaluate whether operators could demonstrate that they had access to and could deploy well containment resources that would be adequate to promptly respond to a blowout or other loss of well control. This notice applies only to operators conducting operations using subsea blowout preventers, which are devices placed on wells to help maintain control over pressures in the well, or blowout preventers on floating facilities. Operators provide information on their well containment capabilities to Interior in a collection of documents that compose a well containment plan. 
According to Interior officials, all approved applications for permit to drill subject to the November 2010 notice have included a well containment plan. For additional information on the types of information operators provide in their well containment plans, see appendix I. There are three phases to produce oil or gas from a subsea well: drilling, completion, and production. During the drilling phase, operators drill a hole, called the wellbore, from the seafloor down to the reservoir of oil or gas. Early in this phase, a blowout preventer is placed on top of the wellhead, which, in turn, is installed on top of the wellbore to provide an interface between the wellbore and other equipment. A large-diameter pipe called the riser connects the drilling rig to the blowout preventer, and the drill pipe, drill bit, drilling mud, and casing are routed down to the well through the riser and blowout preventer. Industry officials we spoke with said that during the drilling phase, operators must constantly balance the pressure of the drilling mud inside the wellbore with the pressure from the surrounding formation thousands of feet below the seafloor. Interior officials explained that during this phase, operators may encounter a number of unknown well conditions, which if not controlled or corrected could pose a risk of a blowout. During a blowout, operators close the valves in the blowout preventer in an attempt to seal the wellbore and prevent oil and gas from escaping to the surface. Figure 1 illustrates the drilling phase. In the second phase, known as completion, the operator opens the wellbore to allow the flow of oil and gas from the reservoir, and installs equipment at the top of the wellbore to control and collect the oil and gas. The third phase is production, the extraction of oil or gas from the well. The difficulty of the drilling process can vary depending on the depth of the seafloor as well as the depth of the reservoir. 
According to a 2010 Interior study, the majority of oil production in the Gulf of Mexico occurs in deep water, which the study defined as 1,000 feet or more below sea level. (According to a 2011 report by the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, companies and governmental organizations have adopted definitions for deepwater ranging from 600 to 1,500 feet.) Deepwater wells can produce large quantities of oil and gas compared with wells in shallower water, in part because high reservoir pressures contribute to well productivity. These high pressures also make drilling deepwater wells significantly more dangerous than drilling shallow wells because, among other things, the increased pressure can exacerbate the effects of a blowout and make a well containment response more challenging. In the case of the well that the Deepwater Horizon was drilling, the seafloor was nearly 5,000 feet below sea level, and the operator, BP, drilled an additional 13,000 feet below the seafloor to reach the reservoir. A capping stack is a device that is specifically designed to cap a well after a blowout. Whereas a blowout preventer is designed to manage drilling operations and prevent a blowout, a capping stack is designed to be deployed after a subsea blowout has already occurred. At the time of the Deepwater Horizon incident, there were few capping stacks in existence, and capabilities to support subsea well containment were limited. Since the Deepwater Horizon incident, the oil and gas industry has improved its capabilities to respond to a subsea well blowout in the Gulf of Mexico. In particular, operators have formed two new organizations that are expected to offer improved well containment capabilities, including more effective equipment and services, and expertise to member operators in the event of a well blowout. The subsea well containment capabilities available for the Gulf of Mexico consist primarily of existing technologies that have been modified to support well containment. 
Following the Deepwater Horizon incident, two not-for-profit organizations of oil and gas companies—the Marine Well Containment Company (MWCC) and the Helix Well Containment Group (HWCG)—formed to provide subsea well containment capabilities and support to operators in the Gulf of Mexico that enter into contracts with them. Once under contract and in the event of a spill, each of these well containment organizations is to provide certain containment equipment and services— capping stacks, vessels, and other resources necessary to respond to a subsea blowout—that operators can customize to their well containment needs. Equipment and services provided by these well containment organizations were developed in consultation with Interior and have been a key part of the well containment plans that operators submit to Interior, according to Interior officials. All of the operators subject to Interior’s November 2010 notice that have received permission to drill in the Gulf of Mexico since the moratorium was lifted have contracted with one or both of these well containment organizations to provide certain well containment equipment and resources in the event of a subsea blowout, according to these officials. In general, MWCC members include some of the largest operators with oil and gas exploration and production activities in the Gulf of Mexico, such as ExxonMobil, ConocoPhillips, Chevron, and Shell, and HWCG members include operators of various sizes. Some operators are members of both groups. More specifically: MWCC operates as an independent, stand-alone company where each of its 10 members has equal ownership and voting rights, and according to MWCC representatives, its members drilled approximately 70 percent of deepwater wells drilled in the U.S. Gulf of Mexico from 2007 through 2009. HWCG is a consortium of 24 operators, which, according to HWCG representatives, represent approximately 80 percent of the deepwater operators in the Gulf of Mexico. 
HWCG was created around the well containment capabilities offered by one deepwater services company whose equipment was used in the response to the Deepwater Horizon incident. According to representatives from both well containment organizations, members also commit to mutual aid agreements in which operators agree to provide equipment or other support to consortium members that experience a subsea well blowout. Representatives of the well containment organizations said that both HWCG and MWCC also offer services to nonmembers on a fee basis. Furthermore, representatives of the well containment organizations we spoke with generally agreed that should another incident like Deepwater Horizon occur, industry would mobilize to make available all of the equipment and services necessary to ensure a quick and effective response. In addition to equipment and services, both MWCC and HWCG provide overall plans that identify how the equipment and services are to be deployed and used; a schedule of activities to be followed; various contingencies for high-risk activities, such as straightening a bent wellhead that may have been damaged by a blowout; and names and contact information for technical experts who may be called on as required. According to Interior officials, operators would need to customize the well containment organizations’ overall plans to their specific well design. For example, the well pressure would determine the specifications of the capping stack, and the depth of the well would specify which ships have cables long enough to reach it. These customized plans comprise some of the documents in the well containment plans that operators submit to Interior. However, operators may need additional equipment and services that the well containment organizations do not provide. 
For example, the well containment organizations provide capping stacks for well containment, but in some, if not all, cases the operator must identify a separate service provider to transport the capping stack to the site and deploy it to the well. In their well containment plans, operators also identify other needed equipment and services that may not be provided by the well containment organization, including debris removal equipment and remotely operated underwater vehicles, which are controlled from surface vessels and used to, among other things, clear debris and apply chemicals called dispersants that are used to disperse leaked oil. The subsea well containment capabilities that MWCC and HWCG offer for the Gulf of Mexico consist primarily of established technologies that have been modified to support well containment. For example, according to industry representatives, capping stacks are devices similar to previously used blowout preventers and contain many of the same components. According to representatives from the two well containment organizations, the well containment capabilities (i.e., equipment, procedures, and processes that MWCC and HWCG would activate in the event of a subsea blowout in the Gulf of Mexico) incorporate established technologies commonly used for offshore well drilling. 
Well containment capabilities in the Gulf of Mexico include the following:

- containment equipment, including capping stacks used to shut in a well, and containment domes and top hats that are used to collect escaping oil and gas and flow them to the surface;
- subsea support systems, such as riser systems that direct captured oil and gas to surface vessels in the event that the well cannot be shut in completely;
- utility equipment, such as dispersant injection systems, hydrate inhibitor systems, hydraulic power systems, manifolds and connection systems, and remotely operated vehicles; and
- surface vessels, such as multipurpose containment response vessels that can be configured to conduct a variety of drilling or containment activities, production vessels that can process captured oil, storage tankers to transport the captured oil, and other support vessels used to distribute dispersants and control remotely operated underwater vehicles.

Figure 2 illustrates a subsea well containment response system. A key component of both MWCC and HWCG’s response capability is a capping stack. According to industry representatives, capping stacks are essentially lighter, specialized versions of blowout preventers that use similar components to stop or control the flow of oil and gas. The capping stacks built for MWCC and HWCG are designed to withstand the high pressures experienced in deepwater reservoirs—up to 15,000 pounds per square inch for the most recent capping stacks. These stacks can be deployed to 10,000 feet below sea level. Most capping stacks have multiple outlets that allow oil and gas to be routed to surface vessels and processed. Capping stacks are usually deployed on top of the blowout preventer but can also be installed on other points, including the wellhead. Industry representatives told us that fittings between connection points are generally standardized but that capping stacks can be modified with different fittings to allow for proper installation. 
Once a capping stack is installed, depending on the scenario, the containment plans typically call for slowly closing each of the outlets until the well is closed. In some cases, operators may also be able to deploy dispersants into or around the capping stack to help break up the oil (see fig. 3). Capping stacks are a primary component of HWCG and MWCC well containment capabilities, but Interior officials told us that the agency does not require operators to use any particular technology in the Gulf of Mexico. Instead, Interior expects operators to demonstrate that they have the capability to control a well with a capping stack or some other functionally equivalent technology, and according to Interior officials, capping stacks have demonstrated that capability. Industry representatives told us that capping stacks are tested periodically and physically located near staging points around the Gulf of Mexico, where they can be moved offshore rapidly. When not needed, they are stored onshore and not used for any other purpose. Representatives from both HWCG and MWCC told us that both of their organizations have multiple capping stacks ready for deployment. In addition to capping stacks, well containment capabilities rely on other equipment and services to transport the capping stack to the well site, assist with debris removal, and collect oil and gas from the capping stack. This other equipment includes vessels to collect and process oil and gas from the capping stack or other collection devices, vessels with lift and hoist capabilities to position and move the capping stack, control ships with electronics to monitor the pressures and status of the capping stack, and remotely operated vehicles that perform a variety of subsea operations. In their well containment plans, operators provide Interior with a list of this other equipment as well as how it could be utilized and positioned at the blowout location. 
Following the Deepwater Horizon incident, Interior issued new guidance that identified information operators are to provide to demonstrate well containment capability, but Interior has not fully documented its internal process for reviewing this information. Also, the well containment plans that operators submit to Interior as part of the permitting process identify equipment, such as a capping stack, that would be needed for well containment response, but Interior has not yet documented its process for ensuring that this equipment is regularly inspected and available. Finally, while Interior has conducted two unannounced spill drills that incorporated scenarios for well containment, it has not documented a time frame for incorporating these tests in the future.

Interior issued guidance following the Deepwater Horizon incident that identifies information that operators are to provide to demonstrate they can respond adequately and promptly to a blowout or other loss of well control, but the agency has not documented its process for reviewing the information it receives. A Notice to Lessees and Operators issued on November 8, 2010, after Deepwater Horizon, identifies specific information, including types of well containment equipment accessible to the operator in the event of a spill, that operators are to provide to Interior to ensure that operators' spill response plans are adequate. Interior issued subsequent supplemental guidance on the November 2010 notice on December 13, 2010, explaining that it would review this information as part of the application for permit to drill approval process. For example, operators are to provide information describing their plans to use capping stacks; containment domes; subsea utility equipment, including hydraulic power, hydrate control, and dispersant systems; riser systems; remotely operated underwater vehicles; and oil collection vessels.
Operators may satisfy these new information requirements by submitting a well containment plan as part of their oil spill response plans. Interior officials told us that they discuss their expectations for the contents of these plans with individual operators but noted that the agency has not finalized documentation of these expectations or completed the documentation of its internal process for reviewing these plans. Interior officials who review these plans have developed a one-page checklist outlining the types of information they review, but the checklist does not provide criteria for assessing the information. For example, the checklist asks "Does the plan adequately address debris removal?" but does not provide criteria for determining whether the information the operator included is adequate. These officials said they are in the process of documenting their review process and expect to have the documentation in place by spring 2012.

Under the Standards for Internal Control in the Federal Government, federal agencies are to employ control activities, such as clearly documenting internal control in management directives, administrative policies, or operating manuals, and this documentation is to be readily available for examination. Interior officials told us that the agency plans to transfer the responsibility for reviewing well containment plans to another office in 2012 and will document the review process before that time. In the meantime, Interior officials told us that they rely on the expertise and judgment of staff to perform these reviews and communicate their expectations to operators. Until the agency completes a documented review process, Interior cannot provide reasonable assurance that well containment plans will be reviewed consistently.
In addition to the new guidance issued since the Deepwater Horizon incident, as part of Interior’s review of well containment plans, Interior and operators use a new software tool to analyze a proposed well’s design and its ability to withstand increased pressures that result when an uncontrolled well is closed by a capping stack. In certain situations, capping the well shut could cause portions of the well to burst, potentially allowing oil and gas to flow up through the seabed and releasing oil and gas into the sea from outside the wellbore. This new software tool, called the well containment screening tool, helps Interior and operators evaluate whether a well could be closed using a capping stack and still maintain wellbore integrity. The development of this screening tool was initiated by Interior and completed with input from the oil industry. The screening tool analyzes potential well integrity and risk based on various factors including well design, geological characteristics, reservoir pressures, and wellbore fluid gradients. Interior provides the tool to operators and then reviews each operator’s analysis of expected wellbore integrity; following this review, Interior may advise operators to adjust screening tool parameters when appropriate. According to Interior officials, on at least 12 occasions, an operator strengthened its wellbore design based on the results of the screening tool. Industry representatives we met with also said that the screening tool was valuable for helping address the risks associated with a subsea blowout by requiring operators to document their well design decisions and have those decisions reviewed by Interior. According to Interior officials, in some cases, Interior may have access to a wider set of data on the geological characteristics of the area than the operator. In these cases, Interior can advise the operator on the need to modify its well design. 
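Interior's actual screening tool is not described in detail in this report, so the following is only an illustrative sketch of the general kind of shut-in check such a tool performs: estimating the pressure each casing section would see if a capping stack shut in the well, and flagging sections whose burst ratings would be exceeded. The physics is deliberately simplified to a single static fluid gradient, and every name, depth, rating, and safety factor below is a hypothetical value of our own, not Interior's.

```python
# Hypothetical, simplified wellbore-integrity screen. When a capping stack
# shuts in a flowing well, pressure at each depth rises toward the reservoir
# pressure minus the hydrostatic column of wellbore fluid above that depth.
# A section "fails" this toy screen if its burst rating, after applying a
# safety factor, is below the shut-in pressure it would experience.

def shut_in_pressure(reservoir_psi, fluid_gradient_psi_per_ft,
                     reservoir_depth_ft, depth_ft):
    """Static pressure (psi) at depth_ft after full shut-in."""
    return reservoir_psi - fluid_gradient_psi_per_ft * (reservoir_depth_ft - depth_ft)

def screen_well(reservoir_psi, fluid_gradient_psi_per_ft, reservoir_depth_ft,
                casing_sections, safety_factor=1.1):
    """casing_sections: list of (top_depth_ft, burst_rating_psi) tuples.

    Returns a list of (top_depth_ft, shut_in_psi, burst_rating_psi) for each
    section that would be overstressed; an empty list means the well passes
    this simplified screen and could be fully shut in.
    """
    failures = []
    for top_depth, burst_psi in casing_sections:
        p = shut_in_pressure(reservoir_psi, fluid_gradient_psi_per_ft,
                             reservoir_depth_ft, top_depth)
        if p * safety_factor > burst_psi:
            failures.append((top_depth, p, burst_psi))
    return failures

# Example: a 12,000 psi reservoir at 18,000 ft with a gas-cut column
# (0.1 psi/ft) and three illustrative casing sections.
sections = [(5_000, 12_500), (10_000, 11_000), (15_000, 13_500)]
weak_points = screen_well(12_000, 0.1, 18_000, sections)
```

In this toy run, the section topping out at 10,000 feet would see a shut-in pressure above its rated burst once the safety factor is applied, so a real screening process would flag it, much as Interior officials described operators strengthening their wellbore designs after running the actual tool; a failing result is also why the "flow and capture" alternative to full shut-in exists.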
As previously stated in this report, since the Deepwater Horizon incident, the well containment plans that operators submit to Interior as part of the permitting process identify equipment that the operator plans to use to contain subsea well blowouts. However, Interior does not have a fully documented process and associated schedule to ensure that the equipment is regularly inspected and available for deployment. Interior regulations that have been in place since before the Deepwater Horizon spill specify that operators are to submit an oil spill response plan that identifies procedures the operator is to follow in the event of a spill, including methods to ensure the availability of oil spill response equipment and an inventory of this equipment and the operator’s procedures for conducting monthly inspections. Interior is also required to conduct inspections of offshore facilities and response equipment. Interior has scheduled inspections of surface response equipment but has not scheduled regular inspections of well containment equipment. Interior officials we met with told us that they have observed officials from the well containment organizations conducting certain tests of all capping stacks identified in well containment plans approved for the Gulf of Mexico. These tests include pressure tests to ensure that the capping stack can withstand well pressures, and functional tests to ensure that components operate properly. However, Interior officials told us that the agency does not have a regular schedule for inspecting such equipment and does not specify what tests should be conducted to ensure that the equipment is in operational condition. The officials added that by June 2012 Interior plans to have a process that (1) establishes a schedule for testing equipment and (2) identifies the tests that will be conducted as part of the agency’s oversight of operator readiness to respond to a subsea event. 
While Interior does not have a documented process for monitoring the availability of equipment that operators identify in their well containment plans, this concern is somewhat mitigated by the number of vessels and capping stacks located in the Gulf of Mexico that could aid a well containment response in the event that dedicated equipment is unavailable. Interior officials told us that they expect well containment plans to list multiple replacement vessels and equipment to demonstrate this redundancy, and these officials believe this sufficiently mitigates the possibility that resources could be unavailable in the event of a subsea blowout. In addition, Interior relies on operators to inform it when well containment equipment is unavailable, and industry representatives told us that the two well containment organizations are to inform their members and Interior when critical equipment is out of service.

In addition, Interior has not determined the extent to which it will conduct drills to test operators' abilities to respond to a subsea well blowout. Interior's regulations provide for periodic unannounced drills to test the spill response preparedness of operators, but Interior has not set a time frame for incorporating well containment scenarios into these exercises that would test operators' abilities to implement their well containment plans. Interior conducts these drills to, among other things, test an operator's ability to notify the appropriate entities and personnel in the event of a spill, including federal regulatory agencies, affected state and local agencies, internal response coordinators, and response contractors, and to take appropriate action to implement the operator's response plan. If the decisions made during the drill do not align with the approved oil spill response plan, the drill provides an opportunity to determine what needs to change in the response process.
In September 2011, Interior conducted its first unannounced spill drill that included a subsea well containment scenario, and it held a second unannounced drill in December 2011. According to Interior officials, the agency plans to incorporate subsea well containment scenarios in certain future unannounced spill drills with operators. According to these officials, Interior staff have observed well containment exercises conducted by the two well containment organizations in the Gulf of Mexico. However, Interior has not tested most operators' ability to respond to a subsea blowout and has not established a time frame to incorporate these tests into unannounced spill drills. Until Interior sets a time frame for incorporating well containment scenarios into unannounced spill drills, there is limited assurance that operators are prepared to respond to a subsea blowout.

Subsea well containment capabilities similar to those industry offers for the Gulf of Mexico could generally be used in other federal waters, including the outer continental shelf off Alaska, and industry officials said that they are developing a well containment response capability for use in this region. Moreover, operators of subsea wells off the Alaskan coast are likely to face operating conditions that pose different environmental and logistical risks than those faced in the Gulf of Mexico and may require modified blowout response plans. According to industry representatives and Interior officials we spoke with, capping stacks and other equipment available to respond to blowouts in the Gulf of Mexico could be used in other federal waters. For example, because capping stacks are installed on top of the wellhead or blowout preventer, they are not affected by the condition of the seafloor, so they could be used in other regions.
Industry representatives explained that the connection points between subsea devices like wellheads, blowout preventers, and capping stacks are mostly standardized and that these connections can be exchanged on a capping stack to ensure a proper fit. Interior officials and industry representatives told us that capping stacks developed for use in the Arctic would not need to manage the same pressures as capping stacks developed for use in the Gulf of Mexico because reservoir pressures in the Gulf of Mexico are generally much higher.

For the past two decades, the majority of subsea oil and gas exploration and production in U.S. waters has occurred in the Gulf of Mexico; however, in 2010, Shell Oil submitted plans to Interior to drill in the waters north of Alaska as early as the summer of 2012. In August 2011, Interior conditionally gave approval to Shell Oil to drill exploratory wells along the north shore of Alaska in the Beaufort Sea, pending receipt and approval of Shell's well containment plans and other requirements. In December 2011, Interior conditionally gave approval to Shell to drill in the Chukchi Sea, again pending receipt and approval of Shell's well containment plans and other requirements. If Shell submits these materials to Interior and Interior approves them, Shell could begin drilling in these areas as early as the summer of 2012. Figure 4 illustrates the location of the Beaufort and Chukchi Seas relative to Alaska and the Arctic Circle.

According to Shell representatives, the company is still developing the capabilities that it will need to support well containment operations in the Beaufort and Chukchi Seas. These capabilities are to include a capping stack similar in design and functionality to capping stacks already inspected and approved for use in the Gulf of Mexico. Shell's capping stack has been specifically designed for use in Arctic waters and, according to Shell representatives, is expected to be completed by April 2012.
According to Shell representatives we spoke with, Shell is to have dedicated capping and containment capabilities at sea and ready for deployment. In the event of a subsea well blowout, Shell will deploy a capping stack as its primary response. The capping stack is to be housed on an icebreaking vessel supporting drilling operations in the Beaufort Sea, according to the Shell representatives. The icebreaking vessel is to have the lifting capability to deploy the stack onto an uncontrolled well. Shell representatives said that if a blowout occurred on a well in the Chukchi Sea, operations in the Beaufort Sea would be shut down and the icebreaking vessel with the capping stack and other supporting vessels would be deployed from the Beaufort Sea to the Chukchi Sea. Likewise, in the event of a well blowout in the Beaufort Sea, Shell would cease operations in the Chukchi Sea and send support vessels to assist operations in the Beaufort Sea.

Subsea drilling operations in Alaska will face operating conditions that greatly differ from those in the Gulf of Mexico and may pose unique risks. For example, the Beaufort and Chukchi Seas are inside the Arctic Circle, with cold and icy conditions for much of the year and with few daylight hours during the winter. Interior and Coast Guard officials said that a well containment response in Alaskan waters might face certain risks that could delay or impede a response to a blowout. For example, if a blowout were to occur at the end of the drilling season in late October, surface ice and temperatures could pose risks to a well containment response. Even with Shell's plans to have dedicated capping stack and well containment capabilities in the region to provide rapid response in the event of a blowout, these dedicated capabilities do not completely mitigate some of the environmental and logistical risks associated with the remoteness and environment of the region.

Environmental challenges include the following:

Surface ice.
According to Interior officials, Shell proposes to drill from July 15 through October 31, except for a period in late August to allow for whale hunting by the indigenous population. A regional drilling expert told us that if a blowout occurred late in the season, icy conditions in November and December could make well containment challenging. Shell plans to maintain an icebreaking vessel at each drilling site to conduct ice management operations, but these conditions could still pose a challenge to well containment response.

Ice scouring. In addition to ice that can accumulate on the surface of the ocean, floating ice in shallow waters can scrape along the seafloor. This has the potential to damage the wellhead and blowout preventer, as well as other well containment equipment on the seafloor. Shell representatives told us that Shell will place the wellhead and blowout preventer in a hole on the seafloor to prevent damage from ice scouring. However, this does not eliminate the possibility that the capping stack or other equipment placed on or above the seafloor, such as dispersant systems or risers, could be obstructed or damaged by floating ice.

Logistical challenges include the following:

Limited infrastructure. Shell officials told us that they will have self-sufficient, dedicated subsea well containment capabilities situated on vessels in the Arctic seas during drilling operations. Nonetheless, these officials told us that additional personnel would be needed to respond to a subsea well blowout. Moving personnel to the site could delay a response, since harbors, airstrips, and hotels necessary to support personnel are limited in number and size along Alaska's northern shore. The facilities are also generally much farther from the drilling sites than they are in the Gulf of Mexico, and harbors and airstrips have much less capacity to move and support response personnel.

Lack of redundant vessels and equipment.
According to Interior officials, because of the low rate of offshore production in the outer continental shelf off Alaska compared with the Gulf of Mexico, there is not an established industry in Alaska to manage subsea oil production or respond to a subsea blowout. Therefore, the availability of vessels and equipment to provide additional support to respond to a subsea well blowout may be limited. For example, we reported in October 2010 that U.S. Coast Guard infrastructure and assets for Arctic missions are limited, including by fuel capacity, distance to fuel sources, and crew rest requirements. Shell representatives told us that the company plans to have two concurrent drilling operations capable of providing mutual assistance, but there are few additional resources available in the region to respond in the event that Shell's capabilities are insufficient.

Because Interior has not seen or evaluated Shell's well containment plans and other required documents, it is too early for us to evaluate Interior's oversight of oil and gas development and well containment capabilities in Alaskan waters. However, the existence of different types of risk and the limited response infrastructure pose additional challenges Interior will have to address to conclude that it is providing sufficient oversight.

Since the Deepwater Horizon incident, Interior has strengthened its oversight of the oil and gas industry's ability to respond to a subsea well blowout, and industry has responded by improving well containment capabilities and creating dedicated well containment organizations. Interior is developing and documenting oversight processes, and in some cases has established time frames for completion. For example, while Interior has not fully documented its well containment plan review process, Interior officials told us that they expect to have documentation in place by spring 2012.
Interior has also not established a regular inspection process for well containment equipment listed in well containment plans, but Interior officials told us that they are developing such a process for this equipment and plan to have it in place by June 2012. Similarly, Interior does not have a documented process for monitoring the availability of equipment identified in operators' well containment plans, but Interior requires operators to list multiple and redundant vessels and equipment in their well containment plans, and Interior officials believe this sufficiently mitigates the risk if certain equipment is unavailable. The availability of redundant vessels and equipment found in the Gulf of Mexico does not exist in Alaska, however, and is something that Interior will need to consider as it receives and evaluates Shell's plans to drill in waters off Alaska. Finally, Interior has conducted two unannounced spill drills that have included a subsea well containment scenario, and Interior officials told us it will incorporate these scenarios into future spill drills. However, Interior has not established a time frame for incorporating subsea well containment scenarios into spill drills, and until it does so, there is limited assurance that operators drilling in the Gulf of Mexico or other areas will be prepared to respond to a subsea well blowout.

To help ensure that operators are prepared to respond to a subsea blowout, we recommend that the Secretary of the Interior document a time frame for incorporating well containment response scenarios into unannounced spill drills.

We provided a draft of this report to the Department of the Interior for review and comment. We received written comments from Interior's Acting Assistant Secretary for Land and Minerals Management, which are reproduced in appendix II.
The Acting Assistant Secretary concurred with our recommendation, stating that Interior agrees that well containment response scenarios that test operator responses to subsea blowouts should be a regular element in its annual plan for unannounced spill drills. The Acting Assistant Secretary also provided technical comments, which we have incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Madhav Panwar at (202) 512-6228 or [email protected] or Frank Rusco at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

We reviewed seven well containment plans submitted by operators. Four of the plans relied on Helix Well Containment Group (HWCG) equipment and three on Marine Well Containment Company (MWCC) equipment. The equipment listed below comprises some of the key components that could be used to respond to a well blowout and that were included in the well containment plans we reviewed.

A capping stack is a device that helps cap a well—called well shut-in—to bring a well under control after a blowout. It is designed to sit on top of other equipment at the top of a subsea well, such as the wellhead or a blowout preventer, and forms a high-pressure seal around the well. Capping stacks have a combination of gate or ram valves to block fluid flow.
Capping stacks also have side outlet valves that can allow for the partial flow of oil, reducing the pressure in the wellbore, if needed. For instance, if the well containment screening tool indicates that wellbore integrity will not allow for a full well shut-in, the side outlet valves will be used to direct flow to the surface capture vessels as necessary. In this scenario, a procedure known as "flow and capture" could be used to partially flow the oil to a surface vessel for processing and transport. The technical features of all capping stacks designed and developed for the Gulf of Mexico are similar, and the stacks are made from pre-engineered components commonly used in the oil industry. Capping stacks typically feature multiple rams for redundancy. Some stacks are rated for well pressures up to 15,000 pounds per square inch (psig) and can operate in water depths of up to 10,000 feet. Capping stacks vary somewhat in height but are generally about 30 feet tall and weigh approximately 100 tons. In response to Deepwater Horizon, the global oil industry formed initiatives and advisory groups to design and develop capping stacks, and several capping stacks are now ready for deployment.

Other containment devices, including top hats, caissons, and cofferdams, are used to collect or contain the flow of oil from the wellhead when a capping stack cannot be connected, such as when a secure seal cannot be achieved. A top hat provides a low-pressure seal and allows for a limited collection of oil; it is typically a temporary measure used while the operator is evaluating or preparing alternative options. A caisson creates a soft seal with the seabed––similar to a top hat––by covering the damaged blowout preventer. A cofferdam provides no seal to the seabed or the damaged blowout preventer.

A hydraulic control system is used to operate subsurface equipment, such as to close rams on a capping stack.
Debris removal equipment, such as shears for cutting pipes and remotely operated vehicles, is used to remove debris from around a well, such as pipes that have fallen to the seafloor following a blowout. In the event of a blowout, debris may need to be removed to access the blowout preventer and riser system to install the capping stack.

Dispersant and hydrate inhibitor equipment consists of a seafloor distribution system for injecting hydrate inhibitor chemicals and dispersants directly into the flowing oil to suppress hydrate formation and disperse the oil, as well as the vessels needed to transport and deploy these chemicals.

Risers used for well containment consist of tubing that enables access to an offshore well for emergency intervention. These riser systems allow the oil flowing through the side outlet lines of a capping stack to be routed to the surface for collection and further processing. Risers are used when operators determine that it is not safe to completely shut in the well because of the potential to compromise well integrity. In this instance, risers are attached to a capping stack, after which the well may be killed from the top––known as a top kill––by funneling mud through the risers and down into the wellbore.

Surface vessels include oil capture vessels; support vessels, such as those needed to deploy the capping stack; remotely operated underwater vehicles; and oil storage facilities. According to MWCC and HWCG, during a flow and capture procedure, these vessels are capable of handling up to 60,000 barrels of liquid and up to 120 million standard cubic feet of gas per day.

Well kill procedures seek to kill the well without compromising the well's integrity. If analysis of the well determines that its integrity would remain intact under top kill conditions, then the top kill option is generally used.
However, if the operator determines that a top kill may cause surface broaching, flow and capture is used while the operator evaluates options for drilling a relief well to permanently kill the well. The well is then plugged with cement and, once the wellbore pressure indicates that the well has been killed, the capping stack and blowout preventer are removed to the surface.

In addition to the contacts named above, Bill Carrigg, Assistant Director; Christine Kehr, Assistant Director; David Bennett; Antoinette Capaccio; Nirmal Chaudhary; David Messman; Alison O'Neill; and Kiki Theodoropoulos made key contributions to this report.
On April 20, 2010, an explosion and fire on board the Deepwater Horizon, an offshore drilling rig, resulted in 11 deaths and the largest oil spill in U.S. history in the Gulf of Mexico. After this event, the Department of the Interior (Interior), which oversees oil and gas operations in federal waters, suspended certain offshore drilling operations. After developing new guidance, Interior resumed approving drilling operations in the Gulf of Mexico.

GAO was asked to examine (1) the industry's improved capabilities for containing subsea wells (those on the ocean floor) in the Gulf of Mexico; (2) Interior's oversight of subsea well containment in the Gulf of Mexico; and (3) the potential to use similar subsea well containment capabilities in other federal waters, such as those along the Alaskan coast. GAO reviewed laws, regulations, and guidance; documents from oil and gas operators; and Interior's oversight processes. GAO also interviewed agency officials and industry representatives.

Since the Deepwater Horizon incident, the oil and gas industry has improved its capabilities to respond to a subsea well blowout—the uncontrolled release of oil or gas from a well on the ocean floor—in the Gulf of Mexico. In particular, operators have formed two new not-for-profit organizations that can quickly make available well containment equipment, services, and expertise. Among the equipment that these organizations can provide are capping stacks—devices used to stop the flow of oil or gas from a well. This improved well containment response equipment consists primarily of existing technologies that have been modified to support well containment, according to industry representatives. Following the Deepwater Horizon incident, Interior strengthened its review of operators' plans and resources to contain a subsea well blowout; however, its internal oversight processes have not yet been fully documented.
Interior has issued guidance to operators outlining information that must be provided to Interior to demonstrate that operators can respond to a well blowout. Interior officials said that they expect to have documentation of their process for reviewing this information in place by spring 2012. Also, Interior incorporated tests of an operator's well containment response capabilities into two unannounced spill drills, and Interior officials told us they intend to incorporate such tests into future spill drills. However, Interior has not documented a time frame for incorporating these tests, and until it does so there is limited assurance of an operator's ability to respond to a subsea well blowout. Subsea well containment capabilities developed for the Gulf of Mexico could generally be used elsewhere, including Alaskan waters, according to industry representatives and Interior officials. However, because other areas lack the infrastructure and equipment present in the Gulf of Mexico, well blowout response capabilities are more limited. Two operators have submitted plans to Interior to drill in waters north of Alaska as early as the summer of 2012. They are developing, but have not submitted, final well containment plans to Interior, and these plans will need to be approved by Interior before drilling. Oil and gas exploration and production off the coast of Alaska is likely to encounter environmental and logistical risks that differ from those in the Gulf of Mexico because of the region's cold and icy conditions—factors that would also likely affect the response to a well blowout. To help ensure that operators can respond effectively to a subsea well blowout, GAO recommends that Interior document a time frame for incorporating well containment response scenarios into unannounced spill drills. In commenting on a draft of this report, Interior concurred with GAO's recommendation.
The SWT program is a component of State's Exchange Visitor Program, known as the J-1 Visa Program, which was established to implement the purposes of the Fulbright-Hays Act, including strengthening ties and increasing mutual understanding between the people of the United States and the people of other countries through educational and cultural exchanges. The SWT program—one of the largest U.S. exchange programs—offers young people who might otherwise lack the means to visit the United States a unique opportunity to spend up to 4 months in this country while working to defray program costs. To administer the SWT program, State works in partnership with private sector sponsors, who may contract with overseas and domestic agents to handle various administrative functions. Over time, State has identified concerns about the SWT program, such as abuses of some SWT participants by employers, links between some SWT participants and organized crime, and overshadowing of the program's cultural exchange component by its work component.

Participants. To qualify for the program, exchange visitors under the SWT program must meet certain criteria, including being full-time college or university students. Participants from most countries are required to be preplaced in a job before entering the United States. While in the United States, SWT participants generally work in low-wage service positions such as amusement park ride operator, cashier, lifeguard, resort worker, restaurant worker, or retail sales assistant. Participants come from all over the world and work throughout the United States year-round, based on the timing of their major academic breaks. For example, State records show that in 2014, approximately 79,000 SWT participants from more than 120 countries worked in all 50 states and the District of Columbia. Figure 1 shows SWT participants' numbers, countries of origin, and U.S. job locations from January to mid-November 2014.

Sponsors. SWT sponsors are U.S.
organizations that bring participants to the United States and facilitate their employment and cultural exchange. To be eligible for designation as a sponsor, an organization must demonstrate to State’s satisfaction its ability to comply, and remain in continual compliance, with all program requirements and to maintain certain financial obligations and responsibilities as an SWT sponsor. State’s records show that as of November 2014, the SWT program had 41 active sponsors. Sponsors are the participants’ primary point of contact and are responsible for addressing issues that affect the health, safety, and welfare of participants while they are in the United States. Other aspects of sponsors’ roles and responsibilities include the following:

- Sponsors may recruit prospective SWT participants directly or through overseas agents.
- Sponsors are required to enter selected participants’ data in the Student and Exchange Visitor Information System (SEVIS).
- Sponsors are responsible for ensuring that participants have employment in the United States and are required to vet potential employers and their job offers by contacting each employer directly and verifying the business owner’s or manager’s contact information and location as well as the company’s line of business.
- Before participants arrive in the United States, sponsors are required to provide an orientation to prepare them for life in the United States as well as information about what to do in an emergency.
- After participants arrive in the United States, sponsors are required to monitor participants’ health, safety, and welfare and ensure that participants receive exposure to U.S. culture.

Overseas and domestic agents. Sponsors may contract with overseas and domestic agents to carry out certain SWT program functions. Overseas agents generally assist sponsors with recruiting participants for the SWT program, assessing participants’ qualifications, and identifying U.S. job placements for participants.
Domestic agents assist sponsors with functions such as arranging cultural activities for participants, helping participants find housing, identifying participant job placements, and providing participants with transportation. Employers. SWT employers are businesses in the United States that hire SWT participants for positions that the sponsors confirm are seasonal or temporary and do not displace U.S. workers. SWT employers have typically included hotels, restaurants, retail stores, ski resorts, and theme parks, among others. State Department. State’s Bureau of Educational and Cultural Affairs administers the SWT program through the Office of Private Sector Exchange. In addition, State’s Bureau of Consular Affairs adjudicates visas overseas and conducts domestic monitoring visits to SWT participants. State’s Kentucky Consular Center, established to assist in administering certain classes of visas, also provides a secondary check on employers of SWT participants. For more information about individual State offices’ roles and responsibilities related to the SWT program, see appendix II. As a private sector exchange program, the SWT program is primarily funded through fees paid by SWT sponsors and participants. For example:

- Participants pay sponsors or overseas agents program fees for services such as assisting with job preplacement and acquiring airline tickets and insurance. Participants also pay a visa application fee to State.
- Sponsors pay State a fee when they apply for designation or redesignation as SWT sponsors. In addition, sponsors that hire domestic agents may pay the agents for their services.
- Overseas agents pay sponsors a portion of participants’ program fees.

Figure 3 shows the flow of funds among the primary entities in the SWT program.
Concerns about the SWT program have related primarily to abuses of SWT participants by employers, links between SWT participants and organized crime, and an overshadowing of the program’s cultural exchange component by its work component. Abuses of SWT participants by employers. In 2010, consular sections overseas raised concerns about the exploitation of some SWT participants and fraud relating to employment offers in the United States. In 2011, print and broadcast media reported further exploitation and abuses of some SWT participants. For example, SWT participants working full-time for a certain employer were reported to be earning only $160 to $560 per month while paying $395—twice the market rate—for company housing. State also acknowledged an increase in SWT-related complaints regarding, for example, improper or unsafe job placements, fraudulent job offers, post-arrival job cancellations, inappropriate work hours, and problems regarding housing and transportation. Links between participants and organized crime. Beginning in early 2010, law enforcement agencies identified an emerging relationship between organized criminal activity and some SWT participants, who were at risk of being recruited into organized crime because they were eligible for Social Security numbers. U.S. law enforcement investigations revealed that while some SWT participants may have been misled into criminal activities, other participants willingly and deliberately engaged in activities such as tax fraud, health care fraud, and illicit money transfer schemes. In 2012, State also reported that criminal organizations had involved SWT participants in incidents relating to illegal transfer of cash, the creation of fraudulent businesses, and violations of immigration law. Overshadowing of cultural exchange component.
In 2012, State reported that in recent years, the work component of SWT had often overshadowed the core cultural exchange component that the program must maintain to be consistent with the intent of the Fulbright-Hays Act. State attributed this imbalance to the attitudes of some sponsors, employers, and participants. For example, State noted that many participants viewed the program primarily as an opportunity to work in different jobs and to earn more money than they would at home. To better protect the SWT program from misuse and participants from abuse, State amended federal requirements in 2011 and 2012, imposing tighter restrictions on participants and sponsors. State also capped the size of the program to maintain it at a manageable size while State addresses the identified concerns. Additionally, State expanded its internal requirements and guidance for visa adjudication, with a goal of better ensuring participants’ health, safety, and welfare after they arrive in the United States. State issued interim final rules (IFR) in 2011 and 2012 that amended the federal regulation for the SWT program. 2011 Interim Final Rule. In 2011, State issued an IFR amending the SWT regulation, which went into effect in July 2011. These changes, which expanded on a pilot program that State had announced for the 2011 summer season, were intended to impose tighter controls and restrictions on all SWT participants and sponsors. The 2011 IFR specified the following, among other things:

- All applicants from countries that do not participate in the Visa Waiver Program must have prearranged jobs before entering the United States.
- Sponsors must vet all potential host employers to confirm that they are ongoing and viable business entities; must fully vet all job offers, including verifying the terms and conditions of such employment; and must not place participants in any prohibited position, such as adult entertainment or domestic help.
- Sponsors must fully vet all overseas agents whom they engage to assist in functions such as screening, selecting, and monitoring participants.
- Sponsors must contact participants on a monthly basis to monitor their welfare and physical location.

2012 Interim Final Rule. In May 2012, State published its 2012 IFR, which, according to State, was intended to expand on the 2011 changes to further protect the health, safety, and welfare of SWT participants. Most of the regulatory changes announced in the 2012 IFR took effect in May, and the changes remained in effect as of December 2014. The 2012 IFR explained that the 2012 regulatory changes included, among others:

- increased language requirements for participants, requiring them to have sufficient English not only to perform their jobs, as previously required, but also to protect themselves as they navigate daily life;
- an expanded list of job placement requirements and prohibitions—for example, placements must be seasonal or temporary, must provide opportunities for participants to interact regularly with U.S. citizens, and must not displace U.S. citizen workers; and
- requirements that sponsors submit annual participant price lists to provide itemized breakdowns of costs that participants must pay to both overseas agents and sponsors; assist participants in arranging housing and transportation when needed; and vet domestic agents.

In November 2011, given continuing reported problems in the SWT program, State issued a notice in the Federal Register that capped the maximum number of participants and imposed a moratorium on designation of new sponsors. State noted that it intended to strengthen and expand its oversight, consult more closely with key stakeholders, and develop new program regulations, among other things, while the restrictions were in place.
According to the 2012 IFR, the restriction on program size would remain until State was confident that the program regulations were sufficient to remedy identified concerns. State officials told us that as of October 2014, State had no current plans for lifting the program cap. As a result of State’s 2011 restrictions, the number of program participants dropped by about 20 percent from 2011 to 2013. State capped at 2011 levels the participants allotted to each sponsor, effectively restricting the total number of participants to 109,000. State’s moratorium on designating new sponsors resulted in further reductions in the actual number of participants. Figure 4 shows the numbers of SWT program participants and sponsors in 2011 through 2014. In addition to increasing the federal requirements for the SWT program, in 2012 State updated its Foreign Affairs Manual requirements for visa adjudication to help safeguard SWT participants’ health, safety, and welfare. State’s 2012 revision of the Foreign Affairs Manual sections providing guidance for the SWT program indicates the following: Consular officers can deny visas to SWT applicants who do not demonstrate sufficient English proficiency to enable them to, for example, interact effectively with law enforcement authorities and medical personnel, read rental agreements, and carry on non-work-related conversations. At all five of the posts we visited, we observed State officials conducting SWT visa interviews in English and verifying that participants had sufficient English proficiency to participate in the program. Consular officers told us that they try to ensure that applicants can be understood in the United States and can communicate with U.S. law enforcement officers about their own health, safety, and welfare. Consular officers must confirm that applicants understand a pamphlet specifying their legal rights with regard to federal immigration, labor, and employment laws in the United States.
At each post that we visited, we observed consular officers asking applicants questions to confirm that they understood these rights. State has several mechanisms for monitoring and enforcing compliance with SWT regulations intended to prevent abuse of the program and of participants. State reviews sponsors’ compliance with SWT regulations and may sanction sponsors found to be in violation. In addition, sponsors vet potential employers and jobs, and State conducts a secondary check of these employers, gathering information that it uses to help ensure participants’ health, safety, and welfare; sponsors also vet their overseas and domestic agents. State oversees participants through field site reviews and through complaints and incident reports. However, State has not ensured that the annual lists of participant fees that it requires sponsors to provide are complete, consistent, and publicly available. As a result, State has limited ability to protect participants from excessive and unexpected costs that could negate their otherwise positive experiences. State monitors SWT sponsors’ compliance with program regulations through biennial redesignation reviews and through on-site reviews and compliance reviews. If it determines that a sponsor has violated program regulations, State may impose sanctions. Redesignation reviews. Every 2 years, State reviews each sponsor’s application for redesignation, checking the sponsor’s record of compliance with certain program regulations as well as its ability to meet the financial obligations and responsibilities involved in SWT sponsorship. For example, during the redesignation reviews, State checks sponsors’ annual reports; their financial records, including external audits; their SEVIS records; and any recorded incidents or complaints involving the sponsor. If a redesignation review identifies concerns about a sponsor’s compliance with program regulations, State may recommend sanctions.
According to a State official, State conducted 57 redesignation reviews from 2011 through 2014. On-site reviews. State’s on-site reviews of sponsors involve a comprehensive assessment of the sponsors’ compliance with program regulations. State conducts each review at the sponsor’s place of business, examining the sponsor’s records and holding discussions with the sponsor. To document the reviews, State officials complete a questionnaire covering topics such as verification of participants’ jobs and host employers, participant monitoring, complaints analysis, maintenance of SEVIS, use of partners, internal controls and training, and supervision of staff. State officials told us that State may sanction sponsors when on-site reviews identify noncompliance with SWT regulations and that State also uses on-site review results to develop recommendations to strengthen or make regulatory changes to the program. From 2011 through 2014, State conducted on-site reviews of 18 sponsors that were active during this period. Compliance reviews. State conducts compliance reviews when it believes that sponsors may have grossly violated SWT regulations. In contrast to on-site reviews, compliance reviews are narrowly focused on specific concerns, such as the sponsor’s administration of job placements or ongoing complaints against a sponsor. State completed 2 SWT compliance reviews, one in 2011 and one in 2013, and as of March 2014 was conducting 6 additional SWT compliance reviews, according to State officials. State imposes lesser or greater sanctions on sponsors depending on the nature of misconduct. According to a State document, State may impose any or all of four lesser sanctions if it believes a sponsor can be rehabilitated: (1) a letter of reprimand, (2) probation for 1 or 2 years, (3) a corrective action plan, and (4) a reduction of up to 15 percent in the number of participants allotted to the sponsor.
If sponsors are involved in more egregious violations of the regulations, State can impose three greater sanctions: (1) suspension for a maximum of 120 days, (2) denial of redesignation, or (3) revocation of designation. As of November 2014, 5 of the on-site reviews and both compliance reviews that State conducted from 2011 through 2014 had resulted in sanctions of the sponsors reviewed. Table 1 provides information about State’s sanctions of seven of these sponsors. To ensure that SWT participants work only for suitable employers in legitimate jobs, sponsors are to vet potential employers and jobs before the participants begin to work in preplaced positions. State also verifies most employers through the Kentucky Consular Center (KCC), although this secondary verification is not required by SWT regulations. State documents show that sponsors vet employers by, among other things, contacting each employer, obtaining a copy of the employer’s business license and employer identification number, verifying that the employer has a worker’s compensation insurance policy or the state’s equivalent, and confirming that the employer has not experienced layoffs in the past 120 days. Sponsors place participants in jobs that are either seasonal (i.e., tied to the time of year by an event or pattern and requiring labor levels above and beyond existing worker levels) or temporary (i.e., to be performed as a one-time occurrence or to meet a peak load need or an intermittent need). In addition, sponsors maintain files documenting employer verification. All of the sponsors we spoke with said that they had conducted the required employer and job vetting, and all 15 of the on-site reviews that we examined included checks to ensure that sponsors were conducting the required vetting. State uses KCC to conduct a secondary verification of employers to ensure employers’ legitimacy and participants’ safety. 
KCC verifies the employers of participants from non-visa-waiver countries as well as the employers of participants from visa-waiver countries who are in preplaced jobs. Complementary to sponsors’ vetting, KCC’s verification focuses on potential concerns such as law enforcement issues; criminal records; and financial problems that could jeopardize participants’ health, safety, or welfare. State officials said that in addition to contacting SWT employers directly, KCC staff use a variety of databases as well as other public sources, such as the Internet and social media, to vet the employers. When KCC’s verification identifies a concern, State may recommend that a sponsor conduct additional monitoring of the participant or move the participant to a different employer or housing location. In 2014, as of September, State had recommended that sponsors move at least 47 participants, in cases involving 26 employers. SWT sponsors vet potential overseas agents that assist them with core programmatic functions such as participant screening, selection, and orientation, according to State documents. Our review of State’s on-site review records found that the sponsors examined, among other things, the agents’ proof of business licensing, disclosure of any previous bankruptcy or pending legal action, and criminal background check reports for owners and officers of the organization. Sponsors also report to State all active overseas agents’ names, addresses, and contact information, which State compiles in a list that it shares with bureaus and with posts to assist them in scheduling visa adjudications. State may remove an overseas agent from the list for reasons including overall problems or derogatory information, such as evidence of fraud in visa applications that the agent helped to prepare. When it removes an overseas agent from the list, State notifies all other sponsors so that they will not work with the agent.
As of November 2014, State had removed two overseas agents from the list since 2010, according to State officials. In addition, sponsors ensure that domestic agents involved in providing orientation or cultural opportunities for participants are qualified to perform these activities and have sufficient liability insurance, if appropriate. State oversees participants’ welfare through field site reviews, which allow it to learn first-hand about participants’ experiences with the SWT program. State also oversees participants’ welfare through complaints that it receives from participants, the general public, and State officials and through incident reports from sponsors. State has conducted field site reviews since 2012, in part to strengthen its oversight of SWT participants’ health, safety, and welfare. During field site reviews, teams of State officials visit participant job sites and interview participants and employers about their experience with the SWT program. According to State officials, they typically select sites that are in areas with more than 15,000 SWT participants and that, for example, have historically high rates of complaints or have not been selected previously. The State officials explained that at the end of each season, they communicate the review findings to the participants’ sponsors and may meet with sponsors individually to discuss any problems identified. In the summer of 2014, field site review teams conducted interviews in 33 states and the District of Columbia. According to a November 2014 report summarizing field site review findings for the 2014 summer season, State interviewed 2,505, or approximately 3 percent, of the more than 79,000 participants that year. Of those participants, about 90 percent indicated that they were satisfied with their program experience and were pleased with their sponsors.
According to the report, housing was a primary concern among participants—specifically, lack of sponsor support in finding housing and a shortage of suitable and affordable housing. The report also states that many participants expressed concerns about program expenses and high program fees. During field site reviews, State also verifies the accuracy of SEVIS data, which State uses to locate participants at their places of employment and their homes, according to State officials. State guidance requires sponsors to keep these data current, in accordance with federal law. If State finds gaps or notices errors in the SEVIS data or identifies trends of concern, State may provide feedback and may issue a letter of concern to the sponsor. Since 2009, State has provided oversight of SWT participants’ welfare through complaints and incident reports, which State receives through e-mail, the U.S. mail, telephone, and a toll-free hotline. State may also receive complaints from participants or employers during its field site reviews. State established standard operating procedures for addressing complaints and incident reports related to the SWT program and, in March 2014, implemented a new database for tracking them. Complaints. State’s standard operating procedures define a complaint as any expression of concern about a participant or a sponsor’s actions from any source other than the sponsor. According to State’s procedures, complaints have been received from, among others, former and current participants, employers, overseas and domestic agents, Congress, members of the media, and participant family members. The procedures note that in general, State is to refer complaints to sponsors and resolve the complaints in coordination with the complainant and the sponsor.
The procedures list 40 complaint categories for the SWT program, ranging from “accident” to “insufficient funds” to “workplace discrimination.” State’s procedures lay out criteria and steps for referring serious complaints to senior State officials. State may provide information about sponsorship best practices to help the sponsor improve its program. If a complaint about a sponsor identifies serious regulatory concerns and potential violations, State may draft a letter of concern as part of its process for closing out the complaint; State may also sanction the sponsor, depending on the degree of misconduct. Figure 5 shows State’s process for addressing complaints. Incident reports. Sponsors are required to promptly inform State of any serious problem or controversy that could be expected to bring State or the SWT program into notoriety or disrepute. Sponsors submit this information in incident reports. According to State, most incident reports involve matters such as deaths, accidents, crimes or arrests, medical issues, sexual abuse, or missing persons. State’s process for handling incident reports is similar to its process for handling complaints, and State uses the same procedures to escalate incidents as it uses to escalate complaints. State logs and maintains information about each incident report, using the same database that it uses for registering complaints. According to State’s procedures, State maintains contact with the sponsor regarding the incident until it is resolved. For example, if the incident report involved the death of an SWT participant, State would follow up with the sponsor until the participant’s body was repatriated, or if a participant was hospitalized, until the participant was released from the hospital. If a sponsor fails to submit an incident report about a serious problem or controversy, State notes the failure as a potential regulatory violation and may sanction the sponsor. 
State provided data showing that in 2013, State received relatively few complaints and incidents—592 and 143, respectively—given the approximately 86,500 participants in the SWT program that year. Examples of the 2013 complaints range from participant problems with pay or living conditions to lack of response from sponsors. Examples of incidents in 2013 include participants’ arrests for theft, involvement in car and bicycle accidents, and deaths. According to State, approximately 38 percent of 2013 complaints and incident reports resulted from field site reviews, 22 percent were called in by sponsors, 20 percent came into State’s hotline, 14 percent were sent to State’s e-mail in-box, and 7 percent came from unspecified sources. State’s procedures do not specify response time frames but require that all complaints and incident reports be handled quickly and efficiently. While State does not track the time between receipt of a complaint or incident report and State’s initial response, State officials said that a response is generally initiated within 1 or 2 days. According to State data, in 2013, an average of 46 days elapsed between receipt and closeout of a complaint and an average of 73 days elapsed between receipt and closeout of an incident report. State officials told us that the time between receipt and closeout of a complaint or incident report may include, for example, delays in receiving police and hospital reports. Although State requires SWT sponsors to annually submit participant price lists itemizing fees that participants pay to sponsors and overseas agents, State does not have a mechanism for ensuring that the sponsors provide complete and consistent data. Moreover, despite a 2013 recommendation by State’s Office of Inspector General (OIG) that sponsors be required to publicly disclose all fees that participants pay them and their overseas agents, State has not established a mechanism to ensure that this information is made publicly available. 
According to the 2012 IFR, recent criticism of the program had included alleged exorbitant fees charged to SWT participants, and State requested the information about participant fees to protect participants, sponsors, and the integrity of the program. Because State has not established mechanisms to ensure, respectively, that sponsors provide complete and consistent lists of participant fees and that this information is made publicly available, State’s ability to protect participants from excessive and unexpected program costs is limited. The fees that participants pay vary by sponsor, agent, and services provided. According to the State OIG’s 2013 report, sponsors and overseas agents can charge participants whatever fees they deem appropriate and State sets no limits on the fees that can be charged. State officials estimated that participant fees in each country, excluding airfare, range from $1,500 to $5,000 and often include a program fee, job placement fees, and health insurance, in addition to the visa application fee and SEVIS fee, which participants generally pay to State and the Department of Homeland Security, respectively. However, according to the State OIG’s 2013 report, participants do not always have a clear sense of what the fees cover and frequently pay additional, unanticipated expenses such as interview fees and registration fees. Although State solicited information about sponsors’ and overseas agents’ fees in 2013, the data it received were incomplete and inconsistent, and State did not solicit or obtain this information in 2014. Standards for internal control in the federal government call for managers to ensure that there are adequate means of obtaining relevant, reliable, and timely information from external stakeholders that may have an impact on the agency’s ability to achieve its goals. 
In January 2013, State sent sponsors an approved template itemizing the fees that they were required to report as well as instructions for completing the template. However, State did not receive responses in 2013 from all sponsors, according to a senior State official. Moreover, our analysis showed that the lists of fees that State received were not complete or consistent. For example, some sponsors listed a flat program fee, others listed a range of fees with no explanation of what the ranges covered, and others provided no information about their fees. Our review of the instructions that State sent sponsors for filling out the fee template showed that the instructions did not specify the information requirements; according to two of the five sponsors we interviewed, the instructions were difficult for them and their overseas agents to follow. In 2014, State did not solicit the required data, and sponsors did not provide these data. In reports published in February 2012 and September 2013, State’s OIG made recommendations related to the transparency of SWT fees. In its 2012 report, the OIG recommended that State revise its regulations to establish maximum fees that sponsors and their overseas agents can charge. In response, State reported in 2012 that it had begun meeting with sponsors to discuss ways to establish an open and transparent process for capturing fees that they and their overseas agents may charge participants. In its 2013 report, the OIG changed its 2012 recommendation to say that State should revise regulations for SWT and other private sector exchange programs to require that sponsors publicly disclose all fees that they and their overseas agents charge program participants.
State officials told us that, in response to the OIG’s 2013 recommendation, State is considering regulatory changes that would require SWT sponsors to post online the fees that they and their overseas agents charge, to ensure that participants are aware of the costs of participation in the program. State officials noted that the changes being considered, as well as State’s effort to collect fee data, would allow prospective students to review costs among various sponsors, programs, and countries and would also allow State to compare fees across all sponsors and countries and identify anomalies or unusually high fees. However, State officials said in November 2014 that the regulatory changes were still under consideration, with no projected time frame for completion. Moreover, according to State officials, State has not established a mechanism for ensuring that information about sponsors’ and overseas agents’ fees is made publicly available. Standards for internal control in the federal government state that ongoing monitoring should occur in the course of operations. Without a mechanism to ensure that complete and consistent information about participant fees is made publicly available, State has limited ability to protect participants from being charged excessive and unexpected fees that might negate their otherwise positive experiences of the program. To strengthen the SWT program’s cultural exchange aspect, State has taken steps to emphasize the program’s cultural component relative to its work component, including adding cultural requirements to the program regulation. However, State officials indicated that because the requirement that sponsors provide participants opportunities for cultural activities outside the workplace does not include detailed criteria for sufficient and appropriate opportunities, State has limited ability to assess and enforce compliance. As a result, State lacks assurance that SWT participants’ experiences of U.S. 
culture further State’s public diplomacy goals. Since 2013, State has taken initial steps to leverage the SWT program’s long-term public diplomacy value by including a small number of SWT alumni in its broader activities involving exchange program alumni. Specifically, the cultural requirements that State added to the program regulation direct sponsors to ensure that all participants have opportunities to work alongside U.S. citizens and interact regularly with U.S. citizens to experience U.S. culture during the workday, and to ensure that all participants have opportunities to engage in cultural activities or events outside of work, by planning, initiating, and carrying out events or other activities that provide participants exposure to U.S. culture.

State reported efforts to elevate the visibility of the cultural exchange aspect of the Summer Work Travel (SWT) program. For example, State officials reported that in the summer of 2014, they travelled around the United States, meeting exchange participants and highlighting their “true American experience,” including cooking demonstrations, holiday celebrations, and community volunteering. In addition, State officials said that State has worked with community support groups around the country that have assisted with various aspects of the SWT program, including providing opportunities for participants to engage in cultural activities.

Lifeguard Olympics for Summer Work Travel Program Participants

Summer Work Travel program (SWT) alumni whom we met with overseas discussed cultural experiences while in the United States, such as going to amusement parks, attending theater performances, and visiting national parks. Additionally, the sponsors we interviewed indicated that they had provided opportunities for participants to engage in cultural activities, such as the Lifeguard Olympics. This event involved swimming relays, diving competitions, and other aquatic challenges to win money for local charities. 
About 150 lifeguards, including Americans and SWT participants from three different sponsors, gathered at the event in Virginia. State provided guidance for meeting the cultural component requirement in the 2012 IFR and in a guide that State sent to sponsors in 2013. For example, according to the 2012 IFR, if a participant works at an amusement park, then amusement parks are not an acceptable cultural offering. The 2012 IFR also offers examples of ways that sponsors can meet the requirement, such as activities to acquaint participants with recognized features of U.S. culture and history—for instance, national parks, historic sites, major cities, or scenic areas—or to engage participants with the communities in which they work and live. In addition, the 2013 guide offers resources for sponsors to develop cultural programming that aligns with State’s public diplomacy goal. State monitors compliance with the SWT cultural component requirement during its redesignation, on-site, compliance, and field site reviews. During redesignation reviews, State reviews sponsors’ annual reports to ensure that sponsors have provided opportunities for participants to engage in cultural exchange activities. During on-site reviews and compliance reviews, State may request copies of the sponsor’s documentation of its required monthly monitoring of participants, including records of participants’ involvement in cultural activities. During field site reviews, State asks participants about their cultural experiences. State’s summer 2014 field site review report notes that roughly 60 percent of the participants interviewed indicated that they had participated in cultural activities planned by their sponsors or employers, whereas in 2013 State reported that nearly 60 percent of participants indicated they would welcome more involvement from sponsors in arranging cultural activities. 
Since 2011, State has sanctioned two sponsors, in part for failing to provide any cultural programming, according to a State official and documentation that we examined. However, according to State officials, because the cultural component requirement does not include detailed criteria, State is unable to sanction sponsors for providing insufficient or inappropriate opportunities for cultural activities outside the workplace. State officials said that they can issue a sanction if a sponsor provides no cultural component. In November 2014, State officials indicated that they were considering regulatory changes that would establish grounds for sanctioning sponsors that do not provide sufficient or appropriate cultural programming; however, the officials could not tell us when a decision about these regulatory changes was expected. Without detailed criteria that would allow State to assess the sufficiency or appropriateness of the cultural component, State lacks assurance that SWT participants’ experiences of U.S. culture further State’s public diplomacy goals. State has taken initial steps to leverage the SWT program’s long-term public diplomacy value by reaching out to a limited number of SWT alumni. State engages systematically with alumni of various other cultural exchange programs with the goal of strengthening U.S. relationships with current and emerging leaders, according to State officials. State officials noted that SWT alumni are active in their communities and promote ideals and values, such as entrepreneurship and volunteerism, that they learned about in the United States. In contrast to its nascent efforts to engage with alumni of the private sector-funded SWT program, State’s engagement with alumni of most U.S.-government-funded cultural exchange programs—for example, the Fulbright, International Visitors, and Academic Exchange programs— began in 2001, according to State officials. 
Recognizing the potential value of maintaining connections with exchange visitor alumni, in 2005 State began developing an archive of all alumni of U.S.-government-funded exchange programs; previously, State had not maintained connections with exchange program alumni because it lacked their contact information. State officials explained that State also developed a network that allows exchange program alumni to access networking tools, grants, career development aids, and research tools. Alumni of U.S.-government-funded exchange programs receive an automatically generated e-mail invitation to join this network. State has recently taken several steps toward including SWT in its alumni network. In January 2013, State hired a special assistant to coordinate outreach to SWT and other private sector–funded exchange program alumni. In 2013, State also began, on a pilot basis, to include a limited number of SWT alumni in State’s archive of exchange visitor alumni. State selected these SWT alumni for inclusion in the archive because of their involvement with sponsor-led leadership conferences or with local grassroots alumni groups at posts. In November 2014, as part of a State pilot, sponsors were asked to provide contact information for 5 percent of their 2014 SWT alumni for automatic registration in State’s exchange visitor alumni archive and to give SWT alumni access to material on the alumni affairs website. State officials noted that State will review the initial effort before expanding the pilot. In addition, State has encouraged SWT alumni to engage with one another and encouraged several posts to engage with SWT alumni. In Macedonia, State connected a grassroots-organized SWT alumni group—the only formal SWT alumni network as of September 2014—with resources at the embassy and through its alumni affairs network. State officials reported that State was also beginning to support the embassy in Serbia in launching similar alumni groups. 
State noted that some embassies currently undertake outreach to SWT alumni on an ad hoc basis. For example, in 2013, the U.S. Ambassador in Kazakhstan held a picnic for SWT alumni at the embassy, partly to promote the EducationUSA program, and in 2013 and 2014 the embassies in Bulgaria, Poland, and Hungary hosted SWT photo contests for alumni. Among the five posts we visited, the post in Sofia, Bulgaria, had engaged in an SWT alumni activity and reported plans to assist in establishing an SWT alumni network. Some SWT sponsors in the United States also maintain alumni networks, although State does not require them to do so. Of the five sponsors we met with, three sponsors reported that they maintain SWT alumni networks. By allowing large numbers of young, educated people—approximately 79,000 in 2014—to experience life in the United States each year and return home to share their experiences, the SWT program offers the potential to strengthen U.S. relationships abroad and further U.S. public diplomacy. However, if even one participant has a harmful or abusive experience, the potential also exists to bring notoriety and disrepute to the program, State, and the country. Moreover, unless participants receive sufficient and appropriate exposure to U.S. culture, they will not receive the full intended benefits of the program. In the past several years, State has strengthened program requirements and expanded its oversight with the intention of better ensuring the health, safety, and welfare of SWT participants. For example, State has required sponsors to ensure that all employers are legitimate and that all wages paid to participants meet certain criteria, and it has sanctioned sponsors for violating regulations. In addition, responding to allegations of exorbitant fees charged to program participants, State required sponsors to provide annual lists of fees that participants must pay sponsors and overseas agents. 
Moreover, noting that participants did not always have a clear sense of what sponsors’ and overseas agents’ fees cover, State’s OIG recommended in 2013 that sponsors be required to publicly disclose all fees that they and their overseas agents charge participants. However, State has not established mechanisms for ensuring that the price lists it receives are consistent and complete and that this information is made publicly available. As a result, State has limited ability to protect participants from excessive or unexpected fees that might negate their otherwise positive experiences of the program. State has also taken steps to emphasize the SWT program’s cultural component relative to its work component and to strengthen the SWT program’s value for public diplomacy. However, a lack of detailed criteria for the cultural opportunities that sponsors are required to provide limits State’s ability to ensure that participants have sufficient and appropriate opportunities to experience American culture outside the workplace. As a result, State lacks assurance that SWT participants engage in cultural exchanges that will benefit the participants and align with its public diplomacy goals. To enhance State’s efforts to protect SWT participants from abuse and the SWT program from misuse, we recommend that the Secretary of State direct the Bureau of Educational and Cultural Affairs to take the following three actions: establish a mechanism to ensure that sponsors provide complete and consistent lists of fees that participants must pay; establish a mechanism to ensure that information about these participant fees is made publicly available; and establish detailed criteria that will allow State to assess the sufficiency and appropriateness of opportunities for cultural activities outside the workplace that sponsors provide to SWT participants. State provided written comments about a draft of this report, which are reproduced in appendix III. 
In addition, State provided technical comments, which we incorporated as appropriate. In response to a suggestion in State’s written comments, we also adjusted the wording of our first recommendation to more clearly convey that State should identify a mechanism to ensure that sponsors’ price lists reflect all fees that participants must pay. In its written comments, State agreed with our recommendations and indicated that it is considering actions that address them. For example, responding to our first recommendation, State wrote that it is considering developing a template to facilitate program sponsors’ public release of fee and cost information in a consistent format. According to State, this will enable it to check the completeness and consistency of the price lists that sponsors are required to submit, by comparing the lists with the information that sponsors disclose to program participants. Regarding our second and third recommendations, State wrote that it is considering, respectively, additional and more-specific fee and cost transparency requirements and cross-cultural requirements for SWT sponsors. Moreover, State pointed to recent updates to its regulation governing all private sector exchange visitor programs, including the SWT program, that went into effect on January 5, 2015, and that may enhance State’s ability to provide oversight for the SWT program. However, we believe that further actions are needed to address our recommendations related to the fee and cross-cultural components of the SWT program. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and the Secretary of State. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-8980 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who contributed to this report are listed in appendix IV. This report examines (1) changes to program requirements that the Department of State (State) has made since 2010 to better protect the Summer Work Travel (SWT) program and participants, (2) State’s oversight of SWT sponsors’ compliance with program regulations and of participants’ welfare, and (3) efforts State has made to strengthen the program’s cultural exchange aspect and further its broader public diplomacy goals. To address these objectives, we reviewed and analyzed rules and regulations and State documents related to the SWT program. We interviewed officials from State’s Bureau of Educational and Cultural Affairs (ECA), which is responsible for administering the SWT program, and Bureau of Consular Affairs; the Department of Labor; and the Department of Homeland Security. We also interviewed a nongeneralizable sample of five U.S. sponsors in various locations—American Work Adventures, the Council on International Educational Exchange, Cultural Homestay International, Geovisions, and Intrax—which we selected on the basis of the number of participants they sponsored. We also interviewed representatives of an association representing many SWT sponsors—the Alliance for International Educational and Cultural Exchange—and one nonprofit focused in part on labor rights, the Southern Poverty Law Center. We conducted fieldwork at five posts—Dublin, Ireland; Istanbul, Turkey; Moscow, Russia; Sofia, Bulgaria; and St. Petersburg, Russia—where we met with State officials adjudicating visas for the SWT program. Our findings from these site visits are not generalizable. 
We selected these locations based on a variety of factors, including the number of SWT participants from each country, the post’s involvement in the 2011 pilot project, and the post’s involvement in the Visa Waiver Program. At posts, we observed the SWT visa adjudication process for the 2014 SWT summer season. We also interviewed a total of 19 overseas agents in Dublin, Istanbul, Sofia, and St. Petersburg who worked with the sponsors we interviewed, including overseas agents in Moscow whom State also interviewed. In addition, we interviewed 12 groups comprising a total of 70 previous participants, whom we selected from participants that the overseas agents recommended. We focused our review on the period between 2010 and 2014, because State took steps during this period to strengthen the SWT program; implement regulatory changes addressing program concerns; strengthen its monitoring and oversight in response to our recommendation in 2005; and strengthen the program’s cultural component. To understand the steps that State has taken since 2010 to strengthen SWT requirements, we interviewed State officials about the pilot program and ensuing changes to SWT regulations, reviewed State’s 2011 Pilot Program guidelines; 2011 and 2012 interim final rules; and relevant changes in the Federal Register, State’s Foreign Affairs Manual, State cables, and State’s Guidance Directives. We also analyzed State data about the size of the program from 2011 through 2014. To examine State’s oversight of SWT sponsors and participants, we interviewed program officials from various ECA offices responsible for monitoring and oversight of the SWT program, including the Offices of Private Sector Exchange Administration, Compliance, and Designation. We also interviewed officials of the Bureau of Consular Affairs’ Offices of Passport Services and Visa Services. 
We reviewed State’s standard operating procedures published between 2011 and 2014 related to implementing, monitoring, and overseeing the program. To understand how State evaluates sponsors, we analyzed documentation for 15 on-site reviews, 2 compliance reviews, and 4 letters imposing sanctions on SWT sponsors that State has completed since 2011. We also reviewed State’s process for conducting secondary employee verification through the Kentucky Consular Center to understand its changing role in the visa adjudication process and State’s process for reprogramming participants. We analyzed State’s documentation of its monitoring of sponsors through the designation process. To evaluate the quality of the fee data that State collects from sponsors and overseas agents, we obtained and reviewed data collected in 2013. We interviewed ECA officials regarding the 2013 data as well as ECA’s reason for not collecting these data in 2014. We determined that the 2013 fee data were not sufficiently reliable for reporting on the fees that overseas agents and sponsors charge participants, as discussed earlier in the report. We reviewed State’s analysis of its field site reviews for the SWT 2013 and 2014 summer and 2013 winter programs, along with the questionnaires that State officials used on the monitoring visits, to determine how State ensures the health, safety, and welfare of SWT participants and to ascertain what participants thought about the program. We did not review the methodology that State used to analyze the interview results, and we present some of State’s reported results for context only. 
We also observed State officials interviewing a total of 21 SWT participants and four SWT employers during field site visits conducted by ECA’s Office of Private Sector Exchange Administration in San Francisco, California; New York, New York; and Washington, D.C., as well as field site visits conducted by the Bureau of Consular Affairs’ Office of Passport Services in Philadelphia, Pennsylvania. We selected these sites on the basis of the State office conducting the field site visit and the number of SWT participants in each location. Finally, we reviewed State’s process for handling complaints and incident reports and analyzed complaints and incident reports that State received from 2010 to September 2014, to understand the types of concerns reported and State’s manner of responding to them. We determined that the complaints and incident data from 2013 were sufficiently reliable for our purposes by interviewing State officials responsible for compiling and maintaining the data. We asked these officials about procedures for data entry, edit checks and controls, safeguards against inconsistent entries, and procedures for following up on identified errors, among other things. To identify State’s efforts to strengthen the cultural component of the SWT program and further its broader public diplomacy goals, we reviewed the regulatory requirement that sponsors ensure that all participants have opportunities to engage in cultural activities or events outside of work by planning, initiating, and carrying out events or other activities that provide participants exposure to U.S. culture. We also reviewed State’s efforts to collect information on the cultural component of the program through its field site review reports and by observing field site reviews. To identify the extent to which the SWT program is furthering U.S. public diplomacy goals and leveraging alumni resources within ECA, we reviewed ECA’s Bureau Strategy Document for fiscal years 2015 to 2017. 
We also interviewed ECA officials responsible for implementing the SWT program, officials in the Office of Alumni Affairs, and the Deputy Assistant Secretary and reviewed State’s efforts to monitor the cultural component of the program. We conducted our review from November 2013 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following Department of State (State) entities have responsibilities for administering the Summer Work Travel (SWT) program. State’s Bureau of Educational and Cultural Affairs (ECA) administers the SWT program through the Office of Private Sector Exchange, which expanded from two to four offices and nearly doubled its staff size from 2012 through 2013. Within the Office of Private Sector Exchange are four offices with responsibilities for the Summer Work Travel program: the Offices of Designation, Exchange Coordination and Compliance, Private Sector Exchange Administration, and Policy and Program Support. These offices also oversee the other exchange visitor programs that ECA administers. The Office of Designation is responsible for designating new sponsors to the SWT program and redesignating current sponsors on a biannual basis. Designated sponsors are given access to the Student and Exchange Visitor Information System (SEVIS) in order to maintain accurate information on participants throughout their stay in the United States. The Office of Designation also conducts outreach to sponsors and helps them stay compliant with regulations through timely advice and ongoing communication. 
The Office of Exchange Coordination and Compliance investigates concerns about sponsors, including alleged violations of the SWT regulations. It has the authority to sanction sponsors in a variety of ways and liaises with State’s Bureau of Diplomatic Security and other law enforcement agencies when appropriate. It also conducts on-site reviews of sponsors in all J-1 Visa categories to ensure that the sponsors are complying with regulations. The Office of Private Sector Exchange Administration, established in 2013, manages issues, crises, and complaints within the SWT program. It maintains a hotline and e-mail inbox that participants and the public can use to make contact if issues arise and, according to State officials, sustains contact with complainants until issues are resolved. Similarly, the office works with sponsors to resolve reported incidents. It also conducts field site reviews, interviewing SWT participants at their workplaces to assess SWT participants’ overall program experiences and identify any recurring problems. The Office of Policy and Program Support, established in 2013, develops SWT regulations; establishes the Office of Private Sector Exchange’s strategic focus to ensure that its 14 private sector programs are linked to its mission; ensures budgetary responsibility; and guides the private sector’s alumni expansion effort. According to State, the office is currently considering regulatory changes to strengthen provisions aimed at enhancing the safety, health, and welfare of exchange visitors and strengthening overall exchange visitor program experiences. Consular officers overseas in State’s Bureau of Consular Affairs adjudicate visa applications after determining whether applicants are eligible for the SWT program based on factors such as English language proficiency, intent to return to their home countries, and sufficient funds to cover expenses. 
In addition, the bureau’s domestic passport offices, located throughout the United States, assist ECA in conducting field site reviews. The Kentucky Consular Center (KCC) uses various databases to verify the legitimacy of SWT employers for participants from countries that are not part of the Visa Waiver Program. Each day, KCC produces a report for ECA that indicates whether it identified any employers that may pose a concern for the health, safety, and welfare of participants. In addition to the contact named above, Hynek Kalkus (Assistant Director), Julie Hirshen (Analyst-in-Charge), Sada Aksartova, Carly Gerbig, Etana Finkler, Reid Lowe, Alana Miller, and Anthony Moran made key contributions to this report. Other assistance was provided by Martin de Alteriis and Grace Lui.
Created under the Mutual Educational and Cultural Exchange Act of 1961, the SWT program is intended to further U.S. public diplomacy by giving foreign undergraduate students short-term opportunities to experience the people and way of life in the United States. In 2005, GAO found that State's oversight was insufficient to prevent abuse of the SWT program or its participants. Since 2010, some misuses of the program by participants and criminal organizations and abuses of participants—for example, low wages and substandard living conditions—have been reported. Also, State has noted that the program's work component has often overshadowed its cultural component. GAO was asked to report on State's oversight and implementation of the SWT program. This report examines, among other things, steps that State has taken since 2010 to strengthen program requirements as well as State's oversight of sponsors and participants. GAO reviewed program regulations and other SWT documents. GAO also interviewed U.S. officials and others involved in the program in the United States and in Bulgaria, Ireland, Turkey, and Russia, countries that GAO selected on the basis of factors such as the number of SWT participants from each country. Each year, college and university students from all over the world participate in the Department of State's (State) Summer Work Travel (SWT) program. State records show that in 2014, about 79,000 participants from more than 120 countries worked up to 4 months in jobs such as lifeguard, cashier, and resort worker throughout the United States (see map). Participants are meant to experience U.S. culture by interacting with Americans during work and through cultural activities in their free time. State administers the program in partnership with U.S. private sector sponsors that serve as participants' primary contacts. Program funding comes primarily from fees paid by participants and sponsors. 
State has taken several steps to strengthen SWT requirements since 2010. For example, in 2011, State began requiring sponsors to verify employers and job offers and prohibited jobs such as adult entertainment and domestic help. State also capped the number of participants at 109,000 until it could determine that it had addressed identified concerns; as of October 2014, State had no plans for lifting the cap. State made further changes in 2012, such as requiring—in response to allegations of excessive participant costs—that sponsors annually submit lists of fees that SWT participants pay them and their overseas agents. State also required sponsors to provide participants cultural opportunities outside the workplace. State oversees sponsors through both general and targeted reviews of their compliance with program requirements. State oversees participants' welfare by periodically interviewing a small number of participants and investigating complaints and reports from participants and others. However, State does not have mechanisms to ensure that sponsors submit complete and consistent lists of fees that participants pay them and their overseas agents and that this information is made publicly available. State thus has limited ability to protect participants from excessive and unexpected costs. Further, State officials told GAO that it cannot assess the sufficiency and appropriateness of participants' cultural opportunities outside the workplace because the 2012 requirement lacks detailed criteria. As a result, State cannot be assured that SWT participants' experiences of U.S. culture align with its public diplomacy goals. State should establish mechanisms to ensure that sponsors submit complete and consistent lists of participant fees and that this information is made publicly available. State should also provide detailed criteria for assessing the sufficiency and appropriateness of participants' cultural opportunities. State agreed with GAO's recommendations.
FAA is the largest operating administration in DOT with almost 46,000 employees and a budget of $15.9 billion in fiscal year 2014. FAA carries out its mission—to provide the safest, most efficient airspace system in the world—through four lines of business and 10 staff offices (offices). Headquartered in Washington, D.C., with offices across the United States and an extensive global reach, FAA operates and maintains this system, known as the National Airspace System, and oversees the safety of aircraft and operators. Concurrent with the day-to-day operation of this system, FAA is also working to transform the nation’s ground-based radar air-traffic control system to an air-traffic management system using satellite-based navigation and other advanced technology. This transformation is referred to as the Next Generation Air Transportation System (NextGen). Among other duties, FAA regulates and encourages the U.S. commercial space transportation industry by licensing all commercial launches and reentries by U.S. citizens or companies that take place in the United States and overseas. FAA also administers programs related to airport safety and inspections, and standards for airport design, construction, and operation. As mentioned above, Section 812 of the FAA Modernization and Reform Act, enacted in February 2012, mandated that FAA identify and undertake actions necessary to streamline and reform the agency. The mandate set out timelines for FAA to conduct this work and report to Congress. Specifically, FAA was to undertake its review no later than 60 days after enactment of the Act and undertake actions to address its findings no later than 120 days after enactment. Finally, FAA was to submit a report to Congress on the actions undertaken no later than 150 days after enactment. 
FAA’s Assistant Administrator for Finance and Management serves as the agency official for process change management and provided leadership for FAA’s Section 812 effort through the Office of Finance and Management (AFN). In April 2012, AFN held an FAA-wide kickoff meeting to discuss the Section 812 requirements. Individual offices within FAA designated a representative to work with AFN to identify initiatives. According to AFN, FAA had a number of major efficiency improvements already under way in early 2012, so AFN asked offices to identify initiatives that reflected ongoing work as well as any additional opportunities for improvement and reform. Through this process, FAA identified 36 initiatives across its offices. Figure 1 shows the number of initiatives that each office is leading. After AFN and other FAA offices identified these 36 initiatives, a point of contact was identified for each initiative. Each point of contact was either a program manager for an initiative or a selected person within an office with access to information on the initiative. AFN collected information from the points of contact on the (1) problem statement, (2) proposed solution, (3) expected benefits, and (4) status of each initiative to create FAA’s January 2013 report to Congress. The heads of each responsible office validated the initiative information that the points of contact supplied to AFN. Beyond the Section 812 mandate, Executive Orders and efforts by the Office of Management and Budget (OMB), DOT, and FAA have encouraged process improvements and efficiency initiatives within FAA. For example: In February 2011, according to FAA officials, FAA officially launched its Foundation for Success initiative, which aims to transform certain governance, shared services, human capital, and NextGen activities to improve the management of FAA functions. 
According to FAA, this initiative provides a more efficient organizational and management structure for ensuring the timely, cost-effective delivery of NextGen. In November 2011, Executive Order 13589 on promoting efficient spending in the federal government required each executive-branch agency to establish a plan for reducing the combined costs associated with certain functions, such as travel, information technology, printing, and agency fleets. In December 2012, AFN created the Community of Practice for Process Improvement to support FAA process improvements. This community of practice is designed to be a collaborative environment for subject matter experts to discuss ideas and share best practices related to agency improvements and efficiencies. The benchmarks—that is, the amount of work or milestones—that FAA used to determine when each initiative was considered “implemented” differ. The process to determine the status of the initiatives was a decentralized one in which the offices responsible for leading the individual initiatives determined the status (i.e., either in-progress or implemented) when AFN requested it from the points of contact. The heads of each contributing office subsequently validated the information on status, and then AFN accepted and reported that status. Since each office independently determined the status of its initiative with limited direction from AFN, internal stakeholders and Congress do not have a clear, overarching picture of the status of the initiatives, including the work that FAA undertook to carry them out and how the actions addressed the mandate. As of January 2015, FAA considered 33 of the 36 initiatives implemented. However, the benchmarks FAA offices used to determine status varied. For example, one initiative was to have the Office of Human Resources audit the leadership training offered throughout the agency and identify and work to reduce redundancies to obtain efficiencies and cost savings. 
FAA officials said that the team leading this work considered the initiative “implemented” after a Learning Professional Guiding Coalition was created, reviewed existing courses, and completed a road map to develop a centralized series of leadership courses. According to FAA officials, one leadership course has been deployed and work continues to develop 32 leadership courses. Another initiative was to have the Office of Airports develop standard operating procedures for field operations to gain efficiencies from adopting best practices and ensuring regulations are followed. This office will consider the initiative “implemented” after it creates the 24 planned procedures and these procedures are in use by staff to issue grants, review documents, and complete other processes. See appendix I for information on the benchmarks FAA reported using to determine the status of each initiative. In addition to this variation in how offices determined the status of an initiative, many of the 36 initiatives started prior to the February 2012 passage of Section 812. Specifically, FAA started 33 of the 36 initiatives before passage of the mandate, according to FAA officials and documents. For example, 17 of the 36 initiatives were driven by or related to Foundation for Success, the Administrator’s examination of the agency’s organizational structure that officially began in 2011 to improve delivery of agency-wide services, such as information technology and budgeting, through a single, shared-services organization. Also, the Office of Aviation Safety initiative to close the London international field office began in February 2011. By contrast, FAA officials said that the Joint Resource Council initiative to review FAA’s acquisitions and investment strategy to optimize funding capital investments began in May 2012 after enactment of the FAA Modernization and Reform Act. 
As stated, FAA was to begin its review no later than 60 days after enactment of the Act and undertake actions to address its findings no later than 120 days after enactment. To meet these deadlines, AFN sought to identify improvements that FAA had under way or had already completed, as well as additional opportunities for improvement that aligned with Section 812, according to FAA documents. AFN officials explained that FAA had already embraced a culture of continuous improvement and that the agency had ongoing efforts that were directly in line with the objectives Congress outlined in Section 812. By the time FAA reported to Congress in January 2013, the agency categorized 15 of the 36 initiatives as complete. In addition, FAA officials leading 30 of the 36 initiatives told us that further or continuous action would be needed to realize benefits. We categorized FAA officials’ descriptions of the continuous action being taken for each initiative to realize expected benefits, even after an initiative is considered “implemented.” Table 1 describes the types of continuous actions completed or planned for the initiatives, and appendix I provides further information on the type of continuous action for each initiative. The types of continuous action ranged from being directly related to realizing an initiative to helping ensure an initiative remained in place and achieved expected benefits. The following are examples of these types: Primary actions—For an Office of Human Resources initiative to improve customer service, the office considered the initiative “implemented” after it developed a draft agreement to establish the range of Human Resources’ services to be offered and performance targets for these services. After this determination of implementation, the office took further actions, including getting senior leadership’s approval of the agreement and using the agreement with the three offices Human Resources considered to be its major customers. 
Secondary/related actions—The FAA Academy led an initiative to conduct a pilot program for the use of iPads for technician and pilot training. After a year, the pilot program was completed, and FAA officials considered the initiative “implemented.” FAA officials said that the success of the initiative led the Academy to expand the use of tablet devices in classrooms and other areas where the devices could improve quality or reduce cost. Monitoring actions—One Air Traffic Organization initiative sought to align safety and technical training into a single office. Officials leading this initiative said that the Air Traffic Organization created the new, single office but would continue to take steps to improve the new office’s efficiency as needed, such as eliminating any duplicative positions. While FAA has made some progress in implementing its streamlining and reform initiatives, our past work has highlighted issues FAA has had in addressing a set of recommendations and fully executing changes related to a few of these 36 streamlining and reform initiatives. In July 2013, we reported on DOT’s progress in addressing 10 recommendations made to DOT and FAA by the Future of Aviation Advisory Committee to promote future success of the aviation industry. We found that DOT and FAA officials said they had addressed 7 of the 10 recommendations, but that a majority of advisory committee members believed only 1 recommendation had been addressed. Advisory committee members noted that some recommendations may not have been fully addressed since they were linked to ongoing efforts that DOT had identified. We have also previously reviewed specific initiatives. For example, one Office of Aviation Safety initiative was to establish an Unmanned Aircraft Systems Integration Office, which FAA created in January 2013. 
In February 2014, we testified that though the office had been officially created and had over 30 full-time employees, it lacked an operations budget and had not finalized agreements related to the creation of the office. As of November 2014, FAA officials told us that the office had increased to 43 full-time employees and had been allocated operations and facilities and equipment funding. There are a number of key practices that can help agencies successfully carry out organizational transformations and improve the efficiency, effectiveness, and accountability of such efforts. The four selected key practices we used to evaluate FAA’s efforts for each initiative are consistently found at the center of successful transformations. These key practices are described in table 2. We identified these key practices based on our previous work on organizational transformations—both in the public and private sectors—and our work on implementing a results-oriented approach to agency management. We assessed FAA’s efforts for all 36 initiatives against two key practices—establish a communication strategy and adopt leading practices for results-oriented strategic planning and reporting. For the five initiatives that FAA classified as “in-progress,” we assessed FAA’s efforts against two additional key practices—dedicate an implementation team and set implementation goals and a timeline. FAA’s actions to carry out its initiatives were generally consistent with our selected key practices for organizational transformations; however, FAA’s actions were less consistent with the key practice to adopt leading practices for results-oriented reporting, as shown in figure 2. Appendix II provides more detail on the methodology we used to assess FAA’s actions, and appendix III shows the extent to which each initiative was consistent with the selected key practices we identified. 
FAA’s actions were consistent with the key practice of dedicating an implementation team to manage the transformation process for the five in-progress initiatives we evaluated. Dedicating a strong and stable implementation team that will be responsible for the transformation’s day-to-day management is important for ensuring that it receives the focused, full-time attention needed to be sustained and successful. Initiatives that were consistent with this key practice identified an implementation team or contact, selected experienced team members, and established networks to support the implementation. For example, the Office of Airports’ initiative to standardize its field office structure and balance its field workload was consistent with this key practice. Specifically, two senior Airports officials are leading this initiative, one located in Washington, D.C., and one located in a regional office, and all five of the regions in which FAA is changing the structure and workload formed working groups to develop region-specific implementation plans and schedules. FAA’s actions were consistent with the key practice of setting implementation goals and a timeline for 4 of the 5 in-progress initiatives but were inconsistent with the key practice for the remaining initiative. A transformation is a substantial commitment that could take years before it is completed and therefore must be carefully and closely managed, as we stated in our previous work on organizational transformations. Initiatives that were consistent with this key practice established implementation goals and timelines and developed plans for assessing and mitigating risk. For example, AFN’s initiative to modernize FAA’s records management system was consistent with this key practice. The team managing the initiative used short- and long-term timelines, weekly status reports, and a work schedule to set goals and a timeline for activities. 
However, Commercial Space Transportation’s initiative to move some inspectors and engineering staff to field office locations was inconsistent with the key practice. Officials leading this initiative did not provide an implementation plan, schedule, or other supporting documentation to demonstrate that they developed implementation goals, timelines, or plans to address risks. FAA’s actions were consistent with the key practice of establishing a communication strategy for 30 of the 36 initiatives and partially consistent for 6 of the 36 initiatives. Creating an effective, ongoing communication strategy is essential for executing a transformation, and the organization must develop a comprehensive communication strategy that reaches out to employees and seeks to engage them in the transformation. Initiatives that were consistent with this key practice had officials leading the effort who communicated early and often to build trust, encouraged two-way communication, and provided information to meet the specific needs of employees. For example, AFN’s initiative to consolidate strategic sourcing and related strategic programs into a new office was consistent with this key practice. AFN officials communicated to employees through briefings at the onset of the initiative and used regular newsletters to share information with affected employees during the transition. In addition, employees were able to provide feedback to their leadership through email, a hotline, and a survey, as well as on “IdeaHub”—a DOT-administered internal website where employees can propose solutions or ideas regarding existing challenges. For initiatives that were partially consistent with this key practice, FAA demonstrated some but not all implementation steps for the key practice. For example, the Joint Resource Council’s initiative to review FAA’s acquisitions and investment strategy to optimize funding capital investments was only partially consistent with the key practice. 
Although the Joint Resource Council communicated information on the initiative to employees through internal websites and informational meetings, the documents we reviewed showed limited two-way communication to elicit feedback from employees on implementing this initiative. FAA’s actions were consistent with the key practice of adopting leading practices for results-oriented strategic planning and reporting for 21 of the 36 initiatives, partially consistent for 12 of the 36 initiatives, and inconsistent for 3 of the 36 initiatives. Initiatives that were consistent with this key practice established a basis for comparing results and used or were planning to use performance measures to assess results. Performance measures should show progress toward achieving an intended level of performance or results. Additionally, meaningful performance measures should be limited to a vital few and cover multiple government priorities such as quality, timeliness, cost of service, and other results. For many initiatives, FAA’s actions were consistent with this key practice. For instance, for the Office of Aviation Safety’s initiative to close its London international field office, FAA reported a cost savings of $2.5 million through fiscal year 2015. The cost savings from the office closure and transfer of responsibilities to the Frankfurt international field office resulted from, among other things, reduced staffing, savings in office rent, and savings in rent payments for personnel in Frankfurt. Another initiative for which FAA’s actions were consistent with this key practice was AFN’s initiative to centralize FAA’s acquisition functions and identify areas for process improvements to more efficiently distribute work and standardize processes. 
For this initiative, AFN officials had tracked a number of acquisition-related metrics prior to the consolidation of acquisition functions, which allowed them to examine trends in these metrics following the consolidation. For example, one metric FAA tracked was the number of certified contracting staff, which aligns with the initiative’s expected benefits to standardize processes and to offer expanded career paths for contracting professionals. Between September 2012 and September 2013, the period during which AFN said it completed this consolidation, the number of certified contracting staff increased from 143 to 191. In addition to quantifiable metrics such as this, AFN also tracked qualitative measures for expected benefits, such as clarifying authorities and responsibilities and sharing best practices and lessons learned. However, not all initiatives were fully consistent with the implementation steps for the key practice. For example, the Office of Airports’ initiative to develop standard operating procedures to standardize its regional processes, such as grant reviews, was partially consistent with the key practice. Airports officials stated that performance measures had not been developed to assess the expected benefit of this initiative—to gain necessary efficiencies. Officials further stated that no baseline information exists that would allow for a valid comparison of any change in overall efficiency. Officials stated that they intend to develop performance measures of efficiency for the initiative once they have developed and implemented all standard operating procedures. Until those performance measures are developed, officials said they will only measure the degree to which Airports employees use the new standard operating procedures. In addition, three initiatives were inconsistent with the implementation steps for the key practice. 
For example, the Office of NextGen’s initiative to incorporate process improvements—termed “Ideas 2 In-Service” (I2I)—into its Acquisitions Management System was inconsistent with the key practice. FAA stated in its Section 812 report that this initiative would increase accountability and enable FAA to streamline the management of NextGen programs and activities through a single entry point for ideas to change the National Airspace System. When discussing the initiative to incorporate I2I into the Acquisitions Management System, NextGen officials stated that no performance measures currently exist to assess whether the Office of NextGen has achieved increased accountability or streamlining, nor is there a plan to develop measures to assess the performance of the initiative now or in the future. AFN has not effectively encouraged or coordinated performance measurement across the offices leading the streamlining and reform initiatives. As a result, FAA and Congress may have limited information on the extent to which FAA achieved the intended benefits outlined in the Section 812 mandate. As stated previously, FAA used a decentralized approach to respond to the Section 812 mandate. According to FAA officials, offices leading the initiatives were responsible for identifying initiatives and associated expected benefits. Although AFN encouraged offices to describe expected benefits and specific metrics when initially collecting information in April 2012 for FAA’s report to Congress, AFN did not explicitly communicate that offices should measure and track performance as initiatives were carried out and completed. Further, AFN provided limited guidance and oversight to offices on how to determine expected benefits, establish performance measures, and then track whether they were achieved. As a result of this limited coordination on measuring results, FAA offices reported varied types of expected benefits across the 36 initiatives. 
Specifically, offices identified a range of quantifiable and qualitative expected benefits and reported the same types of benefits for only a few initiatives, even when initiatives had similar goals. For example, AFN reported a quantifiable benefit—cost savings—for an initiative that consolidated strategic sourcing and other strategic initiatives into a new organization. However, for a similar Air Traffic Organization initiative that consolidated oversight for major system acquisitions into a new office, FAA reported qualitative benefits, including a stronger acquisitions community and defined program-management career paths. Given the range of expected benefits, performance measures for FAA’s offices also vary across the 36 initiatives. Varied performance measures may allow FAA to better capture the unique benefits for individual initiatives, such as fleet petroleum reduction. However, a limited focus by AFN on communicating the importance of measuring specific results, including those such as cost savings that may be applicable to multiple efficiency initiatives, hinders FAA from tracking and reporting on the overall benefit of the Section 812 effort. Offices are using or plan to use a variety of performance measures, including quantifiable and qualitative measures, to assess the different expected benefits, according to FAA officials. For example, the Policy, International Affairs, and Environment office’s initiative to facilitate an agency-wide sustainability program is tracking nine quantifiable performance measures, including water efficiency and alternative fuel use, on a quarterly basis against baseline information. In another instance, AFN’s initiative to streamline and improve its executive-level committees used qualitative measures to assess the effectiveness of changes to executive committees. 
Specifically, officials surveyed executives before and after changes were made to committees to determine the extent to which the initiative achieved its expected benefits, such as improved cross-organizational decision-making. Further, according to one of the current points of contact for the initiative to create a project management office to consolidate oversight of major Air Traffic Organization acquisitions, one measure of the benefits from this initiative is that the organization has been able to support an increasing number of programs and stakeholders without increasing its workforce. We have found in past work that FAA could improve its efforts to measure the performance of large-scale program implementation efforts, improvement initiatives, and certain oversight programs, and FAA is taking actions to address our related recommendations. For example, we found in September 2012 that FAA did not have performance measures to assess whether its new safety management system approach was improving safety, and we recommended that FAA identify and collect data on performance measures to assess whether the new approach meets its goals and objectives. FAA expects to have tools and processes in place to evaluate the safety management system’s performance by April 2015. In addition, in July 2014, we found that FAA did not develop performance metrics to measure the individual or collective outcomes of a number of its aviation certification and approval process-improvement initiatives, and we again recommended that it develop and track measurable performance goals, as we had first recommended in 2010 when we initially identified the need for performance measures in this area. FAA officials responded that they plan to develop these measures over time in three phases and will specifically develop measures to evaluate each initiative’s outcomes. 
Performance information is needed for federal programs and activities to help inform decisions about how to address fragmentation, overlap, or duplication and is critical for achieving results and maximizing the return on federal funds, as we found in April 2014. In our previous work, we have found that federal agencies engaging in large projects, such as those FAA is currently undertaking, should establish activities to monitor performance measures and compare actual performance to expected benefits throughout the organization. Moreover, for federal agency consolidation efforts, we have found that agencies should have implementation plans that include measures that show an organization’s progress toward the level of performance, such as quality, timeliness, cost of service, or customer service, that the consolidation was intended to achieve. By not further coordinating with FAA offices on the use of objective and balanced measures of efficiency and other improvements across initiatives, AFN and FAA overall are missing an opportunity to more consistently assess and aggregate the benefits from FAA’s streamlining and reform initiatives. To help produce an objective assessment of benefits, performance measures should typically include a quantifiable, measurable value to the greatest extent possible. Quantifiable measures can allow for a more useful assessment of benefits as these measures apply numerical targets or other measurable values to such benefits, providing a more objective comparison of benefits across initiatives and time periods. For example, for an initiative to centralize acquisition functions into a new office, FAA officials measured the percentage of employees with contract specialist certifications, a metric that demonstrated the organization’s progress toward developing a qualified workforce. 
Several measures FAA officials identified, such as improved communication, do not explicitly allow FAA to measure efficiency or other outcomes. Further, overemphasizing certain aspects of performance, such as improving timeliness, could result in deterioration in other aspects of performance, such as quality. By developing a balanced suite of measures, agencies can better ensure that they cover their various priorities while maintaining quality. Without a more coordinated effort to encourage offices to track performance measures that can be aggregated across multiple initiatives, FAA, Congress, and other stakeholders cannot have confidence that the agency’s efforts met or will meet the intent of Section 812 to streamline and reform the agency. AFN recognized this need in its description of one initiative to develop agreements to define the services to be offered by AFN to other FAA offices, such as information technology and acquisition functions; specifically, AFN stated that the absence of common performance metrics for these functions makes it difficult to determine the success or failure of efforts undertaken to consolidate these services and thus increase operational efficiency. FAA, through its Community of Practice for Process Improvement, is creating a database to track information on its process improvement efforts. FAA officials said this database will initially contain information only for the Section 812 initiatives but will eventually become a broader database on process improvement activities across the agency. According to FAA officials, they have not yet decided on the full range of information that the database will capture. FAA officials said that the database will initially contain, for each of the initiatives, only the four items in the Section 812 report—problem statement, proposed solution, expected benefits, and status—though the content could be expanded beyond these items. 
Further, FAA has not determined whether information in the database will ultimately provide a basis for measuring the overall or net benefit of the Section 812 response. Lastly, Section 812 required FAA to submit a report to Congress on the actions taken to streamline and reform the agency but did not require that FAA track or report to Congress on the results of these actions. As noted above, performance information for federal programs and activities is critical for achieving results and maximizing the return on federal funds. If Congress directs FAA to undertake a similar review to streamline and reform the agency in the next authorization of FAA, Congress could help ensure that FAA provides information on any realized efficiencies and improvements by requiring tracking and reporting. Without such information on results, Congress may have difficulty fully monitoring FAA’s efforts to make the agency more efficient and effective. Moreover, such a requirement would better position FAA to take steps to assess the overall results of its efforts. For example, FAA and its offices, if asked to undertake and report on the results of a streamlining and reform review in the next authorization for FAA, could take necessary steps, such as collecting baseline information, to establish performance measures and a basis for comparing the results in line with key practices for organizational transformations. FAA responded to Section 812 of the FAA Modernization and Reform Act by working with each of its offices to identify and carry out initiatives to streamline and reform the agency. AFN, which led the agency’s response to the Section 812 mandate, collected and reported information on the 36 streamlining and reform initiatives but provided the offices leading the initiatives limited guidance and expectations on measuring performance. As a result, the offices leading the initiatives determined the status of the initiatives in different ways. 
Moreover, offices identified a wide variety of expected benefits across the initiatives and, where measures were in place, used varied performance measures to gauge whether benefits were achieved for an initiative. Given the diverse nature of the 36 initiatives, some variation in how offices determined status and measured benefits is expected. However, FAA’s decentralized process to identify and track the initiatives intensified the variation, making it difficult to discern FAA’s progress in making reforms and to measure the overall impact of the initiatives. Without better performance measures, FAA lacks information to help it improve the performance of the initiatives and make decisions on issues targeted by the mandate, such as duplication and overlap. In addition, FAA has a limited ability to hold initiative leaders accountable for fully implementing the initiatives and for achieving planned benefits. Further, Congress and FAA will not know the extent to which the agency’s efforts met the aims of Section 812—including making the agency more efficient—without the use of some common performance measures that FAA can use to more easily aggregate benefits and assess results across multiple initiatives. While AFN provided limited guidance that likely contributed to the lack of common, consistent performance measures across initiatives, FAA has already responded to the Section 812 mandate. However, as many of the initiatives involve continuous action to realize benefits, collecting information on the results of these initiatives through the planned database can help FAA aggregate and report the results of the Section 812 initiatives. Moreover, by creating a mechanism to collect and manage such information in its planned database, FAA will be better positioned to measure the results of any future improvement and efficiency initiatives. 
Further, key practices for organizational transformations and GAO’s work on streamlining government highlight the importance of using performance measures to show progress toward achieving desired results and outcomes. If Congress were to require FAA to report on actual results of a future streamlining and reform mandate, Congress would obtain information to judge whether FAA’s efforts met its intent and produced actual benefits, information that would also assist with oversight of the agency. By setting such a requirement and expectation, coupled with action by FAA to collect performance information in its planned database for ongoing and future improvement initiatives, Congress would enable FAA to better focus on measuring the results of any future mandated streamlining and reform efforts. If, in the next authorization for FAA, Congress chooses to mandate that FAA take actions to streamline and reform the agency, Congress may wish to consider requiring FAA to (1) track measures of and (2) report to Congress on the actual results of such efforts. To better enable FAA to track, aggregate, and report on the results of its streamlining and reform initiatives, we recommend that the Secretary of Transportation direct FAA to develop a mechanism to capture the results of its efficiency initiatives in its planned database for process improvements. Measures of results might include, for example, cost savings, timeliness, or customer service metrics, which may be common to several types of process improvement efforts and therefore facilitate aggregation across improvements. We provided a draft of this report to DOT for review and comment. In its written comments, reproduced in appendix IV, DOT concurred with the recommendation. DOT also provided technical comments, which were incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of Transportation and the appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Table 3 provides a list of the Federal Aviation Administration’s (FAA) 36 streamlining and reform initiatives that the agency identified in response to Section 812 of the FAA Modernization and Reform Act of 2012. As of January 2015, FAA considered 3 initiatives in-progress (shown in italics in table 3) and 33 initiatives implemented. The table provides information on the benchmark FAA used to determine that an initiative was “implemented” for the purposes of Section 812. The table also summarizes FAA officials’ descriptions of the continuous action needed, if any, to realize expected benefits for each initiative, even after it is considered “implemented.” We categorized the continuous actions into the following four groups: primary actions that are directly related to realizing the initiative; secondary or related actions that enhance or expand the initiative; monitoring actions that are conducted in order to maintain an initiative; and none. This report examines FAA’s actions to respond to the Section 812 mandate and efforts to implement 36 initiatives to streamline and reform the agency. FAA identified these 36 initiatives in response to Section 812 of the FAA Modernization and Reform Act of 2012. 
In particular, this report provides information on (1) how FAA determined the status of the streamlining and reform initiatives that the agency reported on in response to Section 812 and (2) the extent to which FAA’s efforts to carry out these initiatives were consistent with selected key practices for organizational transformations. To describe how FAA determined the status of the 36 initiatives, we examined FAA’s Section 812 report to Congress and other agency documents related to the streamlining and reform initiatives. We reviewed FAA officials’ descriptions of the continuous action being taken for each initiative to realize expected benefits, even after an initiative is considered “implemented,” and we categorized the action in the following four groups: primary actions that are directly related to realizing the initiative, secondary or related actions that enhance or expand the initiative, monitoring actions that are conducted in order to maintain an initiative, and none/no planned continuous action. According to FAA, the agency identified the 36 initiatives by documenting both ongoing and newly identified improvements that were in line with the Section 812 language. We did not assess the appropriateness of the initiatives FAA identified. In addition, we reviewed prior GAO reports on the Department of Transportation’s (DOT) and FAA’s actions to implement a set of recommendations and reports specific to topics covered by the 36 initiatives. We also interviewed FAA officials responsible for individual initiatives as well as for coordinating the agency’s Section 812 efforts to discuss the status of each initiative and any future plans or remaining actions. To examine the extent to which FAA’s efforts to carry out the 36 initiatives were consistent with selected key practices, we identified key practices applicable to the FAA initiatives cited in prior GAO work on organizational transformations. 
We searched past GAO publications for reports on project management or implementation and discussed possible sources for criteria with internal stakeholders. We identified several relevant reports and sources that examined organizational transformations and streamlining government, in particular, efficiency initiatives and proposals to consolidate infrastructure and management functions. Most of these reports drew on or included the key practices for mergers and organizational transformations outlined in our 2003 report. We selected the key practices for organizational transformations as criteria against which to assess FAA’s efforts since Section 812 directs FAA to review the agency and take necessary actions to reform the agency and since the key practices were used in past work to examine government streamlining efforts like efficiency initiatives and consolidations. However, since we reviewed 36 initiatives led by offices within FAA rather than a single, agency-wide initiative, we found that not all the key practices for organizational transformations were relevant. Given the scope and status of the initiatives, we identified four key practices that were applicable. We determined that all four of these key practices were applicable for evaluating the initiatives that FAA reported as “in-progress,” and two of them—establish a communication strategy and adopt leading practices for results-oriented strategic planning and reporting—were applicable for evaluating the initiatives that FAA reported as “implemented.” Table 4 lists the selected key practices, with the relevant implementation steps for each key practice. We systematically assessed the extent to which FAA’s efforts to carry out an initiative were consistent with the key practices. For each initiative, we (1) reviewed FAA documents—schedules, communications, and other planning documents—and (2) conducted semi-structured interviews with the FAA point(s) of contact. 
We developed a template to help us consistently analyze the collected information related to the implementation steps for each key practice (see table 4). For one key practice—adopt leading practices for results-oriented strategic planning and reporting—we supplemented the implementation step with requirements for performance plans outlined in An Evaluator’s Guide to Assessing Agency Annual Performance Plans. This guide, which was based in part on requirements for agency performance plans from the Government Performance and Results Act of 1993 (GPRA), identifies key issues and criteria to assess performance plans. Specifically, we used criteria from the guide on defining expected performance that aligned with the key practice and our past work on government streamlining. We determined whether FAA’s efforts to carry out each initiative were consistent, partially consistent, or inconsistent with each applicable key practice, or whether there was not enough information to make an assessment. We used the following general decision rules to make our assessment: consistent, if FAA had instituted the practice; partially consistent, if FAA had shown some progress toward instituting, or started but not completed, the practice; inconsistent, if FAA had made minimal or no progress toward instituting the practice; and not enough information to tell, if, for example, the initiative was implemented several years ago, and/or documentation or testimonial evidence does not exist. We used the FAA documentation and interviews for two initiatives to do trial assessments using a draft version of the template; we then made revisions to the template to clarify the information to collect and decision rules before carrying out assessments for all initiatives. Two analysts reviewed the documentation to make an assessment for each initiative. 
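These general decision rules amount to an ordered classification. The sketch below is a hypothetical encoding for illustration only (the function name and boolean inputs are ours; GAO analysts applied these judgments qualitatively from documents and interviews):

```python
def rate_consistency(evidence_available, instituted, some_progress):
    """Hypothetical encoding of the report's general decision rules."""
    if not evidence_available:
        return "not enough information to tell"
    if instituted:
        return "consistent"
    if some_progress:
        return "partially consistent"
    return "inconsistent"

# Example: an initiative with evidence showing partial progress.
print(rate_consistency(True, False, True))  # partially consistent
```

The ordering matters: the no-evidence check comes first, mirroring the rule that an assessment is only made when documentation or testimonial evidence exists.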
After we completed our assessments for all 36 initiatives, we identified themes or trends in FAA’s implementation across all initiatives, including the extent to which FAA’s efforts were consistent with the key practices. While we reviewed FAA’s measures or plans to measure the benefits of individual initiatives as part of our effort to assess FAA’s efforts to adopt leading practices for results-oriented strategic planning and reporting, we did not validate FAA’s estimates of benefits against independent measures. Thus, we did not report on the actual or achieved benefits of the initiatives. In addition, we interviewed FAA officials responsible for coordinating the agency’s Section 812 efforts to discuss the guidance given to offices and points of contact implementing individual initiatives and the information collected and tracked for all the streamlining and reforming initiatives. Table 5 provides a list of FAA’s 36 streamlining and reform initiatives, by FAA office, which the agency identified in response to Section 812 of the FAA Modernization and Reform Act of 2012. The table also provides our assessment of the extent to which FAA’s efforts to implement each initiative were consistent with four selected key practices for organizational transformations. Appendix II contains information on the scope and methodology of this analysis. Gerald L. Dillingham, Ph.D., (202) 512-2834 or [email protected]. In addition to the contact person named above, Catherine Colwell, Assistant Director; Melissa Bodeau; Elizabeth Curda; Kevin Egan; Aracely Galvan; Dave Hinchman; Bert Japikse; Heather Krause; Brandon Kruse; Joanie Lofgren; SaraAnn Moessbauer; Josh Ormond; Sarah E. Veale; and William T. Woods made key contributions to this report. Aviation Manufacturing: Status of FAA’s Efforts to Improve Certification and Regulatory Consistency. GAO-14-829T. Washington, D.C.: July 31, 2014. 
FAA Reauthorization Act: Progress and Challenges Implementing Various Provisions of the 2012 Act. GAO-14-285T. Washington, D.C.: February 5, 2014. Managing For Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. GAO-13-518. Washington, D.C.: June 26, 2013. Strategic Sourcing: Leading Commercial Practices Can Help Federal Agencies Increase Savings When Acquiring Services. GAO-13-417. Washington, D.C.: April 15, 2013. NextGen Air Transportation System: FAA Has Made Some Progress in Midterm Implementation, but Ongoing Challenges Limit Expected Benefits. GAO-13-264. Washington, D.C.: April 8, 2013. Acquisition Workforce: DOT Lacks Data, Oversight, and Strategic Focus Needed to Address Significant Workforce Challenges. GAO-13-117. Washington, D.C.: January 23, 2013. Unmanned Aircraft Systems: Measuring Progress and Addressing Potential Privacy Concerns Would Facilitate Integration into the National Airspace System. GAO-12-981. Washington, D.C.: September 14, 2012. Streamlining Government: Questions to Consider When Evaluating Proposals to Consolidate Physical Infrastructure and Management Functions. GAO-12-542. Washington, D.C.: May 23, 2012. Streamlining Government: Key Practices from Select Efficiency Initiatives Should Be Shared Governmentwide. GAO-11-908. Washington, D.C.: September 30, 2011. NextGen Air Transportation System: FAA’s Metrics Can Be Used to Report on Status of Individual Programs, but Not of Overall NextGen Implementation or Outcomes. GAO-10-629. Washington, D.C.: July 27, 2010. Air Traffic Control: FAA Reports Progress in System Acquisitions, but Changes in Performance Measurement Could Improve Usefulness of Information. GAO-08-42. Washington, D.C.: December 18, 2007. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003.
As fiscal pressures facing the federal government continue, so too does the need for federal agencies to improve the efficiency and effectiveness of programs and activities. Section 812 of the FAA Modernization and Reform Act of 2012 mandated that FAA review its programs, offices, and organizations to, among other things, identify and address inefficient processes, wasteful practices, and duplication. In response, FAA identified 36 initiatives, including centralizing administrative functions and modernizing records management. GAO was asked to examine FAA's progress in streamlining and reforming the agency as Congress considers reauthorizing FAA in fiscal year 2015. GAO examined how FAA determined the status of initiatives and the extent to which its efforts to implement initiatives were consistent with selected key practices for organizational transformations. Since each initiative sought to streamline or reform FAA, GAO identified four key practices for organizational transformations as applicable to these initiatives. GAO assessed FAA's efforts by comparing FAA documents to the selected key practices and interviewing agency officials leading each initiative. The Federal Aviation Administration (FAA) used a decentralized process to track the status of streamlining and reform initiatives identified in response to the Section 812 mandate in the FAA Modernization and Reform Act of 2012. FAA's actions to implement the initiatives were mostly consistent with three key practices for organizational transformations but were less consistent with the key practice of adopting leading practices for results-oriented reporting, which includes using performance measures to show progress toward achieving results. Without information on the results of the initiatives, FAA and Congress cannot have confidence that FAA's efforts streamlined and reformed the agency. 
Decentralized process: The Office of Finance and Management (AFN)—which led FAA's response to the Section 812 mandate—used a decentralized process to track initiatives. Individual offices responsible for the initiatives determined their status using varied definitions for “implemented.” For example, FAA considered an initiative to centralize leadership training “implemented” after officials created a plan for developing a series of courses, while FAA will consider an ongoing initiative to create standard procedures for the Office of Airports “implemented” after officials develop and deploy 24 new, standard procedures. As of January 2015, FAA considered 33 of the 36 initiatives implemented. FAA's actions generally consistent with three key practices: GAO found that FAA's actions to implement the initiatives were mostly consistent with three key practices for organizational transformations—dedicate an implementation team, set implementation goals and a timeline, and establish a communication strategy. For example, FAA's actions were consistent with establishing a communication strategy for 30 of 36 initiatives and partially consistent for 6 of 36 initiatives. FAA's actions less consistent with key practice regarding results-oriented reporting: GAO found that FAA's actions were inconsistent with this key practice for 3 of 36 initiatives, partially consistent for 12 of the 36, and consistent for 21 of 36. For example, for an initiative that was partially consistent, officials said that until they develop performance measures for the effect of the initiative, they would measure only whether staff use the new procedures. FAA's limited efforts to measure performance or outcomes of the initiatives hinder its ability to assess the initiatives' results. AFN has neither required offices to track performance measures nor made a specific effort to track any common measures across initiatives. As a result, offices used a range of performance measures to report results. 
GAO has previously found that information on results is critical for improving program performance and that agencies should have measures for the intended results of streamlining efforts—like cost savings and customer service—to help decision makers improve program performance. Actions to implement most of the 36 initiatives are continuing, and FAA plans to create a database to track these initiatives. Moving forward, FAA also plans to use the database to track other process improvement activities. To date, FAA has not decided what information to capture in the database but initially plans to include only descriptive information on each initiative. Lastly, Section 812 did not require FAA to track or report to Congress on the initiatives' results. By requiring such tracking and reporting, Congress could help ensure that FAA provides information on the results of a reform mandate, if required of FAA in the next authorization. As Congress considers FAA reauthorization, GAO suggests that Congress consider requiring FAA to track and report on the actual results of future agency-reform efforts. GAO recommends that FAA take steps to capture the results of improvement initiatives in its planned database for process improvements. The Department of Transportation agreed with the recommendation.
In 2007, the United States produced an average of 8.5 million barrels of petroleum per day, or about 10 percent of the global average production of 84.4 million barrels per day. As a percentage of total world consumption, the United States was the largest consumer of crude oil and petroleum products in 2007, with an average consumption of 20.7 million barrels per day. According to EIA statistics, imports provide the United States with about 60 percent of its overall petroleum needs. Of the petroleum refined in the United States, approximately 46 percent is used for gasoline, primarily for use in the transportation sector. Second to gasoline, distillate fuel oil (including diesel)––which is used for a variety of heating, energy, and transportation purposes––accounts for 21 percent of petroleum refined in the United States, followed by kerosene-type jet fuel at 9 percent. The remaining 24 percent of crude is used to make other products, such as heavy fuel oil or asphalt. Firms operating in the petroleum industry range widely, from large corporations that operate in multiple countries and across various segments of the industry, to small firms that operate exclusively in the United States or in only one segment of the industry. Companies operating in the upstream segment––which includes the exploration and production of crude oil––include fully vertically integrated companies as well as independent producers. Fully vertically integrated companies are generally large, multibillion-dollar publicly traded companies, such as Exxon Mobil. By contrast, independent producers range from extremely small, privately owned operations to multibillion-dollar publicly traded companies, such as Occidental. Companies operating in the midstream segment––which includes the transport of crude oil and refined petroleum products––include firms that manage pipelines, marine tankers and barges, railways, and trucks. 
Midstream companies also range widely in size and can include large, vertically integrated companies as well as smaller independent operators of pipelines or other modes of transportation. Pipelines are the most common, and considered the most efficient, mode of transporting crude oil and petroleum products in the United States from production points to refineries and from refineries to storage terminals. Nationwide, there are about 200,000 miles of pipeline across all 50 states, through which approximately 66 percent of petroleum products are transported. Companies operating in the downstream segment include firms that refine crude oil as well as firms that market refined petroleum products. Refining involves the transformation of crude oil into the various petroleum products, such as gasoline, distillate fuel oil, and jet fuel, as well as heavier products, such as asphalt. According to data from EIA, as of January 1, 2008, there were 150 operable refineries in the United States. In 2002, about 60 firms, including large, fully vertically integrated companies and independent firms, owned these refineries. For example, as of January 2007, ConocoPhillips owned 12 U.S. refineries and 19 refineries worldwide. Petroleum marketing involves purchasing refined petroleum products from refiners and selling them to wholesale and retail firms. There are different classes of wholesale gasoline purchasers in the United States, and the prices they pay depend, in part, on the type of relationship they have with the refiners. Given the nation’s dependence on gasoline and other petroleum products, competition among petroleum industry firms has long been considered of paramount importance to the economy. In 1890, Congress passed the Sherman Act to counter anticompetitive practices in several industries, including some of Standard Oil’s practices in the petroleum industry. In 1914, Congress expanded its antitrust authority by creating FTC and enacting the Clayton Act. 
As such, merger activity in all three segments of the industry and the potential for anticompetitive behavior through industry consolidation have long been the subject of interest on the part of many industry observers and government regulators. FTC is the federal antitrust agency that is responsible for reviewing proposed mergers in the petroleum industry, with the goal of maintaining industry competition. FTC reviews mergers of firms in the petroleum industry if their operations are likely to impact U.S. markets, and the agency enforces various antitrust laws. Although FTC says that it scrutinizes mergers in the petroleum industry more than in any other industry, FTC’s statutory authority to review proposed mergers in the petroleum industry is the same as in other industries. FTC has enforcement and administrative responsibilities under more than 60 laws, but uses 3 statutes to guide its review of all proposed mergers––the Clayton Act, the Federal Trade Commission Act, and the Hart-Scott-Rodino Act––as outlined in table 1. While the three statutes help direct FTC’s review of proposed mergers in all industries, Hart-Scott-Rodino provides the framework for the premerger review. Hart-Scott-Rodino requires all persons contemplating a merger valued at $50 million or more and meeting certain other conditions to formally notify FTC and DOJ. The act imposes a 15-day waiting period for cash tender offers and a 30-day waiting period for most other transactions to allow FTC and DOJ to review the proposed merger in an effort to predict its potential effect on competition. If the initial review does not indicate a need for further investigation, the merger can be completed. 
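The notification rule described above can be sketched as a simple check. This is an illustrative simplification using only the thresholds stated in this report (the function name is ours, and the actual act imposes additional conditions and periodically adjusted thresholds):

```python
def hsr_premerger_check(transaction_value_usd, cash_tender_offer=False):
    """Illustrative sketch of the Hart-Scott-Rodino thresholds described
    in the report; the real statute has additional conditions."""
    if transaction_value_usd < 50_000_000:
        return {"must_notify": False, "waiting_period_days": 0}
    return {
        "must_notify": True,
        # 15 days for cash tender offers, 30 days for most other deals.
        "waiting_period_days": 15 if cash_tender_offer else 30,
    }

# A $497 million merger (the period's average value) paid other than by
# cash tender offer would require notification and a 30-day wait.
print(hsr_premerger_check(497_000_000))
```

This framing also shows why, as noted later, mergers below $50 million can escape FTC review entirely: the first branch returns without triggering any waiting period.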
To ease compliance with Hart-Scott-Rodino, FTC and DOJ established a premerger notification program in 1978 that set a systematic process for FTC to follow in reviewing all proposed mergers and allows the agencies to avoid the difficulties and expense of challenging mergers that harm competition after they are completed. This gives them the ability to challenge proposed mergers before they are completed when remedial action would be most effective, if warranted. See figure 1 below for a summary of FTC’s merger review procedures. FTC staff and DOJ officials told us that they divided their merger review portfolio, and that FTC handles all of the petroleum industry merger review cases because it has more expertise in that area. FTC’s merger review process is conducted by staff in various bureaus and offices throughout the agency, but mainly by the Bureau of Economics and the Bureau of Competition. The agency also has a Merger Screening Committee composed of at least the Director of the Bureau of Competition, section heads of that bureau’s divisions, representatives from the Bureau of Economics, and other relevant FTC staff. The purpose of the group is to determine whether to recommend that the Chairman approve and issue a request for additional information and to decide other policy matters. In reviewing a proposed merger, FTC also defines the relevant geographic markets. For example, FTC staff told us that they generally consider crude producers to compete globally, refiners to compete regionally, and wholesale gasoline suppliers to compete at a more local level. FTC and DOJ merger guidelines define three broad categories of market concentration as measured by HHI: an unconcentrated market has an HHI of less than 1,000; a moderately concentrated market has an HHI between 1,000 and 1,800; and a highly concentrated market has an HHI over 1,800. More than 1,000 U.S. mergers occurred in the petroleum industry between 2000 and 2007. 
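The HHI is computed by summing the squares of each firm's percentage market share, so the guideline categories above can be applied directly to a list of shares. A minimal sketch in Python (the market shares are hypothetical, chosen only to illustrate the thresholds):

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (percent)."""
    return sum(s * s for s in shares_pct)

def concentration_category(h):
    # Thresholds from the FTC/DOJ merger guidelines cited above.
    if h < 1000:
        return "unconcentrated"
    if h <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical regional wholesale market with five suppliers.
shares = [30, 25, 20, 15, 10]   # percentages summing to 100
score = hhi(shares)             # 900 + 625 + 400 + 225 + 100 = 2250
print(score, concentration_category(score))
```

By contrast, a market served by ten equal-sized firms would score 10 × 10² = 1,000, at the lower edge of the moderately concentrated band, which is why a merger that combines large shares can move a market sharply up the scale.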
The largest number and greatest value of mergers occurred in the upstream segment, primarily due to increasingly challenging conditions for oil exploration, while midstream and downstream mergers were primarily driven by the desire to improve efficiencies and reduce costs. We also found in our analysis of the upstream crude oil production segment of the industry and the downstream refining and wholesale gasoline supply segments that, in most regions, petroleum industry market segments were moderately concentrated. Lacking data on the midstream segment, we were not able to determine concentration in that segment of the industry. Between January 2000 and May 2007, 1,088 U.S. mergers occurred in the petroleum industry. The number of mergers generally increased over this period, from 124 mergers in 2000 to 167 in 2006, as shown in figure 2. About 75 percent of these mergers were asset mergers, or mergers where one firm purchases only a portion of another firm’s assets, such as Tesoro’s purchase of 140 retail gasoline stations in California from USA Petroleum in early 2007. The remaining 25 percent were corporate mergers, or mergers where one firm generally acquires all of another firm’s stock and assets such that the two firms become one firm. For example, in 2002, Phillips Petroleum acquired all of Conoco’s stock, creating the new firm ConocoPhillips. Reported transaction values for U.S. petroleum mergers during this period ranged widely, from $10 million to over $10 billion. As shown in figure 3, the greatest number of mergers during this period were valued between $10 million and $49 million, and between $100 million and $499 million, accounting for 39 percent and 29 percent of merger activity, respectively. Overall, 61 percent of mergers were valued at more than $50 million, which is the threshold above which merging firms are required to notify FTC so that it can review them for potential anticompetitive effects. 
The average value for mergers during this period was $497 million, while the median value for mergers during this period was $72 million. Corporate mergers comprised the top 11 most valuable mergers, including 6 mergers valued at over $10 billion each. The largest merger was the 2001 corporate merger of Chevron and Texaco; it was valued at $45 billion. This merger and the other 5 corporate mergers that were valued at over $10 billion during this period are highlighted in table 2. The upstream segment of the industry––comprised of oil exploration and production endeavors—accounted for approximately 69 percent of the 1,088 mergers. The midstream segment of the industry––mainly comprised of firms that operate pipelines and other infrastructure used to transport oil and gas––accounted for about 13 percent. The downstream segment of the industry––comprised of firms that refine crude oil and market petroleum products––accounted for 18 percent. Figure 4 highlights this distribution across the segments. In the U.S. upstream petroleum segment, some trends were similar to those that we previously discussed for the industry overall, with the number of mergers over the period generally rising and asset mergers comprising approximately 75 percent of all mergers. Upstream mergers had the highest transaction values of the three segments, accounting for the six most valuable mergers highlighted in table 2 that exceeded $10 billion in value. Overall, the average value for upstream mergers was $539 million, while the median value was $67 million. A key reported driver of U.S. mergers in the upstream segment was the increasing challenge associated with exploring and producing oil in extreme physical environments. 
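The wide gaps between average and median merger values noted above indicate a right-skewed distribution: a handful of multibillion-dollar corporate deals pull the mean well above the typical transaction. A small illustration with hypothetical deal values (in millions of dollars):

```python
from statistics import mean, median

# Hypothetical deal values ($ millions): many modest asset deals plus
# a couple of very large corporate mergers, as in the period reviewed.
values = [20, 35, 50, 72, 90, 150, 400, 2_800, 45_000]

print(round(mean(values)))  # mean is dominated by the largest deals
print(median(values))       # median stays near the typical deal size
```

Here the single largest deal drives the mean to dozens of times the median, which is why the report cites both statistics when characterizing merger activity.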
Industry officials at oil companies reported that reserves that can be easily and economically produced are declining, and that remaining exploration opportunities are increasingly located in physically extreme environments, making the development of new petroleum resources more costly and technologically challenging. Extreme physical environments, such as offshore oil reserves in deep water, require costly capital investments in specialized drills, pipes, and platforms equipped to operate in deep marine environments; operating costs in these environments can be 3.0 to 4.5 times higher than costs for typical shallow water rigs. In addition, extreme physical environments can include “nonconventional” oil reserves, such as oil sands, that require the use of additional and expensive technologies—including additional mining and heating—to produce crude oil. Academics and industry officials reported that mergers better position oil companies to acquire capital and achieve the organizational efficiencies that help enable successful exploration and production in these environments. Another reported driver of U.S. mergers in the upstream segment was the increasing challenge associated with reliably accessing oil reserves worldwide. As national oil companies increasingly expand their exploration efforts and contend for access to reserves in third-party countries, researchers and industry representatives reported that national oil companies, operating on behalf of their home country, often have access to more capital, have fewer financial constraints, and have more bargaining power via political influence. In light of these reported negotiating advantages, companies reported that being large provides them with capital and influence with which to directly compete with the national oil companies. 
Representatives from oil companies also reported concerns about political uncertainties in regions where key oil reserves are located, because more than 60 percent of world oil reserves are in countries where relatively unstable political conditions could constrain oil exploration and production. For example, in 2007, ConocoPhillips abandoned a multibillion-dollar investment in Venezuela after a breakdown in negotiations with the government and the national oil company, resulting in a $4.5 billion loss for the firm. In light of these concerns, academic and industry representatives reported that large firms are better positioned to diversify their exploration interests across multiple countries or regions, thereby lessening the risk their interests face in any one country. Despite these rationales, it is uncertain whether mergers have yielded the desired results in the upstream segment. One group of academic researchers reported that large, international companies have not generally expanded their exploration efforts, since exploration spending by these companies has not increased above premerger levels and some have been unable to replace their reserve assets in recent years. These researchers noted in a report on oil companies that this may be a result of the decline in the number of accessible large oil fields that afford big companies a comparative advantage, due to the increased presence of national oil companies and the increasing restrictions on some oil assets worldwide. The report noted that smaller production companies have been able to replace their existing reserves in recent years, suggesting that large companies are not necessarily better positioned for increased exploration in the current market. Furthermore, according to industry publications, private capital is increasingly available, thereby challenging the notion that firms must be large to have access to capital for expensive exploration projects. 
As a result of these concerns, industry and academic experts noted that smaller participants in the upstream segment remain an effective and competitive force in developing new projects, raising questions about the viability of large oil mergers in the future. Given that the upstream market is a global market, we also briefly examined global upstream mergers from January 2000 through May 2007. Worldwide, there were 1,722 mergers in the upstream segment during this period, the geographic distribution of which is highlighted in figure 5. As shown in the figure, U.S. mergers comprised about 41 percent of total global merger activity in the upstream segment. Second to the United States, Canada had the highest number of upstream mergers, at 31 percent of total upstream merger activity. Taken together, this evidence highlights that upstream merger activity during this period was heavily concentrated in North America. According to industry reports and academic researchers, recent high levels of merger activity in Canada have been driven by strong growth in the production of crude oil from oil sands, previously considered too technically complicated and expensive, but of growing interest to oil companies given the high price of oil. This activity was also driven out of concern for reliable access to oil, since Canada is considered more politically stable than many other regions of the world with oil reserves. In the U.S. midstream petroleum segment, the number of asset mergers was slightly higher than for the industry overall, accounting for 81 percent of total U.S. midstream merger activity. Over the period, the number of midstream mergers varied somewhat, from a low in 2000 of 6 mergers, to a high in 2005 of 26 mergers (see fig. 6). The top reported transaction values for midstream mergers were the lowest of the three segments, with the most valuable midstream merger totaling $2.8 billion, and a total of eight midstream mergers that exceeded $1.0 billion (see table 3). 
Overall, the average midstream merger was valued at $252 million, while the median value was $92 million. At the subsegment level, merger activity was split fairly evenly between the pipelines and tankers/other transportation subsegments, with pipelines accounting for 47 percent of mergers and tankers/other transportation accounting for 53 percent. In the midstream segment, industry representatives reported that U.S. mergers have been driven in part by the desire to improve the overall financial performance of midstream operators. According to one industry report, developments in recent years have prompted a renewed focus on risk mitigation and portfolio management in the midstream segment, thereby prompting pipeline and other midstream operators to pursue merger activity. The industry report also noted that midstream merger activity has been further encouraged by the increased involvement of investment banks and the availability of private equity in such endeavors. Furthermore, a government report noted that reduced domestic production of oil has created excess capacity for many U.S. pipelines, which, according to one firm, has prompted pipeline operators to pursue mergers as a means to remain economically viable. In the U.S. downstream petroleum segment, trends generally followed those for mergers overall, with asset mergers comprising approximately 73 percent of all downstream U.S. mergers and the annual number of mergers rising from 27 in 2000 to 32 in 2006. Top transaction values for the downstream segment fell between those for the upstream and midstream segments, with the largest downstream merger, between the Phillips Petroleum Company and the Tosco Corp., valued at $9.8 billion. As shown in table 4, the top 6 downstream mergers each totaled over $5 billion in transaction value. 
Looking at downstream mergers by subsegment, the terminals/storage subsegment drove the most merger activity, totaling 37.5 percent of mergers during this period (see fig. 7). Second to terminals/storage, the refining subsegment totaled 21.5 percent of all the downstream mergers that we examined, followed by mergers in the gasoline service stations subsegment at 16.0 percent. In the downstream segment, industry officials reported that key drivers of U.S. mergers included the need to increase efficiencies and achieve cost savings in the petroleum refining and marketing segments. On the refining end, industry officials reported that mergers can help achieve operational efficiencies through the integration of refinery operations and infrastructure. For example, officials reported that a larger refinery system allows firms to use feedstocks and blending stocks across refineries, which can improve efficiencies at individual refineries. In addition, industry representatives reported that purchasing crude oil for multiple facilities can allow refiners to secure volume discounts that yield cost savings. On the marketing end, industry representatives reported that mergers can better position marketers for competition through economies of scale and improved efficiencies. According to one industry official, refiners prefer larger marketers because (1) they are usually a lower credit risk than their smaller counterparts and (2) it is more efficient to sell larger volumes of fuel through fewer entities, because transaction and administrative costs can be minimized. One marketer reported that, after mergers occurred, the larger refiners made it clear that they only wanted to deal with marketers that bought fuel in quantities above a certain minimum. Smaller marketers that were not able to meet these minimums found it difficult to compete, and many were subsequently purchased by other marketers. 
In addition, some marketer representatives with whom we spoke said that they operate on slim profit margins, as little as 1 cent per gallon, and the economies of scale that can be achieved via mergers help improve profitability. Despite the gains that mergers can provide in the downstream segment, as well as in the upstream and midstream segments, policy makers and industry officials reported that mergers can also allow companies to exercise market power and reduce competition in the industry. We found that the upstream market segment for crude oil production was unconcentrated and remained so between 2000 and 2006. We looked at all the sellers that produce crude oil worldwide because the price of crude oil is set in global markets. We calculated each firm’s relative market share of worldwide crude oil production and then calculated HHIs from 2000 to 2006. We found relatively unconcentrated HHIs (i.e., below 1,000 according to FTC’s merger guidelines) in this segment of the industry and that these numbers remained stable over time, despite the mergers that occurred in this segment (see fig. 8). In addition, we found that individual crude suppliers throughout the world have relatively low market share compared with other suppliers worldwide. Even a relatively large producer such as Saudi Arabia had only about 13 percent of global crude production in 2006, according to our analysis of Oil and Gas Journal data. However, the coordination among global crude producers that are members of the Organization of the Petroleum Exporting Countries cartel can contribute to their ability to exercise market power beyond what the market concentration figures would indicate. Although global crude oil markets appear to be unconcentrated, in some instances smaller, landlocked refineries, such as those in Oklahoma, rely heavily on only local crude producers. 
Under these circumstances, the crude supplier market would be more concentrated, and there could be more potential for the crude producers to raise prices. We heard from some industry experts and one small, independent refiner that it can sometimes be difficult to purchase crude oil under these circumstances because of the limited choice of suppliers. We found that between 2000 and 2007, in the downstream gasoline refining segment, market regions in the United States were stable and generally moderately concentrated. We analyzed concentration in what experts consider key market regions: Los Angeles, San Francisco, the Gulf Coast, New York Harbor (East Coast), Chicago, Tulsa (or the Mid-continent), and the Pacific Northwest. Although concentration was generally moderate in these regions between 2000 and 2007, the New York and San Francisco regions had concentrations above or near 1,800, which FTC considers highly concentrated (see fig. 9). Petroleum industry experts consider refinery market analysis particularly important because most U.S. refiners have minimal spare capacity, and the barriers to entry for new refiners are high. Between 2000 and 2007, the HHI for the New York Harbor region increased from 1,630 to 2,104, but because foreign and Gulf Coast refineries ship a significant amount of gasoline into the East Coast (around 60 percent of consumption), the high measure of concentration probably overstates the actual concentration for the market. The potential for market power is likely lower than the HHI would indicate because refiners from outside of this region have the ability to challenge potentially anticompetitive behavior from local refiners over longer periods of time by providing lower-priced gasoline. Calculating HHI with these potential competing refiners included would provide a more accurate representation of concentration levels in this region. 
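The dilution effect described above, in which supply shipped in from outside a region reduces measured concentration, can be illustrated with a simple HHI calculation. The function and concentration bands follow the FTC/DOJ guideline thresholds cited in this report; the market shares are hypothetical round numbers chosen for illustration, not actual supplier data.

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    expressed in percentage points (10,000 indicates a pure monopoly)."""
    return sum(s ** 2 for s in shares_percent)

def classify(index):
    # Concentration bands from the FTC/DOJ merger guidelines.
    if index < 1000:
        return "unconcentrated"
    if index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# A hypothetical region viewed two ways.  Counting only four local
# refiners, which split all regional sales among themselves:
local_only = [40, 30, 20, 10]

# Counting outside suppliers as well, where shipments into the region
# cover 60 percent of consumption (roughly the East Coast figure cited
# above), split here across six outside suppliers of 10 percent each:
with_outside = [16, 12, 8, 4] + [10] * 6

print(hhi(local_only), classify(hhi(local_only)))      # 3000 highly concentrated
print(hhi(with_outside), classify(hhi(with_outside)))  # 1080 moderately concentrated
```

Counting the outside suppliers drops the index from well above the 1,800 threshold into the moderate band, which is the sense in which a high local HHI, such as the New York Harbor figure, likely overstates actual concentration.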
Between 2000 and 2007, the HHI for the Chicago region went from 1,417 to 1,268, keeping it moderately concentrated throughout the period of our study. In addition, this region—which serves large parts of the Midwest, according to industry experts—also receives shipments of gasoline from the Gulf Coast via pipeline, and, according to our analysis of EIA data, shipments from outside of the region accounted for about 28 percent of the gasoline consumed in the Midwest region. This indicates that numerous refiners outside of the Chicago region help to keep the market supplied and could provide adequate gasoline to prevent long-run price increases. Between 2000 and 2007, the HHI for the Gulf Coast region, which includes refineries in Texas, Louisiana, and Alabama, went from 761 to 938, an increase of 177 points. This region remained unconcentrated throughout our study period and has, by far, the greatest number of refineries. As a result, the Gulf Coast region generally produces more gasoline than it uses, and about two-thirds of its output is shipped outside of the region, mostly to the Midwest and East Coast. Between 2000 and 2007, the HHI for the Mid-continent region went from 1,029 to 882, a decrease of 147 points. This region became unconcentrated during our study period. However, some experts mentioned that some Mid-continent refineries in states such as Montana, Utah, and Wyoming primarily supply only their local areas, making those areas subject to potentially more concentrated local market conditions than the unconcentrated regional Mid-continent figures would suggest. In general, the West Coast of the United States was moderately concentrated. Between 2000 and 2007, the Pacific Northwest region was moderately concentrated, although the HHI increased 293 points, from 1,146 to 1,439. In addition, the HHI for the San Francisco region remained in “nearly” highly concentrated territory over the entire span of our study. 
The HHI for the Los Angeles region went from 1,460 to 1,285, keeping it firmly in the moderately concentrated range between 2000 and 2007. As is the case with the New York Harbor region, West Coast regions have some access to imported gasoline, and gasoline can also move between West Coast regions. This helps to mitigate potential issues of high concentration, according to experts with whom we spoke. Imports to California markets, however, are limited by the state’s unique gasoline specifications, and many refineries outside of the state are not able to produce gasoline that meets California requirements. In our analysis of downstream wholesale gasoline suppliers, we found that most states had moderately concentrated markets for wholesale gasoline supply between 2000 and 2007. However, markets for wholesale gasoline marketing may not correspond to states; therefore, in some cases, the relevant geographic market would be either larger or smaller than state boundaries, according to some petroleum industry experts with whom we spoke. Fewer states were unconcentrated or highly concentrated, and this overall trend was fairly stable over time (see fig. 10). In addition, we found that eight states were highly concentrated in 2007: Alaska, Hawaii, Indiana, Kentucky, Michigan, North Dakota, Ohio, and Pennsylvania (see fig. 11), although we were not able to link concentration levels to gasoline prices. To calculate these market concentrations for wholesale gasoline supply, we used EIA data that contained the gasoline volumes sold in every state, by wholesale supplier; EIA only collects these data by state. We were not able to calculate market concentration in the midstream segment of the petroleum industry, which transports crude oil and refined products throughout the United States, because of a lack of comprehensive data on pipeline and barge ownership and associated transportation markets. 
In addition, many petroleum product pipelines are considered “common carriers”; therefore, they are subject to FERC rates if they cross state boundaries and state-mandated rates if they remain within state boundaries, which FERC officials told us limits the ability of pipeline-owning firms to increase prices anticompetitively. However, in some cases, pipeline firms can apply for “market-based” rates, although they have to demonstrate to FERC that they ship fuel between locations where there are ample shipping alternatives. This is rarely the case, and, according to FERC officials, few pipeline firms charge market-based rates as a result. However, despite the lack of data, experts raised some important considerations regarding competition in the midstream segment. For example, petroleum marketers told us that in some instances, pipeline firms also own the terminals that connect to their pipelines and have the ability to set their own prices for fuel storage or other terminal-related services, potentially leaving shippers with few alternatives but to pay. In addition, according to some oil industry experts with whom we spoke, some pipeline companies are master limited partnerships (publicly traded limited partnerships not subject to corporate income tax) that may have little interest in the long-term viability of their business and may defer maintenance and limit increases in pipeline capacity to maximize profits in the short term. We noted in a 2007 report on energy markets that, in some states, such as Arizona, California, Colorado, and Nevada, pipeline capacity was insufficient to meet increases in demand, creating conditions of higher prices and price volatility. Like refining, midstream infrastructure often has very high barriers to entry, thereby making it difficult for new competitors to enter the market. 
For example, it is difficult to get regulatory permits to build or expand pipelines, and the costs can run $1 million or more per mile, according to pipeline companies and other industry experts. FTC’s primary means of maintaining competition in the petroleum industry is reviewing proposed mergers, although the agency, along with other federal and state agencies, also has a role in monitoring petroleum industry markets. FTC reviews proposed mergers to predict their effects on competition but generally does not look back to evaluate the actual effects after a merger has been completed, even though experts and FTC agree that postmerger reviews would allow the agency to better inform future merger reviews and better measure its success in maintaining competition. In addition, the agency also conducts other activities to monitor petroleum product markets, such as monitoring wholesale gasoline prices for evidence of unusual price spikes. Other federal and state agencies also have roles in monitoring petroleum industry markets. In reviewing proposed mergers, FTC follows guidelines that it developed jointly with DOJ for predicting the effects of mergers, including petroleum industry mergers, on competition. The unifying theme in the guidelines is that mergers should not be permitted to enhance a firm’s market power or to make it easier for a firm to exercise market power. The guidelines describe the analytical process that FTC will use in determining whether to challenge a merger, and they outline five broad areas for FTC to consider: (1) defining markets and analyzing concentration, (2) predicting potential adverse effects on competition, (3) evaluating barriers to new market entrants, (4) evaluating potential gains in efficiency, and (5) giving consideration to potentially failing firms. We discuss these five areas in the following text: Defining markets and analyzing concentration: FTC initially defines merging companies’ markets and analyzes their market concentration. 
To do this, FTC first reviews merging firms’ products; identifies any similar products they sell; and identifies the geographic markets in which the firms operate, which it defines as the area in which a company could monopolize the market and impose a small price increase without competing firms bringing prices back down by adding supply to the market. FTC then determines the industry market share––the percentage of products that companies supply to one geographic market area––and calculates an index of market concentration, HHI, where firms with larger market shares are weighted more heavily. If the proposed merger were to substantially raise HHI, there would be a greater likelihood that one firm, or a small group of firms, could exercise market power and increase consumer prices above competitive levels. This situation may trigger FTC to request more information from the merging firms to look more closely at several factors affecting market competition (see table 5). Predicting potential adverse effects on competition: FTC’s second step is to predict the nature of adverse effects of a merger on competition in the petroleum industry. To do this, FTC examines whether market conditions would be conducive for firms to coordinate or to act unilaterally to raise prices. The analysis of competitive harm at the retail level might involve looking for the presence of firms with different business models than their rivals, which would indicate less likelihood for coordination. For example, FTC noted that the presence of “big-box” retailers that sell discount gasoline and groceries, such as Costco or Wal-Mart, generally boost competition because they tend to sell large volumes of fuel at lower prices than traditional service stations. FTC might allow a merger to take place in a retail market with a large number of such retailers that it would otherwise challenge in a different market. 
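The concentration screening in the first step can be illustrated with a small numerical example. When two firms merge, the HHI rises by exactly twice the product of their market shares; the shares below are hypothetical round numbers, not data from any actual merger.

```python
def hhi(shares_percent):
    # Herfindahl-Hirschman Index: sum of squared market shares (in percent).
    return sum(s ** 2 for s in shares_percent)

# Hypothetical six-firm market (shares in percent); the 12- and 10-percent
# firms propose to merge, so their combined share becomes 22 percent.
pre_merger  = [30, 25, 15, 12, 10, 8]
post_merger = [30, 25, 15, 22, 8]

delta = hhi(post_merger) - hhi(pre_merger)
# The increase always equals twice the product of the merging firms'
# shares: 2 * 12 * 10 = 240 points.
print(hhi(pre_merger), hhi(post_merger), delta)  # 2058 2298 240
```

A 240-point increase in a market already near or above the 1,800 threshold is the kind of change that, under the guidelines, would likely prompt a closer look at the market conditions, such as a request for more information from the merging firms.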
Evaluating barriers to entry for new market entrants: FTC’s third step is to evaluate the barriers to market entry for potential new competitors. When FTC determines that new firms would be unlikely to enter a market in a relatively short time, it considers the market to be less competitive and is therefore less likely to approve a merger. FTC staff told us the petroleum industry is generally hard to enter because of the high capital costs; for example, building a new refinery could take 6 years and cost $10 billion, according to estimates for one proposed new facility. In general, FTC staff told us that because of factors like the high barriers to entry in the petroleum industry, they challenge mergers at lower levels of concentration than they do in other industries. As a result, FTC staff said that they scrutinize the petroleum industry more closely than other industries, while still using the same merger review guidelines. Evaluating potential gains in efficiency: FTC’s fourth step is to evaluate any claims from the merging parties that the merger would improve efficiency in the petroleum industry. For example, some mergers have the potential to make the merged firms more efficient in their daily operations by allowing them to achieve economies of scale, and this may result in lower prices for consumers. If FTC determines that a merger could result in substantial efficiency gains, it may allow a merger that would otherwise potentially harm consumers. However, the guidelines acknowledge that these efficiency gains may not be realized in the way that merging firms claim. Considering potentially failing firms: FTC’s fifth step is to evaluate whether the merger will result in a firm remaining in the market that would have otherwise gone out of business. FTC would be less likely to challenge such a merger if it would allow a firm to remain a viable market participant, according to FTC staff with whom we spoke. 
To determine the extent of the competitive factors that we have previously discussed, FTC staff told us they work closely with petroleum industry participants, often review thousands of pages of evidence, and work with antitrust officials in the states affected by the merger. The merger review process can conclude in under 30 days if the agency does not request additional information from the merging parties; however, the process can last 12 months or more if extensive analysis is needed and the agency issues a second request for more information, according to FTC staff. After analyzing the factors in the guidelines, FTC has three options: (1) allow the merger; (2) challenge the merger in court; or (3) allow the merger with certain remedial actions, such as requiring firms to sell off, or divest, overlapping assets that have the greatest potential to harm competition. For example, in the petroleum industry, this might mean requiring one of the merging firms to sell a product terminal in an area where the merging partner owns one. According to FTC data, between 2000 and 2007, 360 proposed mergers in the petroleum industry were required to be filed with the agency. After reviewing these proposed mergers, FTC opened investigations into 64 of them and issued second requests in 24 of those cases. FTC allowed 9 mergers to proceed with remedial actions, while the threat of agency challenges led to the abandonment of 5 others. FTC allowed the rest to proceed without modification. To make these decisions, FTC performed prospective merger reviews to predict the effects of the mergers before they were completed. However, we found that after reviewing proposed mergers, FTC does not regularly look back at past decisions to determine the actual effects of a merger on competition or prices. In 2004, we reported that FTC had released its first retrospective review of any kind for approved mergers in the petroleum industry. 
FTC has since released two additional retrospective reviews of petroleum industry mergers. The first one, in 2004, was of a 1998 joint venture between Marathon Oil Company and Ashland Incorporated; the second one, in 2005, was of a 1999 acquisition of Ultramar Diamond Shamrock Corporation by Marathon Ashland Petroleum; and the third one, in 2007, was of the 1997 acquisition of Thrifty Oil Company by ARCO. According to its published reports on these studies, FTC chose to review these mergers because evidence suggested there was a chance that they might have led to higher gasoline prices in areas affected by the mergers. None of the studies found that the mergers had any adverse effects on gasoline prices, although FTC indicated that the studies provided important lessons that would inform its future merger review work. A number of petroleum industry experts, industry participants, and FTC all view retrospective merger reviews as a potentially valuable part of FTC’s efforts to maintain competition in the petroleum industry. An FTC commissioner, who is now the FTC Chairman, noted in a 2006 article that without retrospective reviews, it is rarely possible to determine whether the assumptions and hypotheses that motivated a merger review decision were sound. Some experts also noted that examining mergers retrospectively can provide valuable insights that FTC can apply during subsequent merger reviews. Specifically, retrospective reviews bring to light any effects that do not occur as predicted. For example, a study that FTC published in 1999 looked back at a number of cases where it had required divestitures in a variety of industries and found that only three-quarters of divestitures succeeded to some degree, which would leave fewer competitors than predicted and potentially harm competition. In addition, as noted in FTC and DOJ’s Merger Guidelines, efficiency gains that could mitigate the harmful effects of a merger may not always be realized. 
Retrospective reviews would allow FTC to identify such situations, and this could help inform the agency’s future merger reviews. FTC staff told us that if they find anticompetitive behavior in retrospective reviews, they have the ability to pursue corrective action to reintroduce competition into the market. For example, FTC has the power to pursue actions, such as forced divestitures or conduct-based remedies, to bring competition back into the marketplace. In fact, FTC has identified anticompetitive behavior in retrospective merger reviews it conducted in other industries and has taken corrective actions. In 2005, FTC, using results from a retrospective review of a hospital merger in suburban Chicago, found that the merged hospital used market power to set prices in an anticompetitive manner. Using these findings, FTC filed suit, and the courts issued numerous cease-and-desist orders to the hospitals, which brought price competition back into the healthcare market, according to FTC staff. In addition, some experts with whom we spoke said that retrospective merger reviews would allow FTC to better measure the success of its merger review program. The Government Performance and Results Act of 1993 (GPRA) emphasizes that agencies need to establish and measure performance toward results-oriented goals, which in FTC’s case means that the agency should not measure success by how many mergers it reviews, but rather by whether merger reviews achieved the goal of maintaining competition. Currently, FTC’s key measure of its merger review performance is the number and value of potentially anticompetitive mergers that it successfully challenged. However, this measure does not capture mergers that the agency did not challenge after predicting they would be harmless but that ended up being harmful. 
In addition, in cases where mergers proceed with remedial actions, FTC’s key performance measure indicates a successful outcome, even though remedial actions, such as divestitures, may not always succeed. Using retrospective merger reviews to examine the actual effects of completed mergers on competition would better show whether the program achieved the goal of maintaining competition. However, FTC does not have, and does not plan to develop, formal guidelines or criteria on how often retrospective reviews should occur or how to conduct them; instead, the agency relies on an informal approach. For example, staff reported that, for the past two retrospective reviews, they chose to review completed mergers that FTC had subjected to careful antitrust investigation but did not challenge; otherwise, there are no defined guidelines. In the absence of regular retrospective reviews, FTC may not be able to regularly apply lessons learned from past merger decisions to future reviews, assess the performance of its merger review program, or take remedial actions in instances where completed mergers ended up harming competition. FTC staff cited a lack of time and resources as the primary challenge to its ability to conduct retrospective reviews. Specifically, staff reported that it was difficult to devote the time and staff resources required to conduct these types of reviews, and stated that retrospective reviews of mergers in the petroleum industry are important, yet a lower priority compared with other mission-central activities, such as premerger reviews. In addition, according to economists with whom we spoke, developing the statistical models needed to conduct retrospective reviews is complex and time-consuming. They indicated that there are numerous factors affecting the price of gasoline that must be controlled for in order to attribute any changes in price to a particular merger. 
Nonetheless, we have reported in prior work that agencies with limited resources can implement risk-based guidelines to selectively look back at agency decisions. Risk-based guidelines provide criteria for taking action based on the likelihood that agency goals were not met. Such guidelines would allow FTC to selectively use resources to evaluate past merger decisions in circumstances where it deems there is a greater likelihood, and hence risk, that the goal of maintaining competition was not met. In addition to its efforts to maintain competition through merger review, FTC also performs other activities to monitor petroleum markets, including monitoring fuel prices, conducting special investigations, and engaging in consumer protection activities. FTC implemented a price-monitoring program in 2002 for wholesale and retail prices of gasoline in an effort to identify possible anticompetitive activities and determine whether a law enforcement investigation was warranted. The program tracks retail gasoline and diesel prices in 360 cities across the nation and wholesale prices in 20 major urban areas. FTC’s Bureau of Economics staff receives daily data from the Oil Price Information Service (OPIS), receives weekly information from the Department of Energy’s public Gas Price Hotline, and reviews other relevant information that might be reported to FTC directly by the public or other federal or state government entities. FTC uses a statistical model to determine whether current retail and wholesale prices each week are consistent with historical patterns and to alert FTC staff when gasoline prices are outside of expected ranges for a region. Staff can then conduct more in-depth analyses to determine whether there are violations of antitrust laws. 
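FTC has not published the details of its statistical model, but the basic idea of flagging prices that depart from historical patterns can be sketched as follows. This is a simplified standard-deviation screen with hypothetical prices, not the agency's actual methodology, which also accounts for regional and seasonal factors.

```python
import statistics

def flag_anomaly(history, current, threshold=3.0):
    """Flag a price as outside its expected range when it deviates from the
    historical mean by more than `threshold` standard deviations.
    Simplified illustration only; not FTC's actual model."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(current - mean) > threshold * stdev

# Hypothetical weekly retail prices (dollars per gallon) for one city.
history = [2.55, 2.58, 2.61, 2.57, 2.60, 2.62, 2.59, 2.56]
print(flag_anomaly(history, 2.63))  # small move within the usual range: False
print(flag_anomaly(history, 3.10))  # large spike flagged for closer review: True
```

A flag from a screen like this would not itself indicate an antitrust violation; as the report notes, most anomalies trace back to supply events such as refinery or pipeline outages, so flagged prices are simply candidates for more in-depth analysis.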
Since its establishment in 2002, the price-monitoring program has not identified any price anomalies that would violate the antitrust laws; it attributes most price anomalies to refinery or pipeline outages or changes in air quality standards. FTC staff reported that outside economists and FTC staff reviewed the program’s methodology and found it to be effective. FTC staff indicated that they also conduct special investigations of the petroleum industry when warranted. Occasionally, such investigations are requested by Congress. For example, in 2006, the agency published a congressionally mandated report entitled Investigation of Gasoline Price Manipulation and Post Katrina Gasoline Price Increase that evaluated price anomalies after Hurricanes Katrina and Rita. This investigation did not find evidence of anticompetitive behavior in any of the industry segments during or after the disruptions. The agency also completed an investigation into gasoline and diesel prices in the Pacific Northwest in 2006 and 2007 that found prices appeared to be consistent with ordinary market conditions. In addition to these special investigations, the agency also publishes various reports on the petroleum industry that are mainly agency-driven. For example, in 2004, FTC published a report on mergers and its antitrust enforcement activities in the petroleum industry. Furthermore, the Commission’s Bureau of Consumer Protection has brought actions to protect consumers from false or unsubstantiated advertising claims regarding the effectiveness or energy savings of fuels or automotive products. In addition, on August 13, 2008, FTC issued a proposed rule that would make it unlawful for any person to engage in fraudulent or deceptive acts in connection with the purchase or sale of crude oil, gasoline, or petroleum distillates to manipulate wholesale petroleum markets. 
Therefore, fraudulent or deceptive acts—including false reporting to private reporting services or misleading announcements by refineries, pipelines, or investment banks—may be covered by the proposed rule. However, it is not yet clear how this rule will affect FTC’s enforcement or monitoring in petroleum industry markets. Besides FTC, other federal agencies have a role in monitoring petroleum industry markets. Table 6 provides general examples of three federal agencies’ responsibilities regarding petroleum markets. Some states are also involved in monitoring petroleum markets that affect their constituents. FERC has a role in monitoring and regulating petroleum industry markets at the midstream level—where crude oil and petroleum products are transported—by ensuring that all parties have access to common-carrier pipelines. While FERC does not proactively monitor pipeline markets, it regulates open access to pipelines by determining and enforcing tariffs—that is, the rates charged and the terms under which shippers send their products through the pipelines and the rules governing pipeline access. According to FERC officials, pipeline companies establish their initial rates either (1) by filing an application with FERC requesting a rate based on the total cost of service for the pipeline or (2) by proving to FERC that shippers have agreed to pay another proposed rate. As we have previously discussed, FERC also allows some pipelines to charge market-based rates in regions where it deems there is adequate competition. FTC still has the authority to enforce antitrust legislation and review mergers to maintain competition in this segment of the industry. In some instances, FERC can also intervene to prevent potentially anticompetitive behavior. For example, FERC officials cited an instance where a pipeline company denied access to a crude oil producer who wanted to ship high-sulfur crude oil out of the Gulf Coast. 
The pipeline company said that it did not want to have high sulfur crude contaminating its pipeline, although the shipper alleged that the pipeline company was acting in collusion with a rival crude oil producer by restricting access to the pipeline. After receiving the complaint, FERC officials worked with the parties to resolve the matter. The Commodity Futures Trading Commission (CFTC) monitors futures markets to ensure competitiveness and efficiency, and protects market participants against fraud, manipulation, and abusive trading practices. Participants in futures markets, such as the New York Mercantile Exchange, often use futures contracts, which contribute to the smooth functioning of petroleum product markets throughout the United States. Buyers and sellers in the futures markets primarily enter into futures contracts to lock in prices on volatile goods or to speculate rather than to exchange physical goods, which is the primary activity of the spot markets. CFTC has several divisions that monitor and enforce competition in the futures markets. The Division of Enforcement investigates and prosecutes alleged violations of the Commodity Exchange Act and Commission regulations. One example of market manipulation in the crude oil markets occurred in 2003, when one company attempted to manipulate the spot market price of West Texas Intermediate crude oil. The case was brought by CFTC and settled in 2007 for a $1 million civil penalty. In addition, CFTC has created advisory committees to provide input and make recommendations to the Commission on a variety of regulatory and market issues that affect the integrity and competitiveness of U.S. markets. 
These committees include an Energy Markets Advisory Committee that was created in 2008 to advise CFTC on important new developments in energy markets that may raise new regulatory issues, and on the appropriate regulatory response to protect market competition, increase efficiency, and create opportunities in the futures markets. The Department of Energy’s EIA also has a role in analyzing and monitoring petroleum industry markets. Specifically, EIA collects, analyzes, and forecasts data on the supply, demand, and prices of crude oil and petroleum products, including inventory levels, refining capacity and utilization rates, and product movements into and within the United States. EIA’s reports are prepared independently of Administration policy, and EIA does not provide conclusions or recommendations in its analyses. FTC relies on EIA’s comprehensive and independent data, and several state agencies with whom we spoke use these data to review mergers, conduct market concentration analysis, and analyze wholesale and retail gasoline markets. For example, FTC uses EIA data to support enforcement cases; in 2007, it pursued 33 such cases, the highest number in the last 5 years. In addition to the agencies that monitor petroleum industry markets, the Environmental Protection Agency (EPA) also has a role in helping to maintain the flow of petroleum products during emergency supply crises by providing waivers for refineries to allow them to sell products that would not normally meet environmental standards. For example, after supply disruptions resulting from Hurricanes Katrina and Rita, EPA indicated that it met with local market participants and, following review of the market circumstances, granted waivers on environmental quality specifications. According to EPA, this ensured there were no regulatory obstacles to providing an adequate supply of gasoline and diesel to the affected regions. 
Most states do not proactively monitor petroleum industry markets, although the level of monitoring varies from state to state, according to the National Association of Attorneys General (NAAG). Some states do not monitor fuel prices or other aspects of the petroleum industry at all, while other states actively monitor market structure or fuel prices on a continual basis. Financial, political, and other factors may influence whether and how actively states monitor petroleum industry markets. State agency officials with whom we spoke described a number of steps they can take in monitoring their petroleum industry markets. First, some states collect and analyze data on the industry—especially at the gasoline wholesale and retail levels. For example, after Hurricane Katrina, according to the Pennsylvania Office of Attorney General, the state decided to monitor retail gasoline prices during that period of reduced gasoline supply. The state ultimately brought charges against retailers that were allegedly setting unfair prices. States may also enact legislation to make it mandatory for companies to provide data on wholesale gasoline sales. For example, Maine implemented a statute called the Petroleum Market Share Act, which requires petroleum wholesalers and refiners to provide annual reports to the attorney general, who uses this information to calculate market concentration for fuel suppliers, ensuring that the state has historical data to proactively track market concentrations. Second, states may enact legislation to prosecute unfair practices that lead to very high prices, that is, “price gouging.” According to a study by the Congressional Research Service (CRS), at least 28 states, the District of Columbia, and 2 U.S. territories have some form of price gouging legislation, although several states we spoke with said it was generally difficult to prove that unfair pricing had occurred. 
Currently, there is no federal price gouging law, although several bills addressing the issue have been introduced in the 110th Congress. Third, most of the states we contacted also develop gasoline pricing reports to inform the public of changes in the petroleum industry. For example, the state of Washington published a comprehensive interagency report in 2008 that addressed how gasoline prices have increased over the years and used a comparative analysis to identify the different components contributing to the rising prices. Finally, several states often collaborate with FTC on merger reviews because they have local knowledge of the companies and provide expertise that federal agencies may lack. For example, the California Attorney General has worked cooperatively with FTC to review a number of mergers, including large mergers such as the Exxon Mobil merger, and provided legal and technical expertise on the California market, such as knowledge of the intricacies of California pipelines. Overall, states are interested in improving their monitoring of petroleum industry markets in their areas, according to NAAG. Because there are few substitutes for transportation fuels such as gasoline, consumers have little choice but to pay higher prices when prices rise. As a result, consumers want assurance that the prices they pay are determined in a competitive and fair marketplace. FTC plays a key role in maintaining petroleum industry competition and in assuring the public that mergers have not led to unfair price increases. Maintaining competition in the petroleum industry requires FTC to fully understand the effects of its merger decisions on competition and fuel prices. While FTC considers the potential effects of mergers during its proposed merger review, the agency does not routinely look back to determine whether the actual effects of the merger reflect what the agency predicted. 
It is possible that the actual effects of a completed merger could differ from those the agency predicted and not become apparent until much later. Without more regular retrospective reviews, the agency does not know whether a completed merger contributed to fuel price increases or decreases or whether the merger improved or harmed competition. In addition, FTC cannot apply lessons learned to future merger reviews and is unable to effectively monitor its own performance in delivering the intended result of “maintained” competition. We believe, along with the experts with whom we spoke, including those at FTC, that regular retrospective analyses would help the agency better understand the actual impacts of mergers. While not all completed mergers would likely warrant retrospective reviews, an approach that uses risk-based guidelines would allow the agency to selectively review key mergers with the goal of maintaining competition in the petroleum industry. To enhance FTC’s effectiveness in maintaining competition in the U.S. petroleum industry, and to make efficient use of FTC’s resources, we recommend that the FTC Chairman lead efforts to (1) conduct more regular retrospective analyses of past petroleum industry mergers and (2) develop risk-based guidelines to determine when to conduct them. We provided a copy of our draft report to FTC for its review and comment. FTC’s Chairman provided written comments, which are reproduced in appendix III, along with our responses. In general, the Chairman commented that the recommendations in this report were consistent with the goals outlined in FTC’s current self-evaluation initiative, and that FTC would consider our recommendations to conduct more regular retrospective analyses of petroleum industry mergers using a risk-based approach along with other recommendations resulting from this initiative. 
The Chairman also noted that analyzing market concentration is just the starting point in FTC’s antitrust analysis, and emphasized that each merger involves a unique set of facts and other competitive factors that the agency considers. He also noted the difficulties in delineating geographic antitrust markets, and we responded to each of these concerns in appendix III. We clarified other material in this report in response to technical comments by the Chairman as appropriate. We are sending copies of this report to interested congressional committees and the FTC Chairman. Copies of this report will be made available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact us at (202) 512-3841, [email protected], or (202) 512-2642, [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of this report were to examine (1) mergers in the U.S. petroleum industry and changes in market concentration since 2000 and (2) the steps that the Federal Trade Commission (FTC) uses to maintain competition in the U.S. petroleum industry, and the roles other federal and state agencies play in monitoring petroleum industry markets. To examine U.S. petroleum industry mergers since 2000, we primarily used merger data that we purchased from John S. Herold, Inc. (J.S. Herold), an independent research and consulting firm that collects data and conducts analyses for the energy sector. J.S. Herold collects information on all publicly announced mergers in the petroleum industry and records key financial and operational data about these mergers in a large database. 
Prior to purchasing information from this database, we assessed the reliability of these data and found them sufficiently reliable for the purposes of this report. The data purchased from J.S. Herold included extensive information on all petroleum industry mergers from 1991 through 2007, including but not limited to, company names, locations, merger values, and key assets involved in the mergers. The J.S. Herold data were limited to mergers that exceeded $10 million in value, and we limited our review to mergers that were principally located in the United States or that we had reason to believe involved U.S. locations. In addition, we excluded mergers whose main asset was natural gas or a natural gas product as well as mergers that occurred before 2000. For the remaining data, we conducted a variety of analyses to better understand merger activity. These analyses included, but were not limited to, evaluating the number, type, and transaction value of mergers over time as well as evaluating the distribution of mergers across industry segments and subsegments. To better understand and contextualize the results of our analysis of the J.S. Herold data, we also reviewed industry journal articles and conducted interviews with industry officials and experts to better interpret merger activity over time. To examine the rationale for petroleum industry mergers since 2000, we conducted interviews with various representatives from all three segments of the petroleum industry. For information on the upstream segment, we interviewed representatives from large, vertically integrated oil companies as well as a smaller, independent exploration and production company. For information on the midstream segment, we primarily relied on industry publications because midstream operators—including pipeline and tanker operators—were less available for interviews or comment. For information on the downstream segment, we interviewed representatives from vertically integrated companies. 
In addition, we interviewed a number of other firms operating in the downstream segment, including refiners, marketers, and retailers of petroleum. To better contextualize the information provided in these interviews, we also conducted a literature search of articles that addressed rationales for petroleum industry mergers from 2000 through 2007. Lastly, we interviewed a number of experts—including academics specializing in the petroleum industry or antitrust matters as well as industry representatives—for additional context and information on recent merger activity. To calculate market concentration, as measured by the Herfindahl-Hirschman Index (HHI), at the upstream level, we purchased data from the Oil and Gas Journal containing crude oil production information for the 100 largest international companies between 2000 and 2006. These data included state-owned oil companies, such as those in Iran and Saudi Arabia. After conducting data reliability assessments, such as looking for out-of-range and missing values, we found these data to be sufficiently reliable for our use in calculating upstream HHI. We used a single global market to calculate HHI in this segment because, according to experts with whom we spoke, crude oil prices are set on world markets. Because of the lack of readily accessible data on the midstream petroleum industry, which simultaneously includes the pipeline, barge, and trucking industries, we were not able to calculate HHI in this segment. To calculate HHI for the refining segment, we defined geographic markets (see app. II for more details on how we defined geographic markets), and then estimated the gasoline production capacity of United States refineries by using annual data from EIA that contained capacity information for refineries in the United States and the Caribbean. After conducting data reliability assessments, such as looking for out-of-range and missing values, we found these data to be sufficiently reliable for our use in calculating HHI. 
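The HHI calculations described above can be sketched in a few lines: the index is the sum of squared percentage market shares, so it ranges from near 0 (many tiny firms) to 10,000 (a single-firm monopoly). The refiner names and capacity figures below are hypothetical, and the concentration bands are those of the merger guidelines in effect during this period (below 1,000 unconcentrated; 1,000 to 1,800 moderately concentrated; above 1,800 highly concentrated).

```python
def hhi(values):
    """Herfindahl-Hirschman Index: sum of squared percentage market shares.

    Accepts raw quantities (e.g., production volumes or gasoline capacity)
    and normalizes them to percentage shares of the market total.
    """
    total = sum(values)
    return sum((100.0 * v / total) ** 2 for v in values)

def classify(index):
    """Concentration bands from the 1992 Horizontal Merger Guidelines."""
    if index < 1000:
        return "unconcentrated"
    if index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical gasoline production capacities (thousand barrels per day)
# for refiners assigned to a single spot-market region.
capacities = {"Refiner A": 300, "Refiner B": 250, "Refiner C": 200,
              "Refiner D": 150, "Refiner E": 100}

index = hhi(capacities.values())
print(round(index), classify(index))  # -> 2250 highly concentrated
```

An equal split among n firms yields 10,000/n, so the five firms above would score 2,000 if their shares were equal; the skew toward the larger refiners raises the index to 2,250.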
In general, if a refinery did not have one of these three units, we did not assign it any gasoline production capacity. Refineries without one of these units most likely do not produce any gasoline (i.e., asphalt refineries). However, there are some older refineries that use hydrocrackers to produce gasoline, rather than the more common FCC. We worked with EIA to identify these and then used data from company publications to account for the gasoline production capacity of these refineries. We did not include capacities from the isomerization units or cokers because, according to our discussion with EIA officials, these units feed into reformers and FCCs, which we capture in our approach. To calculate HHI at the wholesale level, we relied on EIA data on volumes sold in every state by wholesale supplier. After discussions with EIA officials, we found these data to be sufficiently reliable for EIA to calculate HHI for us. To examine FTC’s processes for ensuring competition in the petroleum industry, we interviewed FTC staff on several occasions regarding their merger review procedures. In addition, we asked FTC staff a series of questions in writing, and they provided us with detailed written responses. We also analyzed a number of official agency documents. Finally, we interviewed experts in the fields of antitrust and industrial organization, and petroleum industry officials who provided us with comments on FTC’s merger review procedures. To identify federal and state agencies’ roles in monitoring petroleum industry markets, we conducted interviews and reviewed studies and reports from several federal and state agencies. We chose certain federal agencies to be studied on the basis of their regulatory involvement with the various segments of the petroleum industry. 
We contacted the Federal Energy Regulatory Commission, the Commodity Futures Trading Commission, the Environmental Protection Agency, the Department of Transportation, the Department of Energy, EIA, and the Federal Maritime Commission because of their potential involvement in monitoring petroleum industry markets. We reviewed the federal agency Web sites, press releases, and reports published by these agencies before the interviews to understand their role in monitoring the petroleum industry markets and whether they would be good resources for further exploration. We conducted interviews with several officials from the aforementioned federal agencies. The questions were tailored to effectively obtain the information necessary to understand their involvement in monitoring. To identify state agencies’ role in monitoring petroleum industry markets, we conducted interviews and reviewed studies and reports from several state attorneys general and energy-specific agencies. We chose the states to be studied on the basis of whether we thought they (1) had significant crude oil extraction and production; (2) had numerous refineries; (3) had isolated markets; (4) had coastal port terminals; and (5) were, according to expert opinion, progressive or proactive, or both, in monitoring competition in the segment of the petroleum industry active in their state. We also wanted to make sure that we had adequate geographic coverage of the country. The selected state attorneys general were from Alaska, California, Connecticut, Louisiana, Maine, New York, Pennsylvania, Texas, and the state of Washington. During the interviews with the selected states, we used snowball sampling, asking our interviewees whether they knew of other states that had proactive market monitoring. We also asked whether their state had an energy commission or another authority to monitor the petroleum industry. 
We also interviewed an official with the National Association of Attorneys General (NAAG), who catalogues information on individual state roles in monitoring petroleum industry competition. Before each interview, we reviewed the state agency Web sites, press releases, and reports published by the agencies and developed semistructured questions that addressed monitoring petroleum industry markets. We conducted this performance audit from March 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To define geographic refinery regions for the purposes of calculating HHI, we collaborated with staff from the Oil Price Information Service (OPIS), EIA, and FTC who had expertise on petroleum product markets, and who helped us to assign individual refineries to market regions. Our methodology used “spot markets” as the basis for defining geographic refinery regions. Spot markets reflect the historical grouping of U.S. refineries into seven refining centers. Energy traders consider gasoline available for delivery at these refining spot markets in order to price gasoline that is bought and sold at the wholesale level, and gasoline production in these refining groups drives prices on the spot markets. The seven spot markets in the United States, which we used as our refinery regions, are in Los Angeles, San Francisco, the Gulf Coast, New York Harbor, Chicago, Tulsa (or Mid-continent), and the Pacific Northwest. Most refineries in each region are able to supply gasoline to the larger geographic region that surrounds them, which also includes areas to which they are linked via pipeline. 
In addition, gasoline can flow between regions, although, according to experts with whom we spoke, under normal market conditions (i.e., absent a supply disruption, such as a hurricane) refiners are usually unable to ship gasoline to other regions on short notice to discipline the market for the following reasons:
- Gasoline specifications in one region may not be suitable for other regions.
- Many refiners operate at near maximum capacity and may not have the ability to increase production to meet additional demand in other markets.
- Transportation options between regions may be nonexistent or too costly to ensure a profit with only small price differentials between gasoline in each region.
- When pipelines are present, it may not be feasible to use them because of the following:
  - The refinery may not have a link to the pipeline.
  - The refinery may not have the rights to an adequate allocation of pipeline space.
  - It may take too long for gasoline shipped via pipeline to arrive in another region, and experts with whom we spoke said that the industry is reluctant to respond to what it often perceives as temporary price increases.
Despite these factors that support using spot markets as the basis for geographic refinery regions, there are still limitations. For example, according to experts, there are certain areas of the country where isolated refiners cannot send gasoline to any of the seven spot market centers and end up selling it locally, which suggests that there are, in addition to the seven spot markets, other smaller local refinery markets. Experts from EIA, FTC, and OPIS mentioned that refineries in states like Alaska and Hawaii primarily supply their local regions, making them more subject to local market conditions rather than larger regional factors. As a result, we removed these refineries from our calculations. In a number of cases, we counted a refinery in two spot markets. 
For example, according to the experts with whom we spoke, refineries in Bakersfield, California, can supply either the San Francisco region or the Los Angeles region. However, according to one OPIS official, gasoline suppliers still tend to predict their future fuel costs based on prices in one of the seven regional spot markets that we described, even though the fuel they buy may come only from a local refinery. In addition, FTC staff indicated that for merger review purposes, they would define more specific geographic markets, often using private company data, although they indicated that the markets we defined here are still useful for looking at U.S. refinery market concentration more broadly. The following are GAO’s comments on the Federal Trade Commission’s letter dated September 17, 2008. 1. The FTC Chairman commented that HHI concentration numbers are just the starting point for merger antitrust analysis and noted that FTC considers other competitive factors when examining a merger. We agree with these points and note that we calculated petroleum industry market HHIs to shed light on the general level of concentration in the petroleum industry, not to conduct an antitrust analysis regarding specific mergers or market regions or to provide a guide for conducting antitrust assessments. Such analysis would have involved looking at other competitive factors as noted in the Chairman’s letter, such as barriers to entry or examining mergers between firms that operate in different, but related, segments of the industry, which was beyond the scope of our work. Nonetheless, we believe that concentration analysis for broader regions, which may not exactly correspond to antitrust markets, is useful for assessing regional concentration in the same way that national-level indicators of unemployment or Gross Domestic Product growth are useful in examining the economic health of the country. 
Our report, therefore, indicates that there are regions that may have more or less potential for firms to exercise market power, and we did not draw further conclusions about the impact of market concentration on competition in any given region. In addition, FTC conducted concentration analysis with similar market definitions for such purposes in its 2004 report on competition in the petroleum industry. We made no changes to the report for this comment. 2. The FTC Chairman commented that each merger involves analyzing a unique set of facts, such as examining barriers to entry or efficiency gains. We agree that each merger inevitably involves a unique set of circumstances and correspondingly unique considerations. We added language to the report to clarify this point. 3. The FTC Chairman commented on the difficulties of delineating geographic antitrust markets and noted, in this regard, that we did not include a large number of suppliers that could affect the New York Harbor refining market. We recognize the difficulty of delineating markets and understand that the use of spot markets for evaluating market concentration in the refining subsegment includes a number of limitations, most notably that spot market regions do not necessarily correspond to geographic regions that could be used as antitrust markets. On the basis of our consultations with experts at OPIS, EIA, and the Chairman’s own experts at FTC, and for the reasons highlighted in appendix II, we decided that spot market HHIs were appropriate for analysis of the general state of concentration in the refining industry. We also recognize that there are other factors, in addition to market concentration, that are important in evaluating the competitive conditions in a given market. 
In our reporting of spot market concentrations, we presented other factors that were unique to each spot market, including, for example, in the New York market, the sizable shipments of gasoline into this market from foreign and Gulf Coast refineries. Since the draft report already noted these limitations, which were raised by the FTC Chairman, we made no change for this comment. 4. The announcement of FTC’s self-evaluation initiative, The FTC at 100: Into Our Second Century, which FTC enclosed with its letter, can be found at: http://www.ftc.gov/speeches/kovacic/080618ftcat100.pdf. In addition to the individuals named above, Godwin Agbara (Assistant Director), Daniel Haas (Assistant Director), John Karikari (Assistant Director), Michael Kendix, Christopher Klisch, Robert Marek, Micah McMillan, Mark Metcalfe, Michelle Munn, Bintou Njie, Alison O’Neill, Frank Rusco, Rebecca Sandulli, Jeremy Sebest, and Barbara Timmerman made important contributions to this report.
During the late 1990s, many petroleum companies merged to stay profitable while crude oil prices were low, and in recent years mergers have continued. Congress and others have concerns about the impact mergers might be having on competition in U.S. petroleum markets. The Federal Trade Commission (FTC) has the authority to maintain competition in the petroleum industry and reviews proposed mergers to determine whether they are likely to diminish competition or increase prices, among other things. GAO was asked to examine (1) mergers in the U.S. petroleum industry and changes in market concentration since 2000 and (2) the steps FTC uses to maintain competition in the U.S. petroleum industry, and the roles other federal and state agencies play in monitoring petroleum industry markets. In conducting this study, GAO worked with petroleum industry experts to delineate regional markets and to develop estimates of refinery gasoline production capacity in order to calculate market concentration. GAO used public and private data as well as interviews for its analyses. More than 1,000 U.S. mergers occurred in the petroleum industry between 2000 and 2007, mostly between firms involved in crude oil exploration and production. According to experts and industry officials, mergers in this segment were generally driven by the challenges associated with producing oil in extreme physical environments, such as deepwater, as well as increasing concerns about competition with national oil companies and access to oil reserves in regions of relative political instability. Industry officials from the segments of the petroleum industry that transport, refine, and sell petroleum products reported that mergers were generally driven by the desire for greater efficiency and cost savings. Despite these gains, mergers have the potential to enhance a firm's ability to exercise "market power," which potentially allows it to raise prices without being undercut by other firms. 
GAO measured market concentration with an index that FTC uses, where market regions with few, large firms are considered to be highly concentrated and have a greater potential for market power. Conversely, market regions with many smaller firms are considered to have low or moderate concentration and generally have less potential for firms to exercise market power. GAO found that market concentration changed little but varied by industry segment and market region. GAO found that market concentration among firms involved in crude oil exploration and production was low and stable between 2000 and 2006, while concentration among refiners was generally moderate across those years. On a state-by-state basis, wholesale gasoline supplier markets in 35 states were moderately concentrated in 2007, and this number was fairly stable from 2000. GAO found that the following 8 states had highly concentrated wholesale gasoline supplier markets in 2007: Alaska, Hawaii, Indiana, Kentucky, Michigan, North Dakota, Ohio, and Pennsylvania. While FTC reviews evidence and considers a number of competitive factors to predict a merger's potential effects on competition in its analyses of proposed mergers, it does not regularly look back at past merger decisions to assess the actual effects of the merger on competition or prices after the merger has been completed. Although these reviews can be resource intensive, experts, industry participants, and FTC agree that regular retrospective reviews would allow the agency to better inform future merger reviews and to better measure its success in maintaining competition. In addition to FTC's efforts in reviewing proposed mergers, other federal agencies, including FTC, and some states also monitor aspects of petroleum industry markets. For example, the Federal Energy Regulatory Commission monitors petroleum product pipeline markets and regulates pipeline rates accordingly.
In part to improve the availability of information on and management of DOD’s acquisition of services, Congress enacted section 2330a of title 10 of the U.S. Code, which required the Secretary of Defense to establish a data collection system to provide management information on each purchase of services by a military department or defense agency. The information to be collected includes, among other things, the total dollar amount of the purchase and the extent of competition provided in making the purchase. In 2008, Congress amended section 2330a to add a requirement for the Secretary of Defense to submit an annual inventory of the activities performed pursuant to contracts for services for or on behalf of DOD during the preceding fiscal year. The inventory is to include a number of specific data elements for each identified activity, including the function and missions performed by the contractor; the contracting organization, the component of DOD administering the contract, and the organization whose requirements are being met through contractor performance of the function; the funding source for the contract by appropriation and operating agency; the fiscal year the activity first appeared on an inventory; the number of full-time contractor employees (or its equivalent) paid for performance of the activity; a determination of whether the contract pursuant to which the activity is performed is a personal services contract; and a summary of the information required by section 2330a(a) of title 10 of the U.S. Code. As implemented by DOD, components are to compile annual inventories of activities performed on their behalf by contractors and submit them to AT&L, which is to formally submit a consolidated DOD inventory to Congress no later than July 31. 
Once compiled, the inventory is to be made public, and within 90 days of the date on which the inventory is submitted to Congress, the secretary of the military department or head of the defense agency responsible for activities in the inventory is to review the contracts and activities for which they are responsible and ensure that any personal services contracts in the inventory were properly entered into and are being performed appropriately; that the activities in the inventory do not include inherently governmental functions; and, to the maximum extent practicable, that activities on the list do not include any functions closely associated with inherently governmental functions. In January 2011, Congress amended section 2330a(c) of title 10 of the U.S. Code to specify that the Under Secretaries of Defense for Personnel and Readiness; Acquisition, Technology and Logistics; and the Office of the Comptroller are responsible for issuing guidance for compiling the inventory. Section 2330a(c) was also amended to state that DOD is to use direct labor hours and associated cost data reported by contractors as the basis for the number of contractor FTEs identified in the inventory, though it provided that DOD may use estimates where such data are not available and cannot reasonably be made available in a timely manner. In addition, the Ike Skelton National Defense Authorization Act for Fiscal Year 2011, Pub. L. No. 111-383, § 321, amended section 2330a to require the secretary of the military department or head of the defense agency to provide for the use of the inventory to aid in the development of its annual personnel authorization requests to Congress and in carrying out personnel policies; ensure that the inventory is used to inform strategic workforce planning; facilitate the use of the inventory for budgetary purposes; and provide for appropriate consideration of the conversion of activities to performance by government employees.
Section 931 of the National Defense Authorization Act for Fiscal Year 2012 also mandated that DOD use the inventories when making determinations regarding the appropriate workforce mix necessary to perform its mission. In addition to the laws and guidance that govern the compilation of the inventory and the inventory review processes, Congress also added section 2463 to title 10 of the U.S. Code, which requires P&R to develop guidelines and procedures to ensure that consideration is given to using DOD civilian employees to perform functions that are currently performed by a contractor—a process generally referred to as in-sourcing—and new functions. In particular, these guidelines and procedures are to provide special consideration for, among other things, in-sourcing functions closely associated with inherently governmental functions that contractors are currently performing, or having DOD civilian employees perform new requirements that may be closely associated with inherently governmental functions. Congress required the Secretary of Defense to make use of the inventories created under section 2330a(c) of title 10 of the U.S. Code for the purpose of identifying functions that should be considered for performance by DOD civilian employees under this provision. DOD issued initial in-sourcing guidance in April 2008 and additional guidance in May 2009 to assist DOD components in implementing this legislative requirement. Further, the National Defense Authorization Act for Fiscal Year 2010 provided for a new section 115b in title 10 of the U.S. Code that requires DOD to annually submit to the defense committees a strategic workforce plan to shape and improve the civilian workforce. Among other requirements, the plan is to include an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. P&R is responsible for developing and implementing the strategic plan in consultation with AT&L.
The act also added section 235 to title 10 of the U.S. Code, which requires the Secretary of Defense to include information in DOD’s annual budget justification materials regarding the procurement of contracted services. Specifically, the legislation requires DOD, for each budget account, to identify clearly and separately (1) the amount requested for the procurement of contract services for each DOD component, installation, or activity and (2) the number of contractor FTEs projected and justified for each DOD component, installation, or activity based on the inventory and associated reviews. DOD’s fiscal year 2013 budget guidance to DOD components requires the budget estimates to be informed by the fiscal year 2010 inventory for contracted services. Collectively, these statutory requirements mandate the use of the inventory and the associated review process to help identify functions for possible conversion from contractor performance to DOD civilian performance, support the development of DOD’s annual strategic workforce plan, and specify the number of contractor FTEs included in its annual budget justification materials. Figure 1 illustrates the relationship among the related statutory requirements. DOD made a number of changes to improve the consistency of the fiscal year 2010 inventory, but it continued to rely primarily on data collected in FPDS-NG for the inventory for all defense components other than the Army and the TRICARE Management Activity. As such, DOD acknowledged that the factors that limited the utility, accuracy, and completeness of using FPDS-NG remained. In November 2011, DOD submitted to Congress a plan that included instructions to the military departments and DOD components to document contractor FTEs and begin the collection of contractor manpower data.
DOD officials noted that developing a common data system to collect and house contractor manpower data would be challenging given the different requirements of the military departments and components. Consequently, DOD does not expect to be able to fully collect contractor-reported direct labor information until fiscal year 2016. Further, DOD has not established milestones or time frames for the development and implementation of the data system, nor has it specified how it will obtain the remaining required data, such as identifying the requiring activity and all functions and missions performed by the contractor, to meet the legislative inventory requirements. DOD’s approach to compiling its fiscal year 2010 inventory was similar to what DOD used for its fiscal year 2009 inventory. AT&L officials noted, however, that they had implemented several changes to improve the fiscal year 2010 inventory’s utility. For example, AT&L centrally prepared and provided each component with a list that reflected the specific categories of services that were to be included in the inventory, providing greater consistency among DOD components; in contrast, DOD components compiled their own contract lists for the fiscal year 2009 inventories. AT&L also increased the detail available on the services provided by using product and service codes at the four-digit level rather than at the broader, one-digit level used in the fiscal year 2009 inventory. For example, in the fiscal year 2009 inventory, contracts for dentistry services were reported under the broader category of medical services, which had an average cost of about $107,000 per FTE. In fiscal year 2010, dentistry services were reported separately from the medical services category with an average cost of $89,000 per FTE. Finally, AT&L updated labor rates to account for changes in service costs. These rates are based on costs, by product and service code, derived using fiscal year 2010 information from the Army’s CMRA.
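The effect of moving from broad to four-digit product and service codes on obligations-based FTE estimates can be sketched as follows. This is a minimal illustration, not DOD's actual estimating code: the rates per FTE are the figures quoted above ($107,000 for the broad medical category, $89,000 for dentistry), while the code labels and obligation amount are invented.

```python
# Hypothetical sketch of an obligations-based FTE estimate: dividing dollars
# obligated by an average cost per FTE for the product and service code.
# Rates are the illustrative figures from the text; codes and obligations
# are invented.

RATES_PER_FTE = {
    "MED_BROAD": 107_000,  # broad "medical services" category rate
    "DENTISTRY": 89_000,   # separate four-digit dentistry rate
}

def estimate_ftes(obligations: float, rate_per_fte: float) -> float:
    """Estimate contractor FTEs by dividing obligations by a cost-per-FTE rate."""
    return obligations / rate_per_fte

# The same $8.9 million in dentistry obligations yields different estimates
# depending on which rate is applied:
broad = estimate_ftes(8_900_000, RATES_PER_FTE["MED_BROAD"])
detailed = estimate_ftes(8_900_000, RATES_PER_FTE["DENTISTRY"])
print(round(broad, 1))    # ~83.2 FTEs under the broad medical rate
print(round(detailed, 1)) # 100.0 FTEs under the four-digit dentistry rate
```

The gap between the two estimates illustrates why AT&L's move to four-digit codes matters: a rate averaged over a broad category can noticeably misstate the headcount behind a given dollar amount.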
In addition, DOD components were allowed to update or revise the contract lists provided by AT&L as appropriate. For example, Air Force officials stated that for the fiscal year 2010 inventory they used their financial system to cross-walk the contractual, financial, and requiring activity information with the information that was provided by AT&L. The updated information, according to Air Force officials, allowed them to include all Air Force-funded service contracts awarded by non-DOD agencies, provided greater fidelity in the inventory, and enabled the Air Force to use the inventory to inform the development of budget justification materials. Further, AT&L, in cooperation with DOD components, aligned the product and service code functions to missions by organizing their spending into six portfolios developed by the Office of Defense Procurement and Acquisition Policy. AT&L officials stated that this alignment was intended to provide better organization and visibility into the services being acquired and the missions they support. AT&L also sought to identify the “requiring activity” down to the major command level by using the “funding office” as a surrogate measure. AT&L officials, however, acknowledged that the requiring activity does not always correspond to the funding office. Collectively, according to AT&L officials, the additional level of detail will provide more accurate costs associated with the services being acquired and could potentially aid in better planning and budgeting for service acquisitions. In compiling DOD’s fiscal year 2010 inventory, however, DOD officials continued to rely primarily on data collected in FPDS-NG for all defense components other than the Army and the TRICARE Management Activity. As such, DOD officials acknowledged that the factors that limited the utility, accuracy, and completeness of using FPDS-NG remained.
These limitations include not being able to identify and record more than one type of service purchased for each contracting action entered into the system, not being able to capture any services performed under contracts that are predominantly for supplies, not capturing service contracts awarded on behalf of DOD by non-DOD agencies, not being able to identify the requiring activity specifically, and not being able to determine the number of contractor FTEs used to perform each service. As with the fiscal year 2009 inventory, AT&L authorized the Army to continue to use its CMRA data system. CMRA is intended to capture data directly reported by contractors on each service performed at the contract line item level, including information on the direct labor dollars, direct labor hours, total invoiced dollars, the functions and mission performed, and the Army unit on whose behalf contractors are performing the services. In instances where contractors are performing different services under the same order, or are performing services at multiple locations, they can enter additional records in CMRA to capture information associated with each type of service or each location. Under its approach, unlike the Air Force and the Navy, the Army included all categories of research and development services in its inventory and identified the services provided under contracts for goods. To report the number of contractor FTEs, the Army indicated that it divided the number of direct labor hours reported by a contractor in CMRA for each service performed by 2,088, the number of labor hours in a federal employee work year. For other data elements in its inventory, such as the funding source and contracting organization, the Army relied on the Army Contract Business Intelligence System and updates from resource managers, contracting officer’s representatives, and other officials. 
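The Army's hours-to-FTE conversion described above is simple arithmetic and can be sketched as follows; the reported hour totals are invented for illustration.

```python
# Sketch of the Army's conversion of CMRA-reported direct labor hours into
# contractor FTEs: hours divided by 2,088, the number of labor hours in a
# federal employee work year. The example hour totals are invented.

HOURS_PER_WORK_YEAR = 2_088

def contractor_ftes(direct_labor_hours: float) -> float:
    """Convert contractor-reported direct labor hours to full-time equivalents."""
    return direct_labor_hours / HOURS_PER_WORK_YEAR

# A service with 10,440 reported direct labor hours works out to 5 FTEs:
print(contractor_ftes(10_440))  # 5.0
```

Because CMRA captures hours at the contract line item level, this conversion can be applied per service and per location and then summed, rather than estimated from obligations as under the FPDS-NG approach.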
Overall, DOD reported that in fiscal year 2010, 23 components submitted inventories and estimated that about 623,000 contractor FTEs provided services to DOD under contracts with obligations totaling about $121 billion. In comparison, for fiscal year 2009, DOD reported that 22 components submitted inventories and estimated that about 767,000 contractor FTEs provided services to DOD under contracts with obligations totaling about $155 billion. DOD officials cautioned against comparing the number of contractor FTEs for fiscal year 2009 and fiscal year 2010 given the differences in the estimating formula, the changes in reporting for the research and development category, and other factors. Over the past year, DOD initiated efforts to collect manpower data directly from contractors. These efforts were, in part, in response to Congressional direction in section 8108 of the Fiscal Year 2011 Defense Appropriations Act, which made $2,000,000 available to both the Air Force and the Navy for leveraging the Army’s CMRA system to document contractor FTEs and meet all the requirements of section 2330a(e) and section 235 of title 10 of the U.S. Code. It further required the military departments and DOD components to submit their plans for reporting contractor FTEs no later than June 15, 2011. DOD did not meet this deadline, but it submitted an interim response to Congress in July 2011 detailing a time frame for DOD components to complete and submit their individual plans. In August 2011, the Navy and the Air Force submitted their plans to leverage the Army’s CMRA data system. In their plans, the Navy and the Air Force indicated that they would begin developing requirements for a contractor manpower data collection system, but noted that implementation would not begin until the end of fiscal year 2012 or later. 
Navy and Air Force officials said they wanted to ensure that the data collection system they implemented had the necessary capability to inform other DOD workforce initiatives, such as in-sourcing, and to meet the information technology requirements set by their military departments. Subsequently, in September 2011, DOD components started submitting their plans for reporting contractor FTEs. Some DOD components noted that they would begin modifying existing and future contracts to require the collection of manpower data directly from contractors starting as early as October 2011 and that it would take about a year to modify all their existing contracts. Components also indicated that after the contracts were modified, they would begin collecting the data by conducting manual data calls, implementing the Army’s CMRA system, or using other internal processes. The Navy did not include a time frame for modifying contracts, while the Air Force estimated that it would take approximately 5 years to modify all its contracts. As of February 2012, 43 of the 44 DOD components, including the Army, Navy, and Air Force, had submitted their plans to Congress. Subsequent to submitting these individual plans, DOD continued to revise its approach, as both P&R and AT&L officials expressed concerns about aspects of the timing and approach reflected in the components’ plans. For example, P&R officials noted that the Navy’s August 2011 plan to leverage the Army’s system to collect manpower data directly from contractors “lacks clear and decisive actions and milestones to meet the requirement.” Further, AT&L officials noted that requiring contractors to provide contractor manpower data will require approval from the Office of Management and Budget, as provided for under the Paperwork Reduction Act. 
In November 2011, DOD issued a department-wide plan intended to meet the legislative inventory requirements, including those for collecting contractor manpower data and documenting contractor FTEs. DOD plans to establish a common data system to collect and house contractor manpower data for the entire department and develop a comprehensive instruction on the development, review, and use of the contracted services inventories. To do so, the Office of the Deputy Chief Management Officer, P&R, and other stakeholders formed a working group to help develop and implement the data system and ensure that it leverages existing solutions, such as the Army’s CMRA system. DOD’s November 2011 plan noted that the Army currently has reporting processes and an infrastructure in place to comply with section 2330a of title 10 of the U.S. Code. The plan, however, indicates that DOD will not have a common data system in place throughout the department until fiscal year 2016. P&R officials indicated that although discussions among the working group have begun, obtaining concurrence from the military departments and components on the capability of a common data system may delay implementation. As part of these efforts, DOD submitted an emergency processing request under the Paperwork Reduction Act to the Office of Management and Budget on December 16, 2011. DOD officials subsequently were informed that this request would likely not be approved. Consistent with the requirements of the Paperwork Reduction Act, DOD posted a notice in the Federal Register on February 7, 2012, seeking public comment on its plans to begin collecting direct labor information and other data on DOD contracts. DOD indicated that after it reviews the comments received by the March 23 deadline, a number of other actions will need to be taken before DOD can begin collecting such data.
DOD officials further indicated that they will need to assess the impact these events will have on the actions and timeframes identified in their November 2011 plan. The Army, which previously received approval from the Office of Management and Budget to collect certain contract data from contractors using its CMRA system, received a 3-year extension of this approval on December 15, 2011. P&R and AT&L issued guidance on the submission and review of the fiscal year 2011 inventory on December 29, 2011. This guidance indicates that for fiscal year 2011, the Office of Defense Procurement and Acquisition Policy will provide a data set—at the four-digit product and service code level derived from FPDS-NG—to each DOD component with acquisition authority. Further, the guidance provides a different formula for DOD components to use for estimating the number of contractor FTEs paid for the performance of an activity based on the amount of direct labor hours provided by a contractor under each product and service code. Since the Navy, the Air Force, and most DOD components do not collect direct labor hours directly from contractors, the guidance indicates that DOD components may use the best available data or a variety of methodologies, singularly or in combination, to estimate direct labor hours. These methodologies include collecting direct labor hour information from contractors, collecting direct labor hours as reported by the contracting officer’s representative for the service during fiscal year 2011, referencing the independent government cost estimate or contractor technical proposals to extrapolate hours for services provided in fiscal year 2011, reporting information collected from contract invoices, or calculating the number of contractor FTEs by using information extrapolated from the manpower data collected by the Army from its contractors. 
Under this plan, the Army will continue to use its CMRA system and other established tools and processes for preparing and submitting its inventory. DOD officials noted that DOD intends to develop a comprehensive DOD instruction for the development, review, and use of the service contract inventories. This instruction, which DOD officials indicate will be issued to inform the fiscal year 2013 inventory, is expected to shift primary responsibility for the inventories from the acquisition community to the manpower/personnel community at each DOD component. DOD officials also indicated that the instruction will require that in compiling the inventory of contracted services, all DOD activities are expected to report all services provided in support of or to benefit a DOD component, regardless of the source of the funding or acquisition agent. Additionally, all DOD activities will include in new contracts, or task and delivery orders, the requirement to collect manpower information directly from contractors. In our January 2011 report, we recommended that DOD develop a plan of action, including anticipated time frames and necessary resources, to facilitate the department’s intent of collecting manpower data and to address other limitations in its current approach to meeting inventory requirements. DOD concurred with our recommendations. DOD’s November 2011 plan and December 2011 guidance represent steps in the right direction to meet the legislative requirements and implement our recommendation, but neither document contains milestones or time frames for the development and implementation of a common software and hardware data system to collect and house contractor manpower data. 
Further, while these efforts address the collection of contractor manpower data, they do not specify how DOD will obtain the remaining required data, such as identifying the requiring activity and all functions and missions performed by the contractor, to meet the legislative inventory requirements. Military departments’ required reviews of their fiscal year 2009 inventories were incomplete. Navy headquarters officials had no assurance that their commands conducted the required reviews, and we found no evidence at the Navy commands we contacted that the required reviews were conducted. Army and Air Force reviews of their contracted services identified 2,026 instances in which contractors were performing inherently governmental functions. We found that contractors continued to perform functions that were identified as inherently governmental in 8 of the 12 Army and Air Force cases we reviewed. In some of the other cases, the Army took steps in response to the inventory review process, such as transferring responsibility to military personnel. Where contractors continued to perform such functions, officials cited various obstacles; for example, Army officials cited difficulty in hiring DOD civilians, caused by DOD’s decision to freeze civilian FTE levels at the fiscal year 2010 level, as hindering their ability to resolve instances identified during the inventory review process. Moreover, contracting and program officials were unaware that the inventory review process had identified functions under their contracts as inherently governmental. The absence of guidance that provided for clear lines of responsibility and accountability for conducting, documenting, and addressing the results of the reviews contributed to these outcomes. The military departments’ reviews of the fiscal year 2009 inventories were incomplete, as the Navy did not conduct a review.
The Army and Air Force identified 1,935 and 91 instances, respectively, in which contractors were performing inherently governmental functions. The variation in the number of cases reported by the Army and the Air Force may reflect differences in their approaches to conducting the inventory reviews. Table 1 summarizes the number of inherently governmental functions identified by the military departments through their fiscal year 2009 inventory reviews. The Navy issued guidance in September 2010 requiring its commands to conduct a fiscal year 2009 inventory review. The commands were to provide a letter within 45 days that certified that they had completed a review, identified the number of contracts with inherently governmental functions, and provided a corrective action plan. We found no evidence at the commands we contacted that the required reviews were conducted. For example, Fleet Forces Command officials were not aware of a required inventory review and did not recall guidance being issued. Similarly, officials from the Navy’s Space and Naval Warfare Systems Command stated that they did not recall receiving guidance from Navy headquarters to conduct a review of the fiscal year 2009 inventory. Navy headquarters officials did not follow up to ensure that the required reviews were completed and acknowledged that they could not verify whether Navy commands completed the fiscal year 2009 inventory review. The Army used a centralized approach that included a headquarters-level review of all functions performed by contractors. The Army, for its headquarters-level review, established the Panel for Documentation of Contractors, which consists of officials from the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs along with headquarters officials from the acquisition and manpower planning communities. 
Army guidance directs the commands to provide data to the panel, including descriptions of the functions being performed by contractors, the organizational unit for which each function is performed, and an assessment of whether those functions are inherently governmental. The panel reviews information provided by the commands and makes an independent determination to assess whether the functions are inherently governmental. According to panel officials, function descriptions do not always provide insight into the day-to-day activities of contractors, sometimes making it difficult to accurately distinguish inherently governmental functions from those that are closely associated with inherently governmental functions. In instances where there was a difference of opinion on the appropriate assessment of a function, however, panel and command officials reported that they engaged in further discussion in order to reach agreement. Additionally, the Army’s final 2010 acquisition review chartered by the Secretary of the Army identified the Army’s systems coordinator function as inherently governmental. The systems coordinator responsibilities include representing program managers at the Pentagon, acting as a liaison with Congress, preparing principal staff officers for systems reviews, writing background papers for military staff, and representing system program managers on integrated product teams. We found that the panel reviewed 19 of the 26 instances identified by the Army during the 2009 inventory review where contractors were performing the systems coordinator function. An Army manpower official stated that the panel process did not identify the remaining 7 instances. In contrast, we found that the Air Force used a decentralized inventory review approach, which delegated primary responsibility for the review of its inventory to its major commands and components.
In January 2010, the Secretary of the Air Force issued guidance instructing its major commands and components to conduct an initial review of its fiscal years 2008 and 2009 inventories of contracted services. According to an Air Force inventory official, a headquarters review of the initial information submitted by the commands found that approximately 40 percent of the fiscal year 2009 contracts included for review did not contain adequate responses to the required review elements. Because of challenges experienced during the initial review, the Secretary of the Air Force issued additional guidance in October 2010 requiring major commands to complete the review of fiscal year 2009 service contracts that may have been missed. To do so, the Air Force headquarters-level acquisition office provided each major command and component with a spreadsheet to review that contained its portion of the department’s service contracts from the beginning of fiscal year 2009 through August 2010. This guidance instructed the organizations to determine, among other things, whether the activity performed under the contract was an inherently governmental function. In addition to the inventory review, however, this effort was to help implement the Secretary of Defense’s direction to reduce service support contractors, and inform budget justification initiatives as they pertained to contractor employees. As a result of this process, the Air Force identified 91 instances of contractors performing inherently governmental functions, the majority of those instances for work performed for the Air National Guard and the Air Force Space Command. Financial management and contracting officials responsible for conducting the reviews at the Air National Guard and Air Force Space Command, however, cited concerns with the accuracy and completeness of the inventory review data. 
For example, at the Air National Guard, the inventory review efforts were conducted by individuals within the financial management office. During the course of the inventory reviews, however, at least one staff member responsible for the inventory review had left the organization. When we spoke to an Air National Guard official in December 2011, she noted that when she received the spreadsheet from Air Force headquarters, 37 functions on the list were already identified as inherently governmental. Air Force officials, however, provided us with documentation to indicate that other individuals within the Air National Guard had made the determinations in earlier reviews of the inventory data, but these determinations were not communicated to the official. Further, officials at both commands stated that the time Air Force headquarters allowed for the review was not sufficient to review each contract and make an informed determination. By reviewing their inventories of contracted services, the Departments of the Army and Air Force identified instances in which contractors were performing inherently governmental functions, but the departments did not ensure that corrective actions were fully implemented. Several options are available to DOD when contractors are in fact performing such functions, including modifying the statement of work to ensure that the work performed is not inherently governmental, assigning responsibility for that work to government personnel, or divesting or discontinuing the work. In 8 of the 12 cases we reviewed at the Army and the Air Force, however, contractors continued to perform functions that the military departments had identified as inherently governmental during the fiscal year 2009 inventory reviews (see table 2). In 4 cases, contractors are no longer performing the functions that had been identified as inherently governmental. 
In 3 of these cases, according to officials, the Army took steps in response to the inventory review process, including ensuring that the work was performed by government personnel. In the fourth case, contracting and program officials were not aware that this function had been identified as inherently governmental during the review process, but noted that the function had already been in-sourced by the time they became aware of the determination. In 2 instances where contractors were performing the Department of the Army systems coordinator function under task orders, program officials reported that the Army transferred responsibilities for these functions to military personnel. In one of these cases, a contracting official noted that he had initially limited the period of performance on the task order to a 1-year base period with 6-month options because of concern at the time of award that this function was, at the very least, closely associated with inherently governmental functions and because he was aware that the Army wanted to convert this function to a civilian position. A military officer replaced the contractor on October 31, 2011, when the first 6-month option expired. In another case, involving a $2.1 million warehouse support contract at the Army’s Training and Doctrine Command, a command official clarified that the function performed by the contractor was not inherently governmental. She further explained that she believes the performance of this service was identified as such by the Panel for Documentation of Contractors because the function description included the term “warehouse supervisor.” The command official responsible for tracking resolutions determined that the function in question involved a contractor supervising his own employees, not government employees. She reported that it took 2 years working with panel officials to reach agreement and revise the panel’s determination.
The remaining case involved a contractor providing analytical support for planning, programming, and budgeting matters under a $470,000 contract at the Air Force’s Air National Guard. In this case, we interviewed a program official and a contracting official in November 2011 and December 2011, respectively, to determine if they were aware that the inventory review process had identified functions under this contract as inherently governmental. They stated that they were not aware of this determination, but noted that the contract had expired in September 2010. The program official further noted that the Air National Guard had in-sourced all functions previously performed under this contract. In 8 of the cases we reviewed, however, contractors were still performing inherently governmental functions, as identified during the inventory review process, at the time of our review, for a variety of reasons. For example: In 4 instances where it had been determined that contractors had been performing the Department of the Army systems coordinator function, contractor employees continued to perform these duties. According to an Acquisition Support Center official, in June 2011 the command requested authorization to replace the contractor employees with military personnel. He noted, however, that the command had not received authorization at the time of our review. He explained that the alternative is to in-source the function and fill the positions with civilian personnel. A function, however, may now only be in-sourced if the Secretary of the Army personally approves it. In February 2011, the Secretary of the Army suspended all approved in-sourcing actions that had not yet been completed and instituted a new in-sourcing request and approval process.
The command official reported that preparing the in-sourcing package, which includes a concept plan, a workload and funding profile analysis, a business case, and a contractor inventory review, is a lengthy process, but acknowledged that the command had not submitted this package as of January 2012. In 2 other cases—1 at the Army’s Acquisition Support Center and 1 at the Air Force’s Air National Guard—contracting and program officials were unaware that the inventory review process had identified functions under their contracts as inherently governmental. One case involved a $1.1 million contract at Acquisition Support Center for engineering support in which the contractor employee provided technical expertise and coordination with program office staff, other military departments, Congress, and private companies, among other duties. The other case involved a $409,000 Air National Guard contract for financial analytical support. In both cases, program officials stated that even though the original contracts had expired, contractors continued to perform the same functions under subsequent contracts. In another case at the Air National Guard, the inventory review process identified a function under a $120,000 task order to provide advice and advocacy on Air National Guard positions and programs to the Air Staff and other Air Force major commands as inherently governmental. In this case, the Director of the Contracting Division, with responsibility for this task order, stated that she first became aware that the function was identified as inherently governmental in October 2010 but disagreed with the determination. She was not aware of any process in the Air Force or Air National Guard to resolve the disagreement. When the task order expired in May 2011, she renewed the function under a separate task order. 
In the remaining case, involving a $6.1 million information technology support contract at Army Training and Doctrine Command’s Defense Language Institute, a “project manager” function had been identified as inherently governmental. A command official noted that the contractor employee was no longer performing the function because the contract had expired in March 2010. When we reviewed contract documents, however, we found that the contract had been extended to March 2011. Further, the Defense Language Institute had entered into a memorandum of agreement with the Naval Postgraduate School to provide the same technology support services. According to a program official, a contractor employee is still performing the same function under a Navy contract. In addition to our case studies, Army Manpower and Reserve Affairs officials acknowledged that they are aware of at least 1 instance, included in the Panel for Documentation of Contractors review process, in which contractors continue to perform functions that the Army identified as inherently governmental. In this case, 47 contractors, including 2 investigators, make up the entirety of a police force at U.S. Army Kwajalein Atoll in the Marshall Islands. These contractors perform all duties expected from a police force, including patrolling, issuing citations, making arrests, and investigating misdemeanors. According to Manpower and Reserve Affairs officials, command officials disagreed with the determination, but on February 22, 2010, the Army Deputy General Counsel for Operations and Personnel issued a legal opinion that concluded that certain functions performed by the contractors were inherently governmental and could not be performed by a contractor. According to Manpower and Reserve Affairs officials, contractors continue to perform these inherently governmental functions. 
They also noted that DOD’s decision to freeze civilian FTEs at fiscal year 2010 levels is an impediment to resolving the performance of these inherently governmental functions by contractors. To address compliance issues with the inventory review and provide additional guidance on the process, the Acting Under Secretaries of Defense for Acquisition, Technology, and Logistics and for Personnel and Readiness jointly issued guidance for the fiscal year 2011 inventory review on December 29, 2011. The guidance specifies that military departments and defense components must review, at a minimum, 50 percent of all contracts, task orders, delivery orders, or interagency acquisition agreements listed in their inventories for a given fiscal year. While conducting the reviews of contracts, the guidance states that the military departments and defense components should also review how the contracts are performed and administered, as well as the organizational environment in which they are being performed. After a review is complete, the military departments and defense components will now be required to certify that they have completed a review and submit a letter to P&R with the following information: an explanation of the methodology used to conduct the review and the criteria for selecting contracts to review; the results of the review, including any inherently governmental functions, functions closely associated with inherently governmental functions, or unauthorized personal services contracts identified; a plan for divesting or realigning functions for contracts that were identified as inherently governmental; and an explanation of the steps taken to ensure appropriate government control and oversight for functions that were identified as closely associated with inherently governmental functions. 
The guidance, however, does not clearly establish lines of accountability and responsibility within the military departments and defense components for conducting the inventory reviews and addressing instances where contractors are identified as performing inherently governmental functions. Congress has mandated that DOD use the inventory of contracted services and the associated review process to help DOD ensure that contractors are performing work that is appropriate, to support development of DOD’s annual strategic workforce plan, and to specify the number of contractor FTEs included in DOD’s annual budget justification materials. As such, it is essential that the inventories contain comprehensive, accurate, and actionable data for each service performed. DOD, with the exception of the Army, has much further to go in addressing the requirements for compiling and reviewing the inventories of contracted services. DOD made incremental improvements to its process to address some of the previously identified limitations when it compiled its fiscal year 2010 inventory, but it has not resolved the fundamental issue of how to collect the required data to meet the legislative inventory requirements, including manpower data directly from contractors. DOD took a significant step in November 2011 to identify objectives for collecting contractor manpower data from contractors, but DOD indicates that implementation will not be complete until 2016. Given the potential value and importance of compiling a complete and accurate inventory, it would benefit DOD to move more expeditiously. DOD’s plan, however, does not specify time frames or milestones to measure its progress toward developing an enterprisewide data system to collect contractor manpower data, even though it acknowledged that reaching agreement on that approach would be challenging. 
We therefore reiterate our prior recommendation that DOD’s plans include milestones and time frames to gauge progress in meeting the inventory requirements. The Army and Air Force conducted inventory reviews, but the wide variation in the number of instances of contractors performing inherently governmental functions raises the question as to how much of the variation is due to the different approaches used to conduct the reviews. Further, the Navy was unable to provide assurance that it actually conducted the statutorily required review of its fiscal year 2009 inventory. This underscores the need for greater accountability and management attention. The absence of guidance, at all levels, providing clear lines of responsibility for conducting, documenting, and addressing issues identified during the fiscal year 2009 inventory review process contributed to instances in which contractors continued to perform functions identified as being inherently governmental in 8 of the 12 Army and Air Force cases we reviewed. Army officials also cited challenges with DOD’s decision to freeze civilian FTEs at fiscal year 2010 levels and the in-sourcing process as complicating their efforts to resolve issues identified during their inventory reviews, including those instances at Kwajalein Atoll. Such challenges, however, do not justify the continued use of contractors to perform inherently governmental functions, in several cases more than a year after the issue was initially identified. DOD’s December 2011 guidance will require the military departments and defense components to certify that they have conducted the required reviews, but the guidance does not clearly establish lines of accountability and responsibility within the military departments and defense components for doing so. 
DOD’s experience in conducting the fiscal year 2009 review demonstrates the importance of guidance that provides for clear lines of authority, responsibility, and accountability if DOD is to use the inventories to help identify and mitigate the risks posed by using contractors to perform certain functions. To address these issues, we are making the following three recommendations: To improve the execution and utility of the inventory review process, we recommend that the Secretary of Defense ensure that the military departments and defense components issue guidance to their commands that provides clear lines of authority, responsibility, and accountability for conducting an inventory review and resolving instances where functions being performed by contractors are identified as inherently governmental functions. To ensure that the six instances we reviewed in which the Army identified that contractors were still performing functions it deemed inherently governmental, as well as those at Kwajalein Atoll, have been properly resolved, we recommend that the Secretary of the Army review these functions, determine the status of actions to resolve the issues, and, as appropriate, take necessary corrective actions. To ensure that the two instances we reviewed where contractors were still performing functions the Air Force had previously identified as inherently governmental are properly resolved, we recommend that the Secretary of the Air Force review these functions, determine the status of actions to resolve the issues, and, as appropriate, take necessary corrective actions. DOD provided us with written comments on a draft of this report, stating that it largely agreed with our recommendations and is committed to continuing to improve its accounting for contracts for services. More specifically, DOD agreed with two recommendations and partially concurred with one recommendation. DOD’s written response is reprinted in appendix II. 
DOD also provided technical comments, which were incorporated as appropriate. DOD concurred with our recommendations to address instances we reviewed in which the Army and Air Force identified that contractors were still performing functions deemed inherently governmental. DOD noted that it will work with the Army and Air Force to ensure corrective actions, as appropriate and necessary, are taken. DOD partially concurred with our recommendation that the Secretary of Defense ensure that the military departments and defense components issue guidance to their commands that provides for clear lines of authority, responsibility, and accountability for conducting an inventory review, and for resolving instances where functions being performed by contractors are identified as inherently governmental. DOD agreed it was imperative for the components to do so, but noted that its December 2011 guidance, while not prescribing individual management practices, requires component heads to certify completion of and results from the reviews. Further, DOD noted that as defense components vary in size and mission, the need for individual components to have organization-specific guidance should not be mandated but rather determined by each component head. Our recommendation does not intend that DOD prescribe individual component management practices or mandate organization-specific guidance. We agree that each component should institute guidance that fits its mission and needs and that the precise nature of each component’s guidance may vary in scope and detail. Our work found, however, that the absence of guidance at the military department level that provides for clear lines of authority, responsibility, and accountability contributed to the shortcomings and challenges encountered during the military departments’ review of their fiscal year 2009 inventories. 
Given these results, we continue to believe that it would be prudent for DOD to obtain sufficient assurance that the military departments’ and components’ guidance covers the areas—including those enumerated above—that provide the foundation for conducting a meaningful review. DOD’s December 2011 guidance, while a step in the right direction, does not provide such assurances. DOD also noted in its comments that the Office of Management and Budget had indicated via e-mail that it would disapprove DOD’s request for an emergency waiver to the Paperwork Reduction Act. Consequently, consistent with the requirements of the Paperwork Reduction Act, DOD posted a notice in the Federal Register in February 2012 seeking public comment on its plans to begin collecting direct labor information and other data on DOD contracts. DOD indicated that after it reviews the comments received by the March 23 deadline, a number of other actions will need to be taken before DOD can begin collecting such data. As a result, DOD officials told us they will need to assess the impact these events will have on the actions DOD identified in its November 2011 plan and, as such, will not be able to develop additional milestones until this is done. We modified the text in the report to reflect this updated information. We are sending copies of this report to the Secretary of Defense, Secretary of the Air Force, Secretary of the Army, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-4841 or [email protected] or (404) 679-1808 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III. 
Section 803(c) of the National Defense Authorization Act for Fiscal Year 2010 directs GAO to report for 3 years on the inventory of activities performed pursuant to contracts for services that are to be submitted by the Secretary of Defense, in 2010, 2011, and 2012, respectively. To satisfy the mandate for 2011, we assessed (1) the progress the Department of Defense (DOD) has made in addressing limitations in its approach when compiling the fiscal year 2010 inventories on contracted services and in developing a strategy to obtain manpower data and (2) the extent to which the military departments addressed issues with contractors performing inherently governmental functions identified during reviews of their fiscal year 2009 inventories. As the military departments accounted for 83 percent of the reported obligations on service contracts and 92 percent of the reported number of contractor full-time equivalents (FTE) in the fiscal year 2009 inventories, we focused our efforts on the Army, Navy, and Air Force. To assess the progress DOD has made in addressing limitations in its previous approach when compiling the service contract inventories, we reviewed relevant guidance related to the inventory compilation processes and interviewed cognizant officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L), Office of Defense Procurement and Acquisition Policy; the Office of the Under Secretary of Defense for Personnel and Readiness; and the departments of the Army, Navy, and Air Force. We assessed changes made at the department level between the approaches for fiscal year 2009 and fiscal year 2010, but we did not assess the extent to which the change in approach affected the estimated number of contractor FTEs reported in the inventories. 
For the fiscal year 2010 inventory, AT&L continued to rely on data from the Federal Procurement Data System-Next Generation (FPDS-NG) for most defense components other than the Army and the TRICARE Management Activity. As with the fiscal year 2009 inventory, the Army continued to use its Contractor Manpower Reporting Application (CMRA) that reports manpower data collected directly from its contractors. We reviewed Army guidance, interviewed officials responsible for the inventory compilation, and reviewed our prior work to describe the Army’s inventory compilation process. We did not independently assess the accuracy or reliability of the underlying data supporting the Army’s, Navy’s, or Air Force’s fiscal year 2010 inventory. Our January 2011 report, however, identified limitations associated with using FPDS-NG data as the basis for the inventory. As such, we reviewed our prior work to identify these limitations and discussed with AT&L officials what steps, if any, they had taken to address these limitations. To assess DOD’s progress in developing a strategy to obtain manpower data, we reviewed DOD’s efforts to respond to congressional direction reflected in section 8108 of the 2011 Defense Appropriations Act, which required the Navy and the Air Force to submit plans to leverage the Army’s CMRA system and the military departments and components to submit plans to Congress for reporting contractor FTEs no later than June 15, 2011. We reviewed and assessed the 43 plans submitted by the military departments and defense components as of February 2012 as well as DOD’s November 2011 plan, which included instructions to the military departments and DOD components to document contractor FTEs and begin the collection of manpower data. 
We interviewed officials from AT&L’s Office of Defense Procurement and Acquisition Policy, and the Office of the Under Secretary of Defense for Personnel and Readiness to obtain their views on the department’s plans to collect contractor manpower data. To assess the extent to which the military departments addressed instances in which contractors were performing inherently governmental functions, we used data from the fiscal year 2009 inventory reviews, which were the most current reviews available at the time we began our work. To do so, we reviewed a total of 12 instances in which contractors were identified as performing inherently governmental functions. We selected two Army commands and one Air Force component based in part on the number of such instances they had identified. For the Army, we randomly selected 3 instances from the Army’s inventory review data at the Training and Doctrine Command and 3 instances at the Acquisition Support Center. These data included determinations made by the Army’s Panel for the Documentation of Contractors, which identified the functions as being inherently governmental, closely associated with inherently governmental, or unauthorized personal services, as well as the commands’ determination of how each function was resolved. For the 6 Army cases we randomly selected, we reviewed the inventory review data and interviewed officials responsible for the fiscal year 2009 inventory review process. We subsequently eliminated 3 of the instances we randomly selected because they were identified as closely associated with inherently governmental functions or unauthorized personal services. In addition, we reviewed 6 instances where contractors were performing the duties of Department of the Army systems coordinators. The Army’s 2010 acquisition review, chartered by the Secretary of the Army, determined that these positions were inherently governmental. 
For each of these cases, we reviewed the contract files and interviewed program and contracting officials responsible for these contracts to determine the extent to which DOD took action to resolve instances of contractors performing inherently governmental functions. The Air Force provided data to us in September 2011 that summarized the results of its fiscal year 2009 inventory review process, including functions being performed by contractors that it identified as inherently governmental. Pursuant to the Secretary of the Air Force’s October 2010 guidance, the inventory review was to include all contracts from the beginning of fiscal year 2009 through August 2010. From these data, we determined that the Air National Guard had the largest number of instances in which the review process identified contractors as performing inherently governmental functions, and randomly selected 3 instances the Air Force had identified as including inherently governmental functions at the Air National Guard. We also interviewed officials from the Air National Guard and the Air Force Space Command about their inventory review process. In November 2011, Air Force officials provided us a revised data set that excluded contracts awarded from October 2009 through August 2010, including the contracts at the Air National Guard. Since these contracts were reviewed as part of the Air Force’s fiscal year 2009 inventory review process as directed by the Secretary of the Air Force and included functions identified as inherently governmental, we included them in our review. At the time we initiated our work, Navy headquarters officials did not have the results of their fiscal year 2009 inventory review process available and subsequently acknowledged that they were uncertain whether their commands conducted the required reviews. Consequently, we considered DOD’s fiscal year 2010 in-sourcing data that were reported to Congress in September 2011. 
From the Navy’s in-sourcing data, we selected Fleet Forces Command and Space and Naval Warfare Systems Command for further review based on the number of positions they reported as in-sourced because of contractor performance of an inherently governmental function. We also contacted Naval Sea Systems Command, the largest of five Navy systems commands, to discuss whether the command had conducted an inventory review but were informed that the command had not done so. We did not review individual Navy contracts because Space and Naval Warfare Systems Command officials stated that it was not possible to track the positions to specific contracts. Also, we found that the commands subsequently reported that the functions were not inherently governmental and were in-sourced for other reasons, such as to provide Navy personnel career progression opportunities. For the 12 cases we included in our review, we compared the information that the Army and Air Force provided regarding the contracts they reviewed with information in the contract files and found the data sufficiently reliable for the purposes of our work. We did not, however, independently assess whether the functions the military departments identified were in fact inherently governmental. Further, the results of our analysis are not generalizable to all instances where contractors were performing inherently governmental functions identified by the military departments. We conducted this performance audit between July 2011 and April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contacts named above, Timothy DiNapoli, Acting Director; MacKenzie Cooper; Julia Kennon; John Krump; Angie Nichols-Friedman; and Guisseli Reyes-Turnell made key contributions to this report.
DOD relies on contractors to perform many functions, which can offer benefits for DOD. GAO’s work has shown that reliance on contractors to support core missions, however, can place DOD at risk of contractors performing inherently governmental functions. This report addresses (1) the progress DOD has made in compiling its fiscal year 2010 inventory of contracted services and in developing a strategy to obtain contractor manpower data and (2) the extent to which the military departments addressed instances of contractors performing functions identified as inherently governmental during reviews of their fiscal year 2009 inventories. GAO reviewed DOD guidance, interviewed acquisition and manpower officials, and assessed 12 instances from a nongeneralizable sample in which the Air Force and Army determined that contractors had performed inherently governmental functions. The Department of Defense (DOD) made a number of changes to improve the utility of the fiscal year 2010 inventory, such as centrally preparing contract data to provide greater consistency among DOD components and increasing the level of detail on the services provided. DOD, however, continued to rely primarily on the Federal Procurement Data System-Next Generation (FPDS-NG) for the inventory for most defense components other than the Army. As such, DOD acknowledged a number of factors that limited the utility, accuracy, and completeness of the inventory data. For example, FPDS-NG does not identify more than one type of service purchased for each contract action, provide the number of contractor full-time equivalent personnel, or identify the requiring activity. As before, the Army used its Contractor Manpower Reporting Application to compile its fiscal year 2010 inventory. This system collects data reported by contractors on services performed at the contract line item level, including information on labor hours and the function and mission performed. DOD officials noted that the Army’s current process complies with legislative requirements. 
In January 2011, GAO recommended that DOD develop a plan with time frames and the necessary resources to facilitate its efforts to collect contractor manpower data and address other limitations in its approach to meeting inventory requirements. DOD concurred with these recommendations. In November 2011, DOD submitted to Congress a plan to collect contractor manpower data. DOD officials noted that developing a common data system to collect and house these data would be challenging given the different requirements from the military departments and components. Consequently, DOD does not expect to fully collect contractor manpower data until fiscal year 2016. DOD’s plan, however, does not establish milestones or specify how it will meet the legislative requirement to identify the requiring activity and the function and missions performed by the contractor. Military departments’ required reviews of their fiscal year 2009 inventories of contracted services were incomplete. Navy headquarters officials had no assurance that their commands conducted the required reviews, and GAO found no evidence at the commands it contacted that the required reviews were conducted. Army and Air Force inventory reviews identified 1,935 and 91 instances, respectively, in which contractors were performing inherently governmental functions, though this variation may reflect differences in the departments’ approaches to conducting the reviews. In 8 of the 12 Army and Air Force cases GAO reviewed, contractors continued to perform functions the military departments identified as inherently governmental. The absence of guidance that provided for clear lines of responsibility for conducting, documenting, and addressing the results of the reviews contributed to these outcomes. Further, Army officials cited difficulty in hiring DOD civilians caused by DOD’s decision to freeze civilian full-time equivalents at fiscal year 2010 levels. 
DOD issued guidance in December 2011 that will require the military departments and components to certify that they have conducted the required reviews. The guidance, however, does not clearly establish lines of accountability and responsibility within the military departments and defense components for conducting the inventory reviews and addressing instances where contractors are identified as performing inherently governmental functions. GAO recommends that the military departments and components develop guidance that provides for clear lines of authority, responsibility, and accountability for conducting an inventory review and that the Army and Air Force resolve known instances of contractors performing inherently governmental functions. DOD largely agreed with GAO’s recommendations.
In 2011, according to Corrosion Office officials, the Corrosion Office established the TCC program, a research and development program that is the successor to the University Corrosion Collaboration (UCC) pilot program, established in 2008. The TCC program builds on efforts of the UCC pilot program by expanding and formalizing the role of military personnel, such as representatives at military research labs, in problem identification, research project development, project monitoring, and product transition. DOD relies, in part, on researchers at universities and military research labs to identify, pursue, and develop new technologies that address the prevention or mitigation of corrosion affecting military assets. The Corrosion Office oversees the TCC program, advocates for TCC funding as part of the President’s annual budget, funds TCC projects based on available budget, convenes and chairs the panel that selects projects, and regularly communicates progress and status of the TCC program to the Corrosion Control and Prevention Executives (hereafter referred to as Corrosion Executives). The Corrosion Office’s 2014 DOD Corrosion Prevention and Mitigation Strategic Plan includes an objective to increase the number of people educated in corrosion engineering and management. With regard to the TCC program, the strategic plan cites the education goal of producing individuals with education and skills that will form the future core of DOD’s corrosion community. The current TCC program includes 15 universities—civilian institutions and military academic institutions—that conduct projects on corrosion issues, and nine military research labs that support the universities. Appendix II shows the current list of TCC-affiliated universities and labs, as of February 2014. As of February 2014, according to the Corrosion Office, it has provided funding to universities and labs for 126 projects since the program began in 2008. 
The universities associated with the TCC program are responsible for, among other things, assisting in the identification of research and development opportunities; conducting TCC projects in collaboration with other universities and DOD technical personnel at the military research labs; and producing products that can be transitioned to systems development or prototype demonstration. Additionally, military research labs are responsible for, among other things, identifying areas of research and development that can mitigate current DOD corrosion problems or address future problems; working with the universities participating in the TCC program to develop sound and focused research and development projects; and monitoring and guiding work in progress at the universities. The term “university” includes civilian institutions and military academic institutions. Civilian institutions include public universities, a private university, and a commercial organization that conduct research. SAFE Inc. is the commercial organization that, among other things, conducts TCC projects for the U.S. Air Force Academy. Military academic institutions include military service academies (i.e., the U.S. Military Academy at West Point; the Naval Postgraduate School; and the Air Force Institute of Technology, a graduate school). The evolution of technology comprises four main phases: (1) understanding the concept or obtaining a better understanding of the concept (i.e., research), (2) technology product development, (3) technology demonstration, and (4) implementation. The TCC program falls under the first phase, and the military demonstration projects, which we previously reported on, fall under phase 3. Military demonstration projects differ from the TCC projects because they are more mature than TCC projects. The Corrosion Office oversees processes to select, approve, and fund projects within the TCC program. 
Selection: According to the Corrosion Office, the office convenes a panel of experts chaired by the Deputy Director of the Corrosion Office and including personnel from the Corrosion Office and the Director of the research center at the U.S. Air Force Academy. The panel of experts evaluates civilian institutions’ white papers and more-detailed formal proposals to select institutions’ project proposals for final approval by the Corrosion Office. The Corrosion Office directly evaluates proposals submitted by military academic institutions and labs to select entities for approval. Approval: The Corrosion Office’s Deputy Director approves the final list of TCC projects to be conducted by civilian institutions and military academic institutions, and the final list of military research labs that support the institutions. Funding: When civilian institutions’ proposals are approved, the Corrosion Office provides funds—primarily using Research, Development, Test, and Evaluation funds, and some Operation and Maintenance funds—to the contracting division within the U.S. Air Force Academy, which pays the researchers at the civilian institutions to conduct research. Researchers at the military academic institutions and military research labs receive funding directly from the Corrosion Office. The Corrosion Office monitors the projects through, among other things, TCC annual reviews and status reports. Corrosion Office officials stated that when a university completes its research, university project managers send a final report about the results to the Corrosion Office. DOD’s Corrosion Office has established procedures for managing some aspects of the TCC program, but it has not documented procedures for approving TCC projects. Specifically, for civilian institutions, the Corrosion Office has documented procedures for selecting projects, but it has not documented procedures for approving these projects. 
Additionally, the Corrosion Office has not documented procedures for selecting and approving projects for military academic institutions that conduct the research and military research labs that support civilian and military institutions. The Corrosion Office revised its DOD Corrosion Prevention and Mitigation Strategic Plan in January 2014 to include the minimum requirements and other factors to consider when selecting projects to be funded under the TCC program. Prior to the revised 2014 strategic plan, according to Corrosion Office officials, they included the process for selecting projects in the TCC Definitions Document, which was created and shared with the participants of the TCC program in 2010. Corrosion Office officials stated that they updated the contents of the definitions document and included the information in the revised strategic plan. However, we found that procedures for managing key aspects of the TCC program, such as procedures for selecting and approving TCC projects, are not fully documented in the 2014 revised strategic plan or other documentation, such as management directives, administrative policies, or operating manuals for some projects. According to the Standards for Internal Control in the Federal Government, all transactions and other significant events, such as the procedures for managing the TCC program, need to be clearly documented and readily available for examination. As part of internal control standards, documentation should appear in management directives, administrative policies, or operating manuals, and may be in paper or electronic form. In addition, these standards state that all documentation and records should be properly managed and maintained. For civilian institutions, Corrosion Office officials stated that they use the U.S. 
Air Force Academy’s documented process, called the Broad Agency Announcement (hereafter referred to as the BAA process), which includes written instructions or procedures for selecting projects, but the office has not documented how the Deputy Director of the Corrosion Office approves the final list of projects. These officials stated that under the BAA process, the U.S. Air Force Academy publicly announces the Corrosion Office’s intent to fund TCC projects that focus on researching technologies to help prevent and mitigate corrosion affecting military assets. Corrosion Office officials stated that they use the BAA process to review and evaluate white papers and formal proposals. According to representatives from civilian institutions, they provide white papers and formal proposals in response to the BAA. A Corrosion Office official stated that procedures associated with selecting projects, such as identifying that the Corrosion Office will convene and chair the project selection panel, are partially documented in the TCC Definitions Document. Corrosion Office officials also stated that their 2014 strategic plan identifies that the office will convene and chair the panel. According to Corrosion Office officials, to review the white papers, the Corrosion Office convenes a panel of experts, and the panel uses requirements identified in the BAA to evaluate which civilian institutions will be notified to submit formal proposals. These officials stated that the panel selects white papers for additional development, requests the civilian institutions to provide formal proposals, and evaluates formal proposals based on requirements published in the BAA. Specifically, the panel identifies which formal proposals will be considered for final approval by the Corrosion Office and sends the selected proposals to the Corrosion Office’s Deputy Director for final approval. 
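The multistage flow described above—white-paper review, a request for formal proposals, panel evaluation against the BAA requirements, and final approval by the Deputy Director—can be sketched schematically. The function, field names, and scoring threshold below are illustrative assumptions chosen to model the documented stages; they are not part of DOD's actual BAA tooling or selection criteria.

```python
# Schematic model of the BAA selection flow: white papers are screened,
# shortlisted institutions submit formal proposals, the panel evaluates
# them against the BAA requirements, and surviving proposals go to the
# Deputy Director for final approval. Scores and fields are hypothetical.

def select_projects(white_papers, panel_threshold=0.7):
    """Return titles of proposals that survive every documented stage."""
    # Stage 1: the panel screens white papers; only those judged strong
    # enough (here, a notional numeric score) move forward.
    shortlisted = [wp for wp in white_papers if wp["score"] >= panel_threshold]
    # Stage 2: shortlisted institutions submit formal proposals, which the
    # panel evaluates against the published BAA requirements.
    panel_picks = [wp for wp in shortlisted if wp["meets_baa_requirements"]]
    # Stage 3: the panel forwards its picks to the Deputy Director, who
    # approves the final list (modeled here as accepting the panel's picks).
    return [wp["title"] for wp in panel_picks]

papers = [
    {"title": "Coating sensors", "score": 0.8, "meets_baa_requirements": True},
    {"title": "Fastener tests",  "score": 0.9, "meets_baa_requirements": False},
    {"title": "Alloy study",     "score": 0.5, "meets_baa_requirements": True},
]
print(select_projects(papers))  # only "Coating sensors" survives all stages
```

The sketch makes one point concrete: a project can fail at any stage, so documenting only the screening criteria (as the office has done) still leaves the later approval step unrecorded.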
Once the projects are approved, according to agency officials, the Corrosion Office provides TCC funds to the U.S. Air Force Academy, which pays the researchers at the civilian institutions through cooperative agreements and grants. For military academic institutions, Corrosion Office officials stated that they have established a process for selecting projects, which is identified in DOD’s TCC Definitions Document and its 2014 strategic plan that include requirements and other factors to consider when selecting some TCC projects for approval. Officials stated that the requirements apply to both civilian institutions and military academic institutions. However, the Corrosion Office has not documented the type of information required from military academic institutions, including project proposals and steps taken by decision makers to select and approve projects. The Corrosion Office evaluates military proposals to select some projects for approval based on requirements identified in the definitions document and 2014 revised strategic plan. The Corrosion Office approves the final list of TCC projects based on the proposals it receives from the military academic institutions and provides funds directly to the researchers at the military academic institutions to conduct research. The Corrosion Office uses Military Interdepartmental Purchase Requests to transfer funds between the Corrosion Office and the military academic institutions. For military research labs, the Corrosion Office described how it selects and approves the labs to, among other things, work with the civilian and military academic institutions participating in the TCC program to develop sound and focused research and development projects, and to monitor and guide work in progress at the civilian and military institutions. 
However, the Corrosion Office has not documented procedures, such as steps taken by decision makers to select and approve the military research labs, in the Corrosion Office’s documents or guidance, such as the strategic plan. According to Corrosion Office officials, they review information from the labs regarding an explanation of how the labs plan to assist the civilian and military institutions in conducting TCC projects and select the highest priority activities within the available budget. For example, according to a military research lab representative, it reviews the civilian institutions that participated in the program and their TCC efforts and indicates to the Corrosion Office which institutions it can best support. The Corrosion Office determines the final list of labs that will receive funding and provides funds directly to the researchers at the military research labs to pay for their participation in the TCC research. As previously stated, according to the Standards for Internal Control in the Federal Government, all transactions and other significant events, such as the procedures for managing the program, need to be clearly documented. We found that the Corrosion Office has documented its procedures for selecting military demonstration projects in its 2014 strategic plan but has not fully documented its procedures for managing key aspects of the TCC program in keeping with federal standards for internal control. According to Corrosion Office officials, the procedures for some aspects of the TCC program are not documented because the program is still evolving and they would like flexibility to enable innovation in determining how to manage the program. Corrosion Office officials acknowledged that their procedures for selecting TCC projects could be included in their definitions document. 
Without fully documenting its decision-making procedures for selecting and approving projects, the Corrosion Office cannot demonstrate how projects were selected and approved for the TCC program. Corrosion Office officials provided the amount of funds for the TCC program for fiscal years 2008 to 2013, but lacked readily available or consistent documentation to support some of the funding data. As a result, it is unclear what the Corrosion Office has spent on the TCC program. Section 2228 of Title 10 of the United States Code requires the Corrosion Office to include a description of the specific amount of funds used for the TCC program and other corrosion-prevention and mitigation activities (for the prior year) in its annual corrosion budget report. In addition, Standards for Internal Control in the Federal Government state that agencies should clearly document transactions and other significant events and the documentation should be readily available for examination. Also, federal internal control standards state that agencies should have accurate and timely recording of transactions and events. Specifically, we found that the Corrosion Office could not fully support or readily show documentation for some of the TCC funding data it provided us. For fiscal year 2008, Corrosion Office officials could not provide supporting documentation for the approximately $6.8 million that it reported spending on the TCC program in that year. Corrosion Office officials stated that they used a different financial management system in 2008 and did not maintain documents from that time frame. For fiscal years 2009 to 2013, we attempted to verify the office’s funding data using the Military Interdepartmental Purchase Requests that the Corrosion Office uses to transfer TCC funding to the military institutions. However, some of the documentation the officials provided did not fully reconcile with the final funding data they provided. 
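The reconciliation attempted here—summing supporting purchase requests by fiscal year and comparing the total with the office's reported figure—is simple arithmetic, and a short sketch shows it. All fiscal years and dollar amounts below are hypothetical, invented for illustration; they are not the Corrosion Office's actual records.

```python
# Illustrative reconciliation of reported program totals against supporting
# purchase-request documents, by fiscal year. Dollar figures are hypothetical.

def reconcile(reported, purchase_requests, tolerance=0):
    """Return fiscal years whose documented totals differ from the reported
    figure by more than the tolerance, along with each discrepancy."""
    mismatches = {}
    for fy, amount in reported.items():
        documented = sum(purchase_requests.get(fy, []))
        delta = documented - amount
        if abs(delta) > tolerance:
            mismatches[fy] = delta
    return mismatches

reported = {2012: 18_000_000, 2013: 17_500_000}   # office-reported totals
purchase_requests = {                             # supporting documents
    2012: [10_000_000, 9_300_000],                # sums to $19.3 million
    2013: [17_515_000],                           # sums to $17,515,000
}
print(reconcile(reported, purchase_requests))
# flags FY2012 (off by $1.3 million) and FY2013 (off by $15,000)
```

The sketch also illustrates why comingled funds frustrate the check: if a purchase request mixes TCC and non-TCC money, its line-item amount overstates the documented TCC total unless other records are consulted to isolate the TCC share.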
For example, purchase requests for fiscal years 2012 and 2013 showed amounts greater (by $1.3 million and $15,000, respectively) than the figures the Corrosion Office provided. According to the Corrosion Office, the purchase requests they provided may not fully document specific TCC funding because in some cases the purchase requests included funds for other corrosion efforts comingled with these funds. Further, officials said that one would have to review other supporting documents, such as statements of work, to isolate TCC funds. Regarding the inconsistent funding amounts for the same time frame, in a prior GAO-mandated review of the Corrosion Office’s 2013 budget report, we obtained information from the Corrosion Office and found that it spent $69.5 million for the TCC program from fiscal years 2009 to 2012. In May 2013 (at the beginning of our current review), the Corrosion Office briefed us that it spent $67.7 million on the TCC program for fiscal years 2009 through 2012. When we brought it to the office’s attention that this figure differed, officials asked for additional time to verify their data. In February 2014, officials provided us a revised funding amount of $67.5 million for fiscal years 2009 through 2012, and in March 2014, they provided us a funding amount of $72 million for these same years. Overall, the difference between the first amount and the final amount is about $2.5 million. According to Corrosion Office officials, the funding amounts differed because prior to 2013, the office was not required to track and report TCC funds separately from other corrosion-related activity funds. The office also cited a lack of resources to track and maintain funding data when the program was initiated. We also attempted to independently verify TCC funding by comparing the funding data the Corrosion Office provided us with data provided by a recipient of some of the funds. 
Specifically, we obtained funding information from a university that managed some projects that the Corrosion Office included in its TCC funding from fiscal years 2008 to 2012. We found that some funding data did not match for these years. For example, for fiscal year 2010, the Corrosion Office indicated that it provided $6.3 million to the university for education projects, but the university presented documents showing that the Corrosion Office provided $6.4 million (a difference of about $70,000). When we brought this to the attention of Corrosion Office officials, they agreed to follow up with the university to reconcile the differences in the funds, but have not provided an explanation. According to Corrosion Office officials, these education projects—known as National Center for Education and Research on Corrosion and Materials Performance (NCERCAMP) projects—were conducted at the University of Akron. NCERCAMP projects include research, training, and program integration activities. For fiscal years 2008 to 2013, all of the funds the Corrosion Office provided the university for NCERCAMP projects were accounted for as part of TCC funding. However, Corrosion Office officials stated that they have reconsidered how they account for these funds, and for current and future budgets they plan to account for some NCERCAMP funds under other corrosion-prevention and mitigation activities. Overall, we were unable to verify what the Corrosion Office has spent on the TCC program. Without tracking and maintaining accurate records and fully documenting funding information that is readily available for examination, Corrosion Office officials cannot ensure that they accurately account for and report the TCC program costs in the annual budget report to Congress. DOD’s Corrosion Office has established two goals for the TCC program, and has a process in place to monitor the results of the program. 
According to the 2014 DOD Corrosion Prevention and Mitigation Strategic Plan, TCC has the following goals: (1) develop individuals with education, training, and experience who will form the future core of the technical community within DOD and private industry that specializes in work on corrosion prevention or control; and (2) produce solutions (i.e., knowledge, technologies, processes, and materials) that tangibly reduce the effect of corrosion on DOD infrastructure and weapon systems. To address its goal of developing individuals through education, training, and experience, the Corrosion Office monitors TCC projects that include involving students in corrosion research. The TCC program provides students with the opportunity to pursue advanced education that will form the future core of the technical community within DOD and private industry that specializes in work on corrosion prevention or control. Corrosion Office officials track results and have cited the number of students and research papers that have been produced as a result of receiving TCC funds. The Corrosion Office cited these results as success stories. According to the Corrosion Office, as of January 2014, the TCC program has funded 64 graduate students, and 63 undergraduate students. In addition, TCC funding has resulted in 52 research articles. (App. III provides additional details of the number of graduates and research articles, by TCC participant). Corrosion Office officials stated that it is difficult to measure the success of research and purposely did not set target numbers for students or research papers because sheer numbers would not show the full extent of the benefits received from the number of students educated or the research papers published. We acknowledge that it can be difficult to measure the success of research. 
We previously found, for example, that evaluating the effectiveness of research programs can be difficult, noting challenges such as the long time frames research results can require and the possibility that research may not achieve its intended results but may instead lead to unexpected discoveries that provide potentially more interesting and valuable results. The Corrosion Office has established a research goal for producing solutions that tangibly reduce the effect of corrosion on DOD infrastructure and weapon systems; however, the office has not established a process for transitioning any results of the demonstrated research projects to the military departments. DOD Instruction 5000.67, which implements Section 2228 of Title 10 of the United States Code, establishes policy, assigns responsibilities, and provides guidance for corrosion prevention and control within DOD. The instruction requires the Corrosion Office to develop a long-term strategy for corrosion prevention and mitigation that, among other things, provides for a coordinated research and development program that includes the transition of new corrosion-prevention technologies to military departments. In addition, federal internal control standards state that agencies should establish procedures and mechanisms that enforce management’s directives, such as the process of adhering to requirements, which in this case is the requirement to transition TCC results to the military departments. The Corrosion Office has a process to monitor that the contractual agreements of the TCC research projects are being accomplished. Specifically, according to Corrosion Office officials, the Corrosion Office, among other things, periodically tracks the status of the TCC projects. However, the office’s ultimate goal, officials stated, is to transition results of the demonstrated TCC projects, when possible, to the military departments. 
Corrosion Office officials defined success as the production of products or knowledge that can be used by the military departments as they develop and implement corrosion-control technologies within their services. For example, officials cited one ongoing project as a success story: the project has identified important information about a technique of using fasteners to accelerate corrosion during outdoor exposures. Accelerated testing is an approach that expedites the corrosion of material or its properties and allows officials to obtain more information from a given test time than would normally be possible. According to Corrosion Office officials, this project will provide information that the military departments can use as they design and conduct their future tests. However, the Corrosion Office does not have a process for how it will transition the results of this project to the military departments in accordance with Section 2228 of Title 10 of the United States Code and DOD Instruction 5000.67. The military departments’ Corrosion Executives, who are assigned to be the principal points of contact on corrosion issues, stated that none of the results from TCC projects have transitioned to the military departments. While there are no specific examples of TCC program results that have transitioned to Air Force operational systems, the Air Force’s Corrosion Executive stated there are cases where the results of TCC projects have revealed areas that the Air Force needs to further review, such as the effects of corrosion on structural integrity. A spokesman for the Army’s Corrosion Executive stated that the Army is unaware of any TCC project that has been incorporated into any specific military system or that has specifically affected the Army’s corrosion-prevention and control performance. 
The Navy’s Corrosion Executive stated it is anticipated that at the conclusion of TCC projects, military research labs will continue development of any resulting technologies (to support future platform demonstration, validation, and implementation). However, the Navy does not expect that the technology from TCC’s efforts will be transitioned directly to the Navy’s use but rather to the Technology Product Development phase of technology evolution. Further, the Navy considers knowledge and technical expertise to be the key outputs of TCC efforts, and sees the development of knowledge and technology as long-term efforts. Thus, although the Navy expects tangible benefits from TCC, the Navy believes that it may be too early to visualize potential benefits. Corrosion Office officials stated that it is difficult to transition results of the TCC projects to the military departments because outputs of TCC research are in the early stages of technology evolution and thus are not mature enough to be used by the military departments. Therefore, Corrosion Office officials acknowledged the need to establish a process to transition TCC results to the military departments. Until the Corrosion Office establishes a process to study and determine what, if any, TCC results could transition to the military departments, DOD will not be able to demonstrate the success of the TCC program and the extent to which TCC results are helping to prevent or mitigate corrosion. To help reduce the billions of dollars in annual costs from the effects of corrosion on DOD’s infrastructure and military equipment, the department’s Corrosion Office has been collaborating with universities and military research labs on research for solutions and to educate personnel about corrosion. 
The Corrosion Office has provided an overview of its management process, including minimum requirements for selecting TCC projects, and uses the Broad Agency Announcement process to select some TCC projects; however, officials have not fully documented some key procedures for selecting and approving projects for funding. Documenting this information would be consistent with Standards for Internal Control in the Federal Government, which states that all transactions and other significant events, such as the procedures for managing a program, need to be clearly documented. Without fully documenting its decision-making procedures for selecting and approving projects, the Corrosion Office cannot demonstrate how projects were selected and approved for the TCC program. Internal control standards also state that agencies should clearly document transactions and documentation should be readily available for examination. Section 2228 of Title 10 of the United States Code also requires that DOD annually report the amount of funds used for the TCC program to Congress. We determined that the Corrosion Office did not maintain accurate records, or have supporting documents readily available for examination. Without tracking and maintaining accurate records and fully documenting funding information that is readily available for examination, Corrosion Office officials cannot ensure that they accurately account for and report the TCC program costs in the annual budget report to Congress. DOD is continuing to support millions of dollars’ worth of corrosion-related research at universities and labs in anticipation of eventually transitioning the results of projects to benefit the military departments. The Corrosion Office has established a TCC goal to produce solutions that will tangibly reduce the effect of corrosion on DOD systems. 
However, DOD’s Corrosion Office has not established a process for transitioning TCC program results to benefit the military departments, which is required by Section 2228. Without the establishment of a process for transitioning results to the military departments, DOD will not be able to further demonstrate the success of the TCC program and the extent to which TCC results are helping to prevent or mitigate corrosion. We are making five recommendations to help ensure that DOD strengthens the management of the TCC program. To enhance DOD’s ability to make consistent and informed decisions in its management of the TCC program in accordance with internal control standards, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics require the Director, Corrosion Policy and Oversight Office, to document the procedures for approving projects within the TCC program for civilian institutions; document the procedures for selecting and approving projects within the TCC program for military academic institutions; document the procedures for selecting and approving military research labs supporting civilian and military institutions in conducting projects within the TCC program; and track and maintain accurate records that include amounts of funds used for the TCC program, and have them readily available for examination to ensure that funding data will be accurately accounted for and reported in future reports, such as the annual budget report to Congress. To better ensure that DOD can demonstrate the success of the TCC program and the extent to which TCC results will help to prevent or mitigate corrosion, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics require the Director, Corrosion Policy and Oversight Office, to establish a process for transitioning demonstrated results of TCC projects to the military departments as required by Section 2228 of Title 10 of the United States Code. 
We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix IV, DOD partially concurred with two of our recommendations and did not concur with three recommendations. DOD partially concurred with our third recommendation that the Director, Corrosion Policy and Oversight Office, document the procedures for selecting and approving military research labs supporting civilian and military institutions in conducting projects within the TCC program. DOD stated that the DOD Corrosion Prevention and Mitigation Strategic Plan adequately documents the procedure for selecting and approving military research labs that support projects conducted by civilian and military institutions within the TCC program, but agreed to add additional details to its documentation. DOD also stated that the strategic plan notes that it will fund projects based on available budget, and funding will be provided to both military research labs and universities. However, we do not agree that this information represents documentation for selecting and approving military research labs, as we have recommended. As we noted in our report, we found that the 2014 strategic plan and TCC Definitions Document provide some information about the requirements and factors for selecting projects, but the documents do not mention the steps taken by decision makers to select and approve the military research labs. Although DOD’s response agreed to add details to its 2014 strategic plan, it did not specify what type of information will be added. Thus, we maintain that DOD could enhance its oversight of corrosion projects by documenting how it selects and then approves military research labs supporting civilian and military institutions. Additionally, documenting these procedures would help ensure that the Corrosion Office’s leaders consistently follow procedures for selecting and approving labs that support the institutions within the TCC program. 
DOD partially concurred with our fourth recommendation that the Director, Corrosion Policy and Oversight Office, track and maintain accurate records that include amounts of funds used for the TCC program, and have them readily available for examination to ensure the funding data will be accurately accounted for and reported in future reports, such as the annual budget report to Congress. DOD stated that GAO was provided a complete and accurate set of financial records during the course of this engagement, but DOD acknowledged, in its comments and during the review, that there was initially some inconsistency in financial reporting. DOD cited the following reasons for inconsistent financial reporting: (1) some projects funded early in the program, under the University Corrosion Collaboration program, would not be considered under the current TCC program; and (2) in 2013, Congress required the Corrosion Office to call out funding for research opportunities separately from activity requirements and project opportunities. Further, in its response, DOD stated that it has now implemented internal controls to identify and document budget categories for each financial transaction executed, which it says will improve timeliness of reporting. In effect, this would meet the intent of our recommendation, if implemented. However, the reasons that DOD cited above, which we also noted in our report, do not negate the need for DOD to track and maintain accurate funding information. We maintain that DOD should track and maintain accurate records that include amounts of funds used for the TCC program, and have them readily available for examination to ensure the funding data will be accurately accounted for and reported in future reports, such as the annual budget report to Congress. 
DOD did not concur with our first and second recommendations that the Director, Corrosion Policy and Oversight Office, document the procedures for approving projects for civilian institutions, and for selecting and approving projects for military academic institutions. In its response, DOD stated that the process is adequately documented in the DOD Corrosion Prevention and Mitigation Strategic Plan and TCC Definitions Document. DOD noted that the plan and definitions document (1) provide five primary and six secondary project-selection requirements, and (2) state that the Corrosion Office will convene and chair the project-selection panel. Additionally, DOD noted that it did not make a distinction in the documents regarding the type of institution (civilian or military) because the requirements are applicable across the TCC program. We agree and noted in our report that the TCC Definitions Document and its 2014 strategic plan include requirements (i.e., primary requirements) and other factors to consider (i.e., secondary requirements) when selecting some TCC projects. Although DOD states these requirements in its definitions document and strategic plan, it has not documented how it applies these requirements to approve projects for civilian institutions, and to select and approve projects for military academic institutions. The selection of projects is partially documented for civilian institutions (i.e., a panel convenes). However, during our discussions with officials, they acknowledged that a panel was not involved in the procedures for selecting and approving military academic institutions. Instead, the Deputy Director makes selection and approval decisions, but these procedures are not documented. We maintain that DOD could enhance its oversight of corrosion projects by documenting how it approves projects for civilian institutions and selects and approves TCC projects for military academic institutions. 
Additionally, documenting these procedures would help ensure that the Corrosion Office’s leaders consistently follow procedures for approving projects for the civilian institutions, and for selecting and approving projects for military academic institutions. DOD did not concur with our fifth recommendation that the Director, Corrosion Policy and Oversight Office, establish a process for transitioning demonstrated results of TCC projects to the military departments as required by Section 2228 of Title 10 of the United States Code. In its response, DOD stated that the process for transitioning demonstrated results of TCC projects to the military departments is appropriately developed and documented in the DOD Corrosion Prevention and Mitigation Strategic Plan and the TCC Definitions Document. DOD also stated that the TCC program is specifically designed to improve the probability of technology transition by ensuring early and close collaboration between the research institutions and the military department laboratories. Additionally, DOD stated that the DOD Corrosion Prevention and Mitigation Strategic Plan describes this collaborative effort. Specifically, a figure within the plan illustrates that as the research matures to the “System Development/Prototype Demonstration” phase, military department personnel resume the primary role in transitioning the technology to their respective departments with the goal being implementation of the technology. We noted in our report that DOD Instruction 5000.67, which implements Section 2228 of Title 10 of the United States Code, requires the Corrosion Office to develop a long-term strategy for corrosion prevention and mitigation that, among other things, provides for a coordinated research and development program that includes the transition of new corrosion-prevention technologies to the military departments. 
However, we did not identify a process for transitioning project results to the military departments in DOD documents, such as its strategic plan, which states that the project results should transition to the military departments. Further, we also found that the figure referenced does not illustrate a process for how the Corrosion Office transitions project results to the military departments but shows, as Corrosion Office officials stated, the collaborative efforts of the parties involved in the TCC program. We also noted in our report that Corrosion Office officials stated that it is difficult to transition results of the TCC projects to the military departments because outputs of TCC research are in the early stages of technology evolution and thus are not mature enough to be used by the military departments. Therefore, Corrosion Office officials acknowledged the need to establish a process to transition TCC results to the military departments. Furthermore, military departments’ Corrosion Executives, who are assigned to be the principal points of contact on corrosion issues, stated that none of the results from TCC projects have transitioned to the military departments. We maintain that the Corrosion Office should establish a process for transitioning demonstrated results of TCC projects to the military departments to allow the office to demonstrate the success of the TCC program and the extent to which the program results will help prevent or mitigate corrosion. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps; the Director of the DOD Office of Corrosion Policy and Oversight; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix V. To determine the extent to which the Department of Defense (DOD) has developed procedures for managing the Technical Corrosion Collaboration (TCC) program, we reviewed DOD’s guidance—the 2014 DOD Corrosion Prevention and Mitigation Strategic Plan. We also reviewed the TCC Definitions Document to identify DOD’s procedures for selecting and approving TCC projects. We compared DOD’s procedures for managing the TCC program with criteria in federal standards for internal control. We obtained information from universities participating in the TCC program regarding projects funded by the Office of Corrosion Policy and Oversight (hereafter referred to as the Corrosion Office) for fiscal years 2008 through 2013. We selected a nongeneralizable sample of projects for further review. Specifically, we chose seven projects conducted by the five universities that received the most funding from the Corrosion Office. We determined that funding data from the Corrosion Office were sufficiently reliable for selecting a nongeneralizable sample of universities and projects for further review. The projects we reviewed were research projects that included examples of university project managers working with students to test corrosion of materials in different environments. We did not review the universities’ and other entities’ management of the corrosion projects. We used a semistructured interview tool to obtain information from project managers at the selected universities to further understand the Corrosion Office’s procedures and their implementation, and to identify successes and challenges, if any. 
We requested and reviewed project-related documents, such as white papers, formal project proposals, purchase requests, cooperative agreements, grants, and contracts to determine how projects were selected, approved, and funded. We also interviewed officials from the Corrosion Office, as well as representatives from each of the military departments, to understand how the procedures were implemented. To determine the extent to which DOD can provide information on the amount of funds it spent on the TCC program, we reviewed financial records such as documents that show funds the Corrosion Office provided to the universities and military research labs, and Military Interdepartmental Purchase Requests. The Military Interdepartmental Purchase Request is a form used by a DOD requesting agency, such as the U.S. Air Force Academy, to place an order for, among other things, services, such as conducting research, with entities, including military academic institutions. We compared the Corrosion Office’s funding data with the purchase requests for fiscal years 2009 through 2013 to identify any differences. We also examined Section 2228 of Title 10 of the United States Code, which requires the Corrosion Office to submit an annual corrosion budget report that includes funds used for the TCC program. We further interviewed Corrosion Office officials to discuss the amount of funds DOD spent on TCC projects. Although we determined that data from the Corrosion Office were sufficiently reliable for selecting a nongeneralizable sample of universities and projects for further review, we found some funding data discrepancies and documentation issues, which we discuss in this report and for which we make recommendations for corrective action. To determine the extent to which DOD has established goals for the TCC program and transitioned demonstrated results from projects to the military departments, we reviewed DOD documents, including any successes to date cited by DOD.
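The kind of year-by-year comparison described above, checking one funding record against another to identify any differences, can be sketched as follows. This is a minimal illustration only; all dollar amounts are hypothetical and do not represent actual Corrosion Office or purchase-request figures.

```python
# Hypothetical reconciliation of two funding records ($ millions),
# mirroring a comparison of program office data against purchase requests.
# Figures below are illustrative assumptions, not actual DOD data.
office_data = {2009: 4.2, 2010: 5.1, 2011: 5.0, 2012: 4.8, 2013: 4.5}
purchase_requests = {2009: 4.2, 2010: 4.9, 2011: 5.0, 2012: 4.8, 2013: 4.1}

# Keep only fiscal years where the two records disagree.
discrepancies = {
    fy: round(office_data[fy] - purchase_requests[fy], 2)
    for fy in office_data
    if round(office_data[fy] - purchase_requests[fy], 2) != 0
}
# discrepancies now maps each mismatched fiscal year to the difference.
```

A nonempty result flags the fiscal years that need documentation review, which is the substance of the discrepancy finding discussed in this report.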
We also examined Section 2228 of Title 10 of the United States Code, which requires the Secretary of Defense to develop and implement a long-term strategy that includes a plan to transition new corrosion-prevention technologies to military departments. We reviewed status reports obtained from Corrosion Office officials. We also attended the 2013 Annual TCC Review to obtain information on the status of projects from TCC participants, including researchers at universities and military research labs. We further interviewed corrosion-program officials to discuss the status of DOD’s efforts to transition project results to military departments. We visited or contacted the following offices during our review. Unless otherwise specified, these organizations are located in or near Washington, D.C.

Office of Corrosion Policy and Oversight
Air Force Corrosion Control and Prevention Executive
U.S. Air Force Academy, Colorado
Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio
Army Corrosion Control and Prevention Executive
Army Construction Engineering Research Laboratory, Champaign, Illinois
Army Research Laboratory, Aberdeen Proving Ground, Maryland
Navy Corrosion Control and Prevention Executive
U.S. Naval Academy
Naval Postgraduate School, Monterey, California
Navy Research Laboratory
University of Virginia, Charlottesville, Virginia
University of Akron, Akron, Ohio
University of Southern Mississippi, Hattiesburg, Mississippi
Ohio State University, Columbus, Ohio
University of Hawaii, Honolulu, Hawaii
SAFE, Inc., Monument, Colorado

We conducted this performance audit from April 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. TCC participants we contacted also included the University of Southern Mississippi, North Dakota State University, and SAFE, Inc.

In addition to the contact named above, Carleen Bennett, Assistant Director; DuEwa Kamara; Gustavo Crosetto; Elizabeth Curda; Mark Dowling; Melissa Emrey-Arras; Dawn Godfrey; Lisa McMillen; Madhav Panwar; Richard Powelson; Terry Richardson; George Scott; Ryan Siegel; John Van Schaik; and Angela Watson made contributions to this report.

Defense Infrastructure: DOD’s 2013 Facilities Corrosion Study Addressed Reporting Elements. GAO-14-337R. Washington, D.C.: March 27, 2014.
Defense Management: DOD Should Enhance Oversight of Equipment-Related Corrosion Projects. GAO-13-661. Washington, D.C.: September 9, 2013.
Defense Infrastructure: DOD Should Improve Reporting and Communication on Its Corrosion Prevention and Control Activities. GAO-13-270. Washington, D.C.: May 31, 2013.
Defense Management: Additional Information Needed to Improve Military Departments’ Corrosion Prevention Strategies. GAO-13-379. Washington, D.C.: May 16, 2013.
Defense Management: The Department of Defense’s Annual Corrosion Budget Report Does Not Include Some Required Information. GAO-12-823R. Washington, D.C.: September 10, 2012.
Defense Management: The Department of Defense’s Fiscal Year 2012 Corrosion Prevention and Control Budget Request. GAO-11-490R. Washington, D.C.: April 13, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010.
Defense Management: DOD Has a Rigorous Process to Select Corrosion Prevention Projects, but Would Benefit from Clearer Guidance and Validation of Returns on Investment. GAO-11-84. Washington, D.C.: December 8, 2010.
Defense Management: Observations on Department of Defense and Military Service Fiscal Year 2011 Requirements for Corrosion Prevention and Control. GAO-10-608R. Washington, D.C.: April 15, 2010.
Defense Management: Observations on the Department of Defense’s Fiscal Year 2011 Budget Request for Corrosion Prevention and Control. GAO-10-607R. Washington, D.C.: April 15, 2010.
Defense Management: Observations on DOD’s Fiscal Year 2010 Budget Request for Corrosion Prevention and Control. GAO-09-732R. Washington, D.C.: June 1, 2009.
Defense Management: Observations on DOD’s Analysis of Options for Improving Corrosion Prevention and Control through Earlier Planning in the Requirements and Acquisition Processes. GAO-09-694R. Washington, D.C.: May 29, 2009.
Defense Management: Observations on DOD’s FY 2009 Budget Request for Corrosion Prevention and Control. GAO-08-663R. Washington, D.C.: April 15, 2008.
Defense Management: High-Level Leadership Commitment and Actions Are Needed to Address Corrosion Issues. GAO-07-618. Washington, D.C.: April 30, 2007.
Defense Management: Additional Measures to Reduce Corrosion of Prepositioned Military Assets Could Achieve Cost Savings. GAO-06-709. Washington, D.C.: June 14, 2006.
Defense Management: Opportunities Exist to Improve Implementation of DOD’s Long-Term Corrosion Strategy. GAO-04-640. Washington, D.C.: June 23, 2004.
Defense Management: Opportunities to Reduce Corrosion Costs and Increase Readiness. GAO-03-753. Washington, D.C.: July 7, 2003.
Defense Infrastructure: Changes in Funding Priorities and Strategic Planning Needed to Improve the Condition of Military Facilities. GAO-03-274. Washington, D.C.: February 19, 2003.
According to DOD, corrosion can significantly affect maintenance cost, service life of equipment, and military readiness by diminishing the operations of critical systems and creating safety hazards. Pursuant to Section 2228 of Title 10 of the U.S. Code, DOD's Corrosion Office is responsible for prevention and mitigation of corrosion of military equipment and infrastructure. To help identify technology to prevent or mitigate corrosion and educate personnel about corrosion prevention and control, DOD funds universities and military labs in the TCC program. GAO was asked to review DOD's TCC program and its goals. In this report, GAO addressed the extent to which DOD (1) has established procedures for managing the TCC program, (2) can provide information on the amount of funds spent on the program to date, and (3) has established goals for the TCC program and transitioned demonstrated results from projects to military departments. GAO reviewed DOD policies and plans and met with DOD corrosion officials and TCC participants. The Department of Defense's (DOD) Office of Corrosion Policy and Oversight (Corrosion Office) has documented some, but not all, key procedures for the Technical Corrosion Collaboration (TCC) program. For civilian institutions, the Corrosion Office documented procedures for selecting projects, but has not done so for approving these projects. In addition, for military academic institutions, the office has not documented procedures for selecting and approving projects. Corrosion Office officials stated that procedures for some aspects of the TCC program are not documented because the program is still evolving and they would like flexibility to enable innovation in determining how to manage the program. However, without fully documenting its decision-making procedures for selecting and approving projects, the Corrosion Office cannot demonstrate how projects were selected and approved for the TCC program. 
Corrosion Office officials provided data on the amount of funds spent on the TCC program for fiscal years 2008 through 2013, but in some cases the data were not readily available and were inconsistent for the same time frame. As a result, it is unclear what the Corrosion Office has spent on the TCC program. Section 2228 requires the Corrosion Office to include a description of the amount of funds used for the TCC program in its annual corrosion budget report to Congress. However, because the Corrosion Office does not track and maintain accurate records, it is unable to determine the amount of funds spent. In the absence of fully documented funding data that are readily available for examination, Corrosion Office officials cannot ensure that they will accurately account for and report TCC costs in the annual budget report to Congress. DOD has set goals for the TCC program, but has not developed a process to transition demonstrated results from projects to military departments. According to the DOD Corrosion Prevention and Mitigation Strategic Plan, TCC program goals are to: (1) develop individuals with education, training, and experience who will form the future core of the technical community within DOD and private industry; and (2) produce solutions that will reduce the effect of corrosion on DOD infrastructure and weapon systems. To track the goal of developing people, the Corrosion Office cited, among other things, the research papers that have been produced as a result of the TCC program. Section 2228 requires that the Corrosion Office coordinate a research and development program that includes a plan for the transition of new corrosion-prevention technologies to the military departments. To track the goal to produce solutions that will reduce corrosion, the Corrosion Office monitors TCC projects' results; however, the office has not established a process to transition demonstrated results of the research projects to the military departments.
Corrosion Office officials stated that it is difficult to transition results because outputs of TCC research are in the early stages of technology evolution and thus are not mature enough to be used by the military departments. Therefore, Corrosion Office officials acknowledge the need to establish a process to transition TCC results to the military departments. Until the Corrosion Office establishes a process to study and determine what, if any, TCC results could transition to the military departments, DOD will not be able to demonstrate the success of the TCC program and the extent to which TCC results are helping to prevent or mitigate corrosion. GAO recommends five actions to improve DOD's management of the TCC program. DOD partially agreed with two actions: to document procedures to select and approve labs, and to track and maintain accurate funding data. DOD did not agree with three recommendations to document procedures to select and approve projects, and to establish a process to transition project results to the military departments. GAO believes that these recommendations remain valid.
The federal government’s civilian workforce faces large losses over the next several years, primarily through retirements. Expected retirements in the SES, which generally represents the most senior and experienced segment of the workforce, are expected to be even higher than the governmentwide rates. In our January 2003 report, we estimated that more than half of the government’s 6,100 career SES members on board as of October 2000 will have left the service by October 2007. Estimates for SES attrition at 24 large agencies showed substantial variations in both the proportion that would be leaving and the effect of those losses on the gender, racial, and ethnic profile. We estimated that most of these agencies would lose at least half of their corps. The key source of replacements for the SES—the GS-15 and GS-14 workforce—also showed significant attrition governmentwide and at the 24 large agencies by fiscal year 2007. While this workforce is generally younger, and those who leave do so for somewhat different reasons than SES members, we estimate that almost half, 47 percent, of the GS-15s on board as of October 2000 will have left federal employment by October 2007 and about a third, 34 percent, of the GS-14s will have left. While past appointment trends may not continue, they do present a window into how the future might look. In developing our estimates of future diversity of the SES corps, we analyzed appointment trends for the federal government and at 24 large agencies to determine the gender, racial, and ethnic representation of the SES corps in 2007 if appointment trends that took place from fiscal years 1995 through 2000 continued. We found that, governmentwide, the only significant change in diversity by 2007 would be an increase in the number of white women, from 19.1 to 23.1 percent, and a corresponding decrease in white men, from 67.1 to 62.1 percent. 
The proportion of the SES represented by minorities would change very little, from 13.8 to 14.5 percent. Table 1 presents the results by gender, racial, and ethnic groups of our simulation of SES attrition and projection of SES appointments using recent trends. The table also shows that the racial and ethnic profile of those current SES members who will remain in the service through the 7-year period will be about the same as it was for all SES members in October 2000. This is because minorities are projected to be leaving at essentially the same rate overall as white members. Thus, any change in minority representation will be the result of new appointments to the SES. However, as the last columns of table 1 show, if recent appointment trends continue, the result of replacing over half of the SES will be a corps whose racial and ethnic profile changes very little. The outlook regarding gender diversity is somewhat different: while the percentage represented by SES white women is estimated to increase by 4 percentage points, the percentage of minority women is estimated to increase by 0.5 percentage point, from 4.5 to 5.0 percent. While white men are estimated to decrease by 5 percentage points, minority men are estimated to increase by 0.2 percentage point, from 9.3 to 9.5 percent. The results of our simulation of SES attrition and our projection of appointments to the SES over the 7-year period showed variation across the 24 Chief Financial Officers (CFO) Act agencies, as illustrated in table 2. However, as with the governmentwide numbers, agencies tend to increase the proportion of women in the SES, particularly white women, and decrease the proportion of white men. The proportion represented by minorities tended to change relatively little. Our estimates of SES attrition at individual agencies by gender, racial, and ethnic groups are likely to be less precise than for our overall SES estimates because of the smaller numbers involved.
Nevertheless, the agency-specific numbers should be indicative of what agency profiles would look like on October 1, 2007, if recent appointment trends continue. The gender, racial, and ethnic profiles of the career SES at the 24 CFO Act agencies varied significantly on October 1, 2000. The representation of women ranged from 13.7 percent to 36.1 percent with half of the agencies having 27 percent or fewer women. For minority representation, rates varied even more and ranged from 3.1 percent to 35.6 percent with half of the agencies having less than 15 percent minorities in the SES. Our projection of what the SES would look like if recent appointment trends continued through October 1, 2007, showed variation, with 12 agencies having increased minority representation and 10 having less. While projected changes for women are often appreciable, with 16 agencies having gains of 4 percentage points or more and no decreases, projected minority representation changes in the SES at most of the CFO Act agencies are small, exceeding a 2 percentage point increase at only 6 agencies. At most agencies, the diversity picture for GS-15s and GS-14s is somewhat better than that for the SES. To ascertain what the gender, racial, and ethnic profile of the candidate pool for SES replacements would look like, we performed the same simulations and projections for GS-15s and GS-14s as we did for the SES. Over 80 percent of career SES appointments of federal employees come from the ranks of GS-15s. Similarly, over 90 percent of those promoted to GS-15 are from the GS-14 workforce. Table 3 presents the results of our analysis for GS-15s, and table 4 presents the results for GS-14s. The results show a somewhat lower proportion of this workforce will leave. Minority representation among those GS-15s who remain by 2007 will be about the same as it was at the beginning of fiscal year 2001, indicating that whites and minorities will leave at about the same rates. 
However, the proportion of minority GS-14s would increase somewhat (by 1.5 percentage points) and the proportion of both grades represented by white and minority women will also increase. Moreover, if recent promotion trends to GS-15 and GS-14 continue, marginal gains by almost all of the racial and ethnic groups would result. Our simulation shows that significant numbers of current minority GS-15s and GS-14s will be employed through fiscal year 2007, and coupled with our projection of promotions, shows there will be substantial numbers of minorities at both the GS-15 (8,957) and GS-14 (15,672) levels, meaning that a sufficient number of minority candidates for appointment to the SES should be available. With respect to gender, the percentage of white women at GS-15 is projected to increase by 2.6 percentage points to 22 percent and at GS-14 by 0.9 percentage point to 23.5 percent. The proportions of minority women will increase by 0.9 percentage point to 6.5 percent for GS-15s and 0.5 percentage point to 8.1 percent for GS-14s, while those for minority men will increase 0.8 percentage point to 10.8 percent for GS-15s and 0.5 percentage point to 10.7 percent for GS-14s. At 60.6 percent, white men will represent 4.2 percentage points less of GS-15s and, at 57.5 percent, 2.1 percentage points less of GS-14s than in fiscal year 2001. Again, our estimates for the GS-15 and GS-14 populations at individual agencies are likely to be less precise than our governmentwide figures because of the smaller numbers involved but should be indicative of what agency profiles would look like in October 2007. During fiscal years 2001 through 2007, the wave of near-term retirements and normal attrition for other reasons presents the federal government with the challenge and opportunity to replace over half of its career SES corps. 
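The arithmetic underlying such attrition-and-appointment projections can be sketched as a simple blend: the share of each group that remains, plus new appointments distributed according to recent trends. This is a minimal illustration, not GAO's actual model; the 50 percent attrition rate and the appointment shares below are assumptions chosen to be consistent with the governmentwide figures cited above.

```python
# Minimal sketch of a workforce-profile projection: blend the retained
# workforce with trend-based new appointments. The attrition rate and
# appointment shares are illustrative assumptions, not GAO model inputs.
start_shares = {"white_men": 67.1, "white_women": 19.1, "minorities": 13.8}
attrition = 0.5  # assumes roughly half leave, at equal rates across groups
appt_shares = {"white_men": 57.1, "white_women": 27.1, "minorities": 15.2}

projected = {
    group: round(start_shares[group] * (1 - attrition)
                 + appt_shares[group] * attrition, 1)
    for group in start_shares
}
# Because attrition is assumed equal across groups, any change in the
# ending profile comes entirely from the appointment shares.
```

Under these assumptions the projected 2007 profile reproduces the pattern described above: white women rise, white men fall by a corresponding amount, and minority representation changes very little.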
The response to this challenge and opportunity will have enormous implications for the government’s ability to transform itself to carry out its current and future responsibilities rather than simply to recreate the existing organizational structures. With respect to the challenge, the federal government and governments around the world are faced with losses that have a direct impact on leadership continuity, institutional knowledge, and expertise. Focusing on succession planning, especially at the senior levels, and developing strategies that will help ensure that the SES corps reflects diversity will be important. We have gained insights about selected succession planning and management practices used by other countries that may be instrumental for U.S. agencies as they adopt succession planning and management strategies. We found that leading organizations engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future organizational capacity. As part of this approach, these organizations identify, develop, and select their people to ensure an ongoing supply of successors who are the right people, with the right skills, at the right time for leadership and other key positions. Succession planning is also tied to the federal government’s opportunity to change the diversity of the SES corps through new appointments. Leading organizations recognize that diversity can be an organizational strength that contributes to achieving results. By incorporating diversity program activities and objectives into agency succession planning, agencies can help ensure that the SES corps is staffed with the best and brightest talent available regardless of gender, race, or ethnicity. As stated earlier, the succession pool of candidates from the GS-15 and GS-14 levels should have significant numbers of minority candidates to fill new appointments to the SES. 
It will be important to identify and nurture talent from this workforce and other levels in agencies early in their careers. Development programs that identify and prepare individuals for increased leadership and managerial responsibilities will be critical in allowing these individuals to successfully compete for admission to the candidate pool for the next level in the organization. Succession planning and management is starting to receive increased attention from the Office of Management and Budget (OMB) and OPM, and we have also seen a positive response from these leadership agencies in developing and implementing programs that promote diversity. In commenting on our January 2003 report, OPM concurred with our findings on SES attrition and diversity and said it welcomed the attention the report brings to a critical opportunity facing the federal workforce and federal hiring officials. The Director said that increasing diversity in the executive ranks continues to be a top priority for OPM and that the agency has been proactive in its efforts to help federal agencies obtain and retain a diverse workforce, particularly in the senior ranks. Both OPM and EEOC said that our analysis was an accurate reflection of the likely future composition of the career SES if recent patterns of selection and attrition continue. EEOC expressed concern about the trends suggested by our analyses to the extent that they may point to the presence of arbitrary barriers that limit qualified members of any group from advancing into the SES. EEOC also stated that in the years ahead, federal agencies will need to continue their vigilance in ensuring a level playing field for all federal workers and should explore proactive strategies, such as succession planning and SES development and mentoring programs for midlevel employees, to ensure a diverse group of highly qualified candidates for SES positions.
Other federal agencies told us that they also have leadership development programs in place or are establishing agencywide human capital planning and executive succession programs, which include diversity as an element. They also told us that holding executives accountable for building a diverse workforce was an element in their performance evaluation for agency executives. Continued leadership from these agencies, coupled with a strong commitment from agency management, will go a long way toward ensuring the diversity of senior leadership. Chairwoman Davis and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you may have. For further information, please contact George H. Stalcup on (202) 512-9490 or at [email protected]. Individuals making key contributions to this testimony include Steven Berke, Anthony Lofaro, Belva Martin, and Walter Reed. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government faces large losses in its Senior Executive Service (SES), primarily through retirement but also because of other normal attrition. This presents the government with substantial challenges to ensuring an able management cadre and also provides opportunities to affect the composition of the SES. In a January 2003 report, GAO-03-34, GAO estimated the number of SES members who would actually leave service through fiscal year 2007 and reviewed the implications for diversity, as defined by gender, race, and ethnicity, of the estimated losses. Specifically, GAO estimated by gender, race, and ethnicity the number of members of the career SES who will leave government service from October 1, 2000, through September 30, 2007, and what the profile of the SES will be if appointment trends do not change. GAO made the same estimates for the pool of GS-15s and GS-14s, from whose ranks the vast majority of replacements for departing SES members come, to ascertain the likely composition of that pool. More than half of the 6,100 career SES members employed on October 1, 2000, will have left service by October 1, 2007. Using recent SES appointment trends, the only significant changes in diversity would be an increase in the number of white women and an essentially equal decrease in white men. The percentage of GS-15s and GS-14s projected to leave would be lower (47 percent and 34 percent, respectively), and we project that the number of minorities still in the GS-15 and GS-14 workforce would provide agencies sufficient opportunity to select minority members for the SES. Estimates showed substantial variation in the proportion of SES minorities leaving among the 24 large agencies and in the effect on those agencies' gender, racial, and ethnic profiles. Minority representation at 10 agencies would decrease and at 12 would increase.
Agencies have an opportunity to affect SES replacement trends by developing succession strategies that help achieve a diverse workforce. Along with constructive agency leadership, these strategies could generate a pool of well-prepared women and minorities to boost the diversity of the SES ranks.
ICE has not developed and implemented a process to identify and analyze program risks since assuming responsibility for SEVP in 2003, making it difficult for ICE to determine the potential security and fraud risks across the more than 10,000 SEVP-certified schools and to identify actions that could help mitigate these risks. SEVP and CTCEU officials expressed concerns about the security and fraud risks posed by schools that do not comply with program requirements. Furthermore, various cases of school fraud have demonstrated vulnerabilities in the management and oversight of SEVP-certified schools. We reported that SEVP faces two primary challenges to identifying and assessing risks posed by schools: (1) it does not evaluate program data on prior and suspected instances of school fraud and noncompliance, and (2) it does not obtain and assess information from CTCEU and ICE field office school investigations and outreach events. Evaluating SEVP information on prior and suspected cases of school noncompliance and fraud. SEVP does not have a process to evaluate prior and suspected cases of school fraud and noncompliance to identify lessons learned from such cases, which could help it better identify and assess program risks. SEVP has maintained a compliance case log since 2005—a list of approximately 172 schools (as of December 2011) that officials have determined to be potentially noncompliant with program requirements. The compliance case log represents those schools that SEVP, on the basis of leads and out-of-cycle reviews, is monitoring for potential noncompliance. According to SEVP officials, it has not used this list to identify and evaluate possible trends in schools’ noncompliance, although this list could provide useful insights to SEVP to assess programwide risks.
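A basic trend scan of such a case log, tallying schools by program type to see which categories dominate, could look like the following sketch. The data are hypothetical; the counts are illustrative, scaled to resemble a 172-school log.

```python
from collections import Counter

# Hypothetical compliance case log: one program type per listed school.
# Counts are illustrative assumptions, scaled to a 172-school log; they
# are not SEVP's actual data.
case_log = (["language"] * 90 + ["flight"] * 30 + ["religious"] * 22
            + ["vocational"] * 18 + ["academic"] * 12)

counts = Counter(case_log)                 # tally schools by program type
flagged_types = {"language", "religious", "flight"}
flagged = sum(n for t, n in counts.items() if t in flagged_types)
share = flagged / len(case_log)            # share of the log in flagged types
```

Even a tally this simple surfaces the kind of pattern discussed in this report, such as a few program types accounting for most of the potentially noncompliant schools.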
Further, SEVP officials said that they have not looked across previous cases of school fraud and school withdrawals to identify lessons learned on program vulnerabilities and opportunities to strengthen internal controls. Our analysis indicates patterns among the noncompliant schools, such as the type of school. For example, of the 172 postsecondary institutions on SEVP's December 2011 compliance case log, about 83 percent (or 142) offer language, religious, or flight studies, with language schools representing the highest proportion. Without an evaluation of prior and suspected cases of school fraud and noncompliance, ICE is not well positioned to identify and apply lessons learned from prior school fraud cases, which could help it identify and mitigate program risks going forward. Obtaining information from CTCEU and ICE field offices' investigations and outreach efforts. Based on our interviews with SEVP's Director and other senior officials, we reported that SEVP had not established a process to obtain lessons learned from CTCEU's criminal investigators. Investigators may have valuable knowledge from working cases of school fraud that could assist in identifying and assessing program risks, including information such as characteristics of schools that commit fraud, how school officials exploited weaknesses in the school certification process, and what actions ICE could take to strengthen internal controls. For example, according to investigators in one ICE field office, CTCEU was hampered in pursuing a criminal investigation because SEVP officials did not obtain a signed attestation statement within the I-17 application from a school official stating that the official agreed to comply with rules and regulations. Another risk area we reported on is designated school officials' access to SEVIS.
In 2011, CTCEU provided SEVP officials with a position paper expressing concerns that designated school officials, who are not required to undergo security background checks, are responsible for maintaining updated information on foreign students in SEVIS. Investigators at three of the eight field offices we interviewed said that SEVP allowed designated school officials to maintain SEVIS access and the ability to modify records in the system while being the subject of an ongoing criminal investigation, despite requests from CTCEU to terminate SEVIS access for these officials. In addition, CTCEU collects data on its outreach efforts with schools through its Campus Sentinel program; however, the SEVP Director stated that his office had not obtained and analyzed reports on the results of these visits. CTCEU initiated Campus Sentinel in 2011, and ICE operates the program across all of its field offices nationwide. From October 1, 2011, through March 6, 2012, CTCEU conducted 314 outreach visits to schools. According to CTCEU investigators, these visits provide an opportunity to identify potential risks, including whether schools have the capacity and resources to support programs for foreign students. Obtaining information on lessons learned from CTCEU investigators could help provide SEVP with additional insights on such issues as characteristics of schools that have committed fraud and the nature of those schools' fraudulent activities. To address these issues, we recommended that ICE develop and implement a process to identify and assess risks in SEVP, including evaluating prior and suspected cases of school noncompliance and fraud to identify potential trends, and obtaining and assessing information from CTCEU and ICE field office investigative and outreach efforts. DHS concurred and stated that ICE will develop and implement such a process later this year.
ICE has not consistently implemented existing internal control procedures for SEVP in four areas: (1) initial verification of evidence submitted in lieu of accreditation, (2) recordkeeping to ensure schools' continued eligibility, (3) ongoing compliance monitoring of school licensing and accreditation status, and (4) certification of schools offering flight training. Regulations require schools to establish that they are legitimate and meet other eligibility criteria for their programs to obtain certification from ICE. In addition, weaknesses in managing and sharing key information with CTCEU impede SEVP's prevention and detection of school fraud. The following summarizes these key findings and the recommendations we made to address these issues. Initial verification of evidence submitted in lieu of accreditation. ICE requires schools to present evidence demonstrating that the school is legitimate and is an established institution of learning or other recognized place of study, among other things. Nonaccredited postsecondary schools, in particular, must provide “in lieu of” letters, which are evidence provided by petitioning schools in lieu of accreditation by a Department of Education-recognized accrediting agency. ICE policy and guidance require that SEVP adjudicators render an approval or denial of schools' petitions based on such evidence and supporting documentation. This includes verifying that schools' claims in the Form I-17, such as accreditation status and “in lieu of” letters, are accurate. However, SEVP adjudicators have not verified all “in lieu of” letters submitted to ICE by the approximately 1,250 nonaccredited postsecondary schools, as required by ICE's policy. Rather, adjudicators decide whether to verify a letter's source and the signer's authority based on any suspicions about the letter's validity.
Investigators at one of the eight ICE field offices we interviewed stated that SEVP officials certified at least one illegitimate school—Tri-Valley University in California—because the program had not verified the evidence provided in the initial petition. In March 2012, CTCEU officials stated that several of their ongoing investigations involve schools that provided fraudulent evidence of accreditation or evidence in lieu of accreditation to ICE. Consistent verification of these letters could help ICE ensure that schools are legitimate and detect potential fraud early in the certification process. We recommended that ICE consistently implement procedures for ensuring schools' eligibility, including consistently verifying “in lieu of” letters. DHS agreed and stated that SEVP personnel have initiated mandatory verification of all “in lieu of” letters. Recordkeeping to ensure continued eligibility of schools. ICE's standard operating procedures for recordkeeping require SEVP officials to maintain records to document ongoing compliance. We reported that ICE had not consistently maintained certain evidence of selected schools' eligibility for the program. According to our review of a stratified random sample of 50 SEVP-certified school case files, 30 files were missing at least one piece of evidence required by the program's policies and procedures. In addition, ICE was unable to produce two schools' case files that we requested as part of our randomly selected sample. Without the schools' information and evidence contained in these case files, including attestation statements and site visit reports, ICE does not have an institutional record to provide reasonable assurance that these schools were initially, and continue to be, legitimate and eligible for certification.
According to ICE officials, the school recertification process would help address issues with incomplete and missing school files because schools are required to resubmit all evidence required by regulation when going through recertification. The Border Security Act required recertification for all SEVP-certified schools by May 2004 and every 2 years thereafter. However, ICE began the first recertification cycle in May 2010 and did not recertify all schools during this 2-year cycle, which ended in May 2012. As of March 31, 2012, ICE reported that it had recertified 1,870 schools (approximately 19 percent of SEVP-certified schools). Given the delays in completing recertification, ICE is not positioned to address gaps in SEVP's case files and cannot provide reasonable assurance that schools that were initially certified to accept foreign students are still compliant with SEVP regulations. Thus, we recommended that ICE establish a process to identify and address all missing school case files, including obtaining required documentation for schools whose case files are missing evidence. DHS concurred and stated that SEVP plans to work with ICE Records Management to develop protocols and actions to strengthen records management. Ongoing compliance monitoring of school licensing and accreditation status. ICE does not have a process to monitor the ongoing eligibility of licensed and accredited non-language schools enrolling foreign students. ICE regulations require all certified schools to maintain state licensing (or exemption) and provide various forms of evidence to ICE supporting schools' legitimacy and eligibility. If a school loses its state license, the school would be unable to operate legally as a school within that state.
However, ICE does not have controls to ensure that SEVP compliance unit officials would be aware of this issue; therefore, a school without a proper business license may remain certified to enroll foreign students, and its designated school officials may continue to access SEVIS. We recommended that ICE develop and implement a process to monitor the state licensing and accreditation status of all SEVP-certified schools. DHS concurred and stated that SEVP personnel are developing procedures to ensure frequent validation of license or accreditation information. Certification of schools offering flight training. ICE's policies and procedures require flight schools to have Federal Aviation Administration (FAA) Part 141 or 142 certification to be eligible for SEVP certification; however, ICE has certified schools offering flight training without such FAA certifications. As the federal agency responsible for regulating the safety of civil aviation in the United States, FAA administers pilot certification (licensing) and conducts safety oversight of pilot training. FAA's regulations for pilot training and certification are found in three parts—Parts 61, 141, and 142. ICE established a policy requiring Part 141 or 142 certification for SEVP eligibility because FAA directly oversees these flight schools and training centers on an ongoing basis. We reported identifying 434 SEVP-certified schools that, as of December 2011, offered flight training to foreign students. However, 167 (38 percent) of these flight training providers do not have FAA Part 141 or 142 certification. SEVP senior officials acknowledged that not all SEVP-certified schools offering flight training have FAA Part 141 or 142 certification, even though the program requires it. ICE indicated that in most cases it may have initially certified flight schools with Part 141 or 142 certification, but the schools allowed their FAA certification to expire, and ICE did not identify or take compliance action against them.
As of May 2012, ICE was taking actions to address noncompliant flight schools, including notifying all SEVP-certified schools that do not have the required FAA certification that they must re-obtain the certification. Moreover, SEVP officials stated that they plan to coordinate with FAA to determine which schools have not met the requirements and will take withdrawal actions against them. While these are positive steps, we reported that SEVP had not yet established target time frames for implementing and completing these planned actions. Because ICE has certified or maintained certification of schools that provide flight training without the required FAA certification and oversight, the program is vulnerable to security and fraud risks. Thus, we recommended that ICE establish target time frames for notifying SEVP-certified flight schools that do not have the required FAA certification that they must re-obtain it. DHS concurred and stated that SEVP is consulting with FAA to develop target time frames. Coordination among SEVP, CTCEU, and ICE field offices. ICE has not consistently followed the standard operating procedures that govern the communication and coordination process among SEVP, CTCEU, and ICE field offices. Specifically, these procedures delineate roles and responsibilities for criminal investigations and establish protocols for SEVP taking administrative actions against schools during and following a criminal investigation. In some instances, SEVP management has not followed CTCEU requests to take or cease administrative actions and has not referred potentially criminal cases to CTCEU in accordance with ICE's procedures.
By strengthening coordination and communication between SEVP and CTCEU, ICE could better ensure that SEVP, CTCEU, and ICE field offices understand which information to share regarding whether to take administrative actions during criminal investigations and that clear criteria exist for referring cases to CTCEU based upon potentially criminal behavior. Thus, we recommended that ICE revise its standard operating procedure to specify which information to share among stakeholders during criminal investigations. DHS concurred and stated that SEVP will work with CTCEU and ICE field personnel to make the necessary revisions. We also recommended that ICE establish criteria for referring cases of a potentially criminal nature from SEVP to CTCEU. ICE agreed and stated that SEVP will work with CTCEU to improve this process. Chairman Schumer, Ranking Member Cornyn, and members of the subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information regarding this testimony, please contact Rebecca Gambler at (202) 512-8777, or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Kathryn Bernet, Assistant Director; Frances Cook; Elizabeth Dunn; Anthony C. Fernandez; David Greyer; and Lara Miklozek. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the findings from our June 2012 report assessing U.S. Immigration and Customs Enforcement’s (ICE) oversight of the Student and Exchange Visitor Program (SEVP). ICE, within the Department of Homeland Security (DHS), is responsible for managing SEVP, including ensuring that foreign students studying in the United States comply with the terms of their admission into the country. ICE also certifies schools as authorized to accept foreign students in academic and vocational programs. As of January 2012, more than 850,000 active foreign students were enrolled at over 10,000 certified schools in the United States. In addition, ICE manages the Student and Exchange Visitor Information System (SEVIS), which assists the agency in tracking and monitoring certified schools, as well as approved students. We reported in April 2011 on the need for close monitoring and oversight of foreign students, and that some schools have attempted to exploit the immigration system by knowingly reporting that foreign students were fulfilling their visa requirements when they were not attending school or attending intermittently. Schools interested in accepting foreign students on F and M visas must petition for SEVP certification by submitting a Form I-17 to ICE. Once this certification is achieved, schools issue Forms I-20 for students, which enable them to apply for nonimmigrant student status. The Border Security Act requires DHS to confirm, every 2 years, SEVP-certified schools’ continued eligibility and compliance with the program’s requirements. During the initial petition and recertification processes, a school must provide ICE with evidence of its legitimacy and its eligibility, such as designated school officials’ attestation statements that both the school and officials intend to comply with program rules and regulations. This testimony summarizes the key findings of our report on ICE’s management of SEVP, which was publicly released last week. 
Like that report, this statement will address ICE's efforts to (1) identify and assess risks in SEVP, and (2) develop and implement procedures to prevent and detect fraud during the initial certification process and once schools begin accepting foreign students. In summary, we reported that ICE does not have a process to identify and assess risks posed by schools in SEVP. Specifically, SEVP (1) does not evaluate program data on prior and suspected instances of school fraud and noncompliance, and (2) does not obtain and assess information from CTCEU and ICE field office school investigations and outreach events. Moreover, weaknesses in ICE's monitoring and oversight of SEVP-certified schools contribute to security and fraud vulnerabilities. For example, ICE has not consistently implemented internal control procedures for SEVP in the initial verification of evidence submitted in lieu of accreditation. In addition, ICE has not consistently followed the standard operating procedures that govern the communication and coordination process among SEVP, CTCEU, and ICE field offices. We recommended that ICE, among other things, identify and assess program risks; consistently implement procedures for ensuring schools' eligibility; and revise its standard operating procedure to specify which information to share among stakeholders during criminal investigations. ICE concurred with all the recommendations we made to address these challenges and has actions planned or under way to address them.
The complexity of the environment in which CMS operates the Medicare program cannot be overstated. CMS manages Medicare, the nation’s largest health insurer, in a challenging and complex environment in which medical providers and beneficiaries form a vast network of stakeholders with differing priorities. The agency is charged with developing regulations and policies that implement the statutory provisions of the Medicare program. The program is operated by CMS with the assistance of approximately 50 carriers and fiscal intermediaries—generally health insurance companies—that annually process about 900 million claims submitted by nearly 1 million providers and private health plans. Medicare is estimated to have spent nearly $240 billion in fiscal year 2001 for services provided to approximately 40 million elderly and disabled beneficiaries. In order to receive reimbursement from Medicare, CMS requires physicians to submit claims that identify the services they have performed by using the agency’s national uniform procedure coding system. Like other Medicare providers, physicians are responsible for billing Medicare correctly for services performed and informing beneficiaries of the level of Medicare coverage at the time of service. To do this they need reliable information on Medicare coverage, claims coding and documentation requirements, claims submission instructions, program changes, and carrier policies. CMS communicates information describing its billing requirements, as well as other relevant regulations and policies, to physicians primarily through its carriers. The carriers communicate with physicians in several ways. They send physicians bulletins periodically to update them on new rules and program changes, provide toll-free lines to call centers so physicians can obtain answers to questions, and maintain Web sites that include postings of, among other things, new Medicare developments and carrier-sponsored training. 
CMS and its carriers also sponsor a variety of provider education activities, such as workshops and on-line training courses, to help familiarize physicians with billing rules and other aspects of the program and to update them on program changes. Physicians have become increasingly vocal about the timeliness and quality of the Medicare information CMS and its carriers provide. For example, last year, in congressional testimony, physicians and their representatives reported frustration because carrier communications are often unclear and do not always provide them with advance notice of program changes. They also charged that, when they seek clarification, carrier personnel often give them incorrect answers to their questions. CMS establishes carrier requirements, including some related to communications, in its annual budget and performance requirements (BPR). For example, the BPRs require carriers to communicate with physicians about local medical review policies (LMRP) and claims submission procedures. CMS is responsible for monitoring the performance of its carriers to ensure that they accurately and efficiently fulfill their requirements and properly implement Medicare policies. Much of CMS’s oversight is accomplished through its periodic evaluations of carrier performance. In addition, the agency also requires carriers to routinely submit evidence of their own self-monitoring activities. Medicare information provided by carriers for physicians is often difficult to interpret and use, out of date, inaccurate, and incomplete. Our analysis of the three main methods that carriers use to communicate information to physicians—printed bulletins, provider assistance call centers, and Web sites—revealed problems with all three types of communication. Carrier bulletins contain important information for physicians but present this information in formats that may be difficult for them to use. 
In addition, critical information, including changing program requirements, may be late in reaching physicians who need to take steps to implement these changes. CMS relies heavily on carrier bulletins—which each carrier is required to issue at least quarterly—to give physicians official notice of their responsibilities and requirements under Medicare law, regulations, and guidelines. Carriers have discretion regarding the bulletins’ format and organization, but they are required to reprint certain CMS-provided information verbatim. For example, carriers receive and reproduce CMS- issued guidance—known as program memorandums (PM)—which convey details about upcoming program changes scheduled to become effective in the next few months. Our review of bulletins issued from March through July 2001 by 10 randomly selected carriers showed that there are several aspects of the bulletins, including their organization and length, which hinder their usefulness. As a result of carriers’ freedom to develop their own bulletins with little direct CMS guidance, there was considerable variation in the organization and format of the bulletins we reviewed. While bulletins issued by 6 of the 10 carriers organized information by subject matter or specialty, the others provided only an alphabetical key word index instead of a table of contents to assist the user. Providing only a key word index makes it difficult to identify information relevant to different physician practices. Some carriers that serve physicians in several states issued a single bulletin for all their states. Some of these bulletins had information for each state contained in a separate insert or section. Other, less helpful, multistate bulletins only noted state differences within individual articles, requiring physicians or their staffs to scan each article to determine whether it was relevant and applicable to their practices. 
In addition, the bulletins were typically over 50 pages in length, and several exceeded 80 pages, making them lengthy documents to search. In several instances, bulletins were late, or provided little advance notice, in communicating HCFA-issued program changes to physicians. To test the timeliness of carrier bulletins in communicating information, we selected four PMs that HCFA issued from February through April 2001 concerning program changes that physicians would need to be aware of in billing for certain services. We then reviewed the bulletins issued from March through July by the 10 carriers we sampled to determine when the four PMs were included in the carriers' bulletins. In 11 instances, PMs were either not communicated through carriers' bulletins until after their scheduled implementation dates, or they did not appear at all in the bulletins we reviewed, as shown in table 1. In 11 additional instances, bulletins communicated the memorandums less than 30 days prior to the implementation date, giving physicians little advance notice to help ensure their compliance with Medicare rules. Overall, 6 of the 10 carriers did not communicate at least one of the four PMs before its scheduled implementation. Customer service representatives (CSR) at carrier call centers we tested rarely provided appropriate answers to questions we posed. Eighty-five percent of the responses we received from CSRs at 5 carrier call centers were inaccurate or incomplete. To assess the accuracy of responses provided by CSRs, we made 61 calls to the provider inquiry lines at call centers and asked three questions from the FAQ pages on carriers' Web sites concerning the appropriate way to bill Medicare in circumstances commonly encountered by physicians. When calling, we identified ourselves as GAO representatives and asked the CSRs to answer our questions as if we were physicians.
CSR responses were recorded verbatim and submitted to a Medicare coding expert at CMS along with the text of the questions and answers used. We used the following questions when making our calls:
1. If a physician provides critical care for 1 hour and 15 minutes, how should the services be reported? Should code 99292 (for an additional 30 minutes) be reported? Should the reduced services modifier be used?
2. What is the proper way to bill for an office visit on the same day as a surgical procedure?
3. Can code 99211 be reported if a nurse in the physician's office provides instruction on self-administering insulin?
Appendix II provides the answers that appear on the Web sites. The results of the test, which were validated by the coding expert, showed that 32 percent of the answers were inaccurate, 53 percent were incomplete, and only 15 percent were complete and accurate. These results are illustrated in table 2. There was little variation among the carriers in the overall accuracy and completeness of their answers. Many physicians we spoke to expressed frustration that CSRs will not always provide information on how to properly code certain claims. Carrier call centers had varying policies about providing physicians with specific coding information. Knowing the appropriate code for a medical service is essential to properly billing Medicare. Although CMS does not have a policy preventing them from doing so, managers at the carrier call centers we visited reported that it is not their policy to provide information to callers on how to code a specific claim. Carriers reported that they are reluctant to provide specific codes because the CSRs lack the medical expertise to appropriately make coding judgments, and they do not have the physician's clinical documentation at the time of the calls to understand the procedure or service in context.
During our test of call center accuracy, we noted that CSRs followed different procedures regarding coding-related inquiries and frequently did not adhere to the carriers’ stated policy. While in 19 cases the CSRs provided neither a code nor referral to a source of coding information, specific codes were given in 24 instances. Specific referral to a bulletin issue or to a regulation number was given in 16 other cases, but for 7 of these cases the information was too vague to enable someone to locate the coding rules. Even when the referrals to information sources were accurate, physicians told us that being directed to other carrier publications does not respond to their need for readily accessible interpretation of Medicare regulations. Our visits to 3 call centers also revealed that there is no uniformity or standardization across carriers in the types of technological resources available to CSRs. For example, 1 call center we visited had an on-line searchable database of LMRPs that facilitated quick retrieval of the appropriate information by the CSRs. Representatives at the 2 other call centers used hard copy bulletins or bulletins posted on their Web sites in a nonsearchable format. CSRs without easily searchable tools told us that they relied heavily on their more experienced colleagues, in the absence of more authoritative sources, for answers. The lack of technological resources at call centers can affect centers’ abilities to monitor the performance of their CSRs. One call center we visited was able to record calls from providers and the computer screens accessed by CSRs to determine whether their responses were accurate and complete, while the other two call centers could only record the telephone calls. 
Two call centers we visited were able to electronically observe each CSR's phone line activity to track the length and origin of calls; however, another call center had no electronic information and could only monitor lines and identify the type of caller by listening to the calls as they took place. Most of the 10 carrier Web sites we reviewed did not contain features that would allow physicians to quickly and directly obtain the information they needed. The Web sites frequently lacked the logical organization, navigation tools, and search functions that increase a site's usability and value. Only 4 of the 10 Web sites we examined contained site maps. Only 6 contained search functions, and in two instances the search functions did not work. Three sites had neither search functions nor site maps, making them difficult to navigate to access information. Furthermore, the Web sites often contained out-of-date information. Nine of the 10 sites included the required schedule of upcoming workshops or seminars, but 5 of these sites were out of date. Only 1 site contained a potentially useful “What's New” page, but the page contained a single document of regulations that went into effect 8 months prior to the date of our Web site review. Although HCFA's 2001 BPRs contain specific requirements for carrier Web sites, most of the sites we reviewed did not meet all of these standards. Only 2 of the 10 sites complied with all 11 of the BPRs' content requirements, as shown in table 3. In addition, other requirements, such as a federally mandated privacy statement outlining the type of information the site collects on visitors and a section containing FAQs, were not consistently met. Five Web sites contained the privacy statement, and 5 contained a link to FAQs. Although CMS has set standards for carrier Web sites, each carrier independently develops its own Web site.
This has resulted in duplication of effort and variations in the usability and complexity of the information provided. CMS is ultimately responsible for managing and overseeing carrier performance to ensure that carriers supply physicians with consistent and accurate information. However, the agency's standards and technical assistance to guide carriers in physician communications activities are not sufficient to produce consistent, high-quality products and effective communication strategies. The lack of standard approaches to communication by carriers makes consistent oversight more challenging for CMS. Neither of the two principal oversight tools used by CMS—contractor performance evaluations (CPE) and carrier self-monitoring and reporting—provides enough information to reveal problems carriers may have in providing quality communications. CMS has established few standards to guide carriers' primary communication activities, including publishing bulletins, providing telephone assistance to callers, and establishing and maintaining Web sites. The BPRs only require carriers to issue bulletins at least quarterly. There is no substantive guidance regarding content or readability. Carrier call centers are instructed to perform “quality monitoring” no more than 10 times a quarter for each CSR, but CMS's definition of what constitutes accuracy and completeness in call center responses is neither clear nor specific. For example, CMS defines accuracy as not being inaccurate—as opposed to providing necessary and complete information to allow physicians to correctly bill the program. In the case of Web-based communication, the BPRs contain few requirements about the clarity or timeliness of information. Instead, they generally focus on legal issues—such as measures to protect copyrighted material—that, while important, do not enhance physicians' understanding of, or ability to correctly implement, Medicare policy.
CMS officials acknowledged that physician communications have received less support and oversight than other aspects of carrier operations and attributed this, in part, to a lack of resources. CMS’s regional offices, which are most directly responsible for carrier oversight, provide assistance to carriers through business function experts (BFE) whose principal method of oversight is participation on CPE teams. A CMS official told us that there are not enough BFEs to provide direct technical assistance to all carriers in all areas of communication. Furthermore, a lack of budgetary resources limits BFEs’ travel to carrier sites. One regional BFE we interviewed handles four functional areas, including provider education and provider phone inquiries, for 6 separate Medicare carriers. The BFE interviewed noted that little hands-on technical assistance is provided. Despite the fact that bulletins are a key means of physician communication, and Web sites are growing in importance, some regions have not been allocated any BFEs for these functions. Moreover, no region has a full-time equivalent staff member dedicated to these critical forms of communication, leaving carriers to solve problems independently. CMS’s efforts to assist carriers in sharing successful approaches are also limited. The agency’s annual conference for call center managers provides a forum for sharing information and strategies. However, similar opportunities do not exist for carrier staff members working with bulletins and Web sites. CMS collects and posts on-line a carrier Best Practices Handbook relating to provider communications and education, but as of January 2002, the information had not been updated in a year. Further, the handbook contains little detail about how to implement the strategies for improving communications.
The lack of specific standards, sufficient technical assistance, and best practice guidance creates an environment in which, as one CMS business function expert said, each carrier must develop its own communication strategies, resulting in duplication of carriers’ efforts and variations in the quality of their service to physicians. At the time of our review, CMS had no near-term efforts under way to develop more standardized carrier communications to physicians. HCFA has not traditionally undertaken comprehensive evaluations of the quality or usefulness of carriers’ bulletins or Web sites. For 21 years, the agency has conducted on-site evaluations to directly monitor carriers’ performance in a variety of areas. However, the agency is just beginning to focus CPEs on provider communications. In 2001, it expanded the focus of its call center CPEs to include call centers that serve providers, including physicians. Previously, these reviews had been limited to beneficiary call centers. We observed one CPE team as it evaluated the operations of a provider call center. This team focused mainly on performance standards that address procedures, such as how long a caller is kept on hold or whether the CSR had given an appropriate greeting, rather than on whether the information provided was complete and accurate. In order to evaluate the carrier’s performance in monitoring its CSRs, the CPE auditor listened to 10 prerecorded calls that had been evaluated by the carrier at an earlier date. However, the CPE auditor did not access the claims information to evaluate whether the information being provided to the callers was correct. While assessing procedural performance is important, helping ensure that callers receive the correct information is essential. In addition to CMS’s evaluation of call centers through CPEs, the agency requires carriers to evaluate the performance of their call center CSRs.
Carriers must monitor up to 10 calls for each CSR each quarter—amounting to about 90 of the more than 30,000 provider inquiries received by a given carrier each quarter. Carriers we visited agreed with one call center industry expert that this level of monitoring is far short of what is necessary to thoroughly evaluate quality. Accuracy and completeness are a relatively small component (40 percent of the total score) in the overall performance evaluation of a CSR. The remaining components focus on CSR attitude and helpfulness. CMS’s oversight beyond the CPE process and carrier self-monitoring consists principally of CMS staff reviewing carriers’ self-reported data, with little direct feedback from the regional BFEs. Carriers submit monthly reports summarizing certain call center data, such as how long callers were kept on hold and the number of calls abandoned. They also submit quarterly activity reports on communications. The reports include items such as the number of provider training sessions offered and the questions most frequently asked by providers. Feedback from CMS is geared toward correcting specific problems, such as lengthy caller waiting times, rather than identifying ways to improve performance on a broader scale. Through the feedback it has received from the physician community, CMS is aware of a need to improve Medicare communications. It is working to issue new Medicare rules and regulations on a more consistent and predictable schedule, expand information resources available to physicians, and obtain more physician feedback relating to Medicare policies and communications. However, most of these efforts are in early stages of planning or implementation; therefore, we could not assess their ultimate impact. In June 2001, CMS announced plans to reduce the burden on providers of frequent and irregularly occurring Medicare program changes by issuing and communicating regulations on a more consistent schedule.
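The monitoring figures above imply a very thin sample and a scoring weight that favors soft skills. A minimal back-of-the-envelope sketch follows; only the 90-call, 30,000-inquiry, and 40 percent figures come from the text, while the scoring function itself is a hypothetical illustration.

```python
# Back-of-the-envelope arithmetic for the monitoring figures cited above.
# The 90-call, 30,000-inquiry, and 40 percent figures come from the text;
# the scoring function is a hypothetical illustration, not CMS's actual formula.

monitored_calls = 90          # roughly 10 monitored calls per CSR per quarter
quarterly_inquiries = 30_000  # provider inquiries a carrier receives per quarter
sample_fraction = monitored_calls / quarterly_inquiries  # 0.003, i.e., 0.3 percent

ACCURACY_WEIGHT = 0.40  # accuracy/completeness share of a CSR's total score

def csr_score(accuracy, attitude_helpfulness):
    """Hypothetical weighted CSR evaluation on a 0-to-1 scale:
    40 percent accuracy/completeness, 60 percent attitude/helpfulness."""
    return ACCURACY_WEIGHT * accuracy + (1 - ACCURACY_WEIGHT) * attitude_helpfulness
```

Under this weighting, a CSR who answered every monitored question incorrectly but was unfailingly courteous would still earn 60 percent of the maximum score, and only about 0.3 percent of inquiries would ever be sampled.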
CMS plans to institute a new, Web-based quarterly compendium of program changes, including all regulations that it expects to publish in the coming quarter, as well as references or electronic links to regulations published in the previous quarter. By doing so, CMS hopes to make physicians aware of program changes and provide them with sufficient lead time to implement them. The compendium was originally to be introduced in October 2001, but according to a CMS official, as of January 2002 the compendium’s format was still being developed. CMS is attempting to improve the consistency of information that carriers provide to physicians and has both short-term and long-term projects under way. Currently, the agency is establishing a new on-line training program for carrier call center CSRs, and over the past year it has provided in-person training to carrier staffs. Installation of satellite dish technology at Medicare carriers was recently completed so that CMS could broadcast training to carrier staffs. In addition to these shorter-term initiatives, agency officials told us that they are developing some longer-term projects to enhance carriers’ communications. For example, they are developing a standard template for carrier bulletins. In 2001, CMS also awarded a contract for the design of a standardized computer system that would be used by CSRs at all carrier call centers to improve CSRs’ access to information as they respond to telephone inquiries. A CMS official told us this will be tested first at a durable medical equipment contractor this spring, but had no estimate of when it would be installed at carrier sites. CMS is also addressing information that it provides directly to the physician community. In November 2001, CMS mailed the physician edition of Medicare and You 2002 to physicians participating in Medicare, which was the first issuance of a physician-oriented version of its annual Medicare and You beneficiary handbook.
This physician information includes a summary of recent Medicare program changes, an overview of physician concerns that CMS is currently addressing, and guidance on contacting carriers or CMS for claims submission and billing information. The agency is also focusing on improving its national Web site. Plans include installation of a new navigational system to make information on CMS’s Web site more accessible and consolidation of all information relevant to providers in a single Web-based source—a project that will take several years to complete. In recent years, CMS has also increased efforts to obtain feedback from physicians regarding communications and training. In response to the physician community’s concerns, the agency established the Physicians’ Regulatory Issues Team (PRIT) in 1998. PRIT has collaborated with the physician community to identify Medicare requirements, procedures, and communications that cause the most problems for physicians, and is working to address the most significant of them. In July 2001, the administrator of CMS announced the formation of “open door” policy committees, including one focused on physicians, consisting of top CMS staff members and provider group representatives that would meet regularly to discuss regulations that are troubling to providers. Finally, in the fall of 2001, CMS sent out two surveys to obtain the views of physicians and other providers on their Medicare education needs and their experiences with CMS’s program integrity efforts. The scope and complexity of the Medicare program make complete, accurate, and timely communication of program information vital to physicians who need up-to-date knowledge of Medicare requirements in order to serve their patients and bill correctly for the services they provide. Although CMS has delegated this responsibility to carriers, our work demonstrates that physicians cannot rely on carrier bulletins, call centers, or Web sites to meet their information needs. 
In addition, CMS’s lack of standard requirements for carrier communications results in carriers developing their own approaches to convey information, leading to duplication of effort and varying degrees of timeliness, accuracy, and completeness. CMS has initiated a number of efforts, although some are just getting under way, to improve the way its carriers communicate with physicians and, in doing so, has acknowledged that improvements are needed. However, these efforts focus on the individual methods of communication and do not consider more fundamental matters, such as whether the current, and almost complete, reliance on carriers to communicate with physicians is in the best interest of the program. We believe it is important for CMS to initiate a more comprehensive and standardized approach to physician communications through coordination, leadership, and management of CMS’s carrier-based communications. This approach should focus on communicating timely, accurate, and complete information in formats that physicians find easy to use. It should include meaningful performance standards for carrier communications, enhanced requirements for carrier self-monitoring, effective monitoring and feedback by CMS’s staff, and more substantive periodic CPE reviews of carrier communications. In order to improve its assistance to, and oversight of, its Medicare carriers’ physician communications efforts, we recommend that the administrator of CMS adopt a standardized approach that would promote the quality, consistency, and timeliness of Medicare communications while also strengthening CMS’s management and oversight. Specifically, we recommend that CMS take the following actions: Assume responsibility for the publication of a national bulletin for physicians, in addition to issuing a quarterly compendium of regulations. Carriers would be responsible for preparing supplements to CMS’s national bulletin regarding local medical policy issues.
Establish new performance standards for carrier call centers that emphasize providing complete and accurate answers to physician inquiries. Carriers’ monitoring of their call center operations should also be expanded to assure that these performance standards and policies are followed. Set standards and provide technical assistance to carriers to promote consistency, accuracy, and user-friendliness of all carrier Web sites, which should be limited to local Medicare information and should be designed to link to CMS’s Web site for national program information. Strengthen its contractor evaluation and management process by relying on expert teams to conduct more substantive CPE reviews on all physician communications activities. In written comments on a draft of this report, CMS agreed that improvement is needed in its communications with physicians participating in Medicare and recognized that providing them with the best possible information is integral to successfully serving Medicare beneficiaries. CMS described its current efforts to develop a comprehensive customer service plan and elaborated on several efforts to improve communications that the agency currently has under way. For example, CMS pointed out that it is enhancing its services to physicians by establishing a new program to disseminate information at professional conferences and by instituting its “Open Door Forums” where physicians can meet with CMS officials and share their views on Medicare program rules. We have reprinted CMS’s letter in appendix IV. CMS also provided us with technical comments, which we incorporated as appropriate. In addressing our first recommendation to assume responsibility for the publication of a national bulletin for physicians, CMS pointed out that it is taking steps to “nationalize” information contained in these bulletins. It said it is already including articles of national interest regarding Medicare issues in carrier bulletins.
CMS also said it is planning a National Provider Bulletin Project to study the practicality of establishing a national source for the information included in these bulletins as well as potential changes to the publication and distribution process. In response to our second recommendation that new performance standards be established for carrier call centers, CMS described a variety of initiatives it has under way to help enhance the quality of these communications. CMS agreed that providing timely, correct, and consistent answers to physicians’ questions is imperative. The agency stated that it has instituted a new program of performance standards that features more effective oversight and evaluation and that includes new quality call monitoring procedures. Although this new plan appears to contain key components of an effective communication strategy, CMS’s description of this effort does not contain sufficient detail for us to fully assess its usefulness. We believe such a plan ultimately needs to incorporate specific performance measures for which the carriers could be held accountable. Although CMS indicated it plans to devise ways of objectively measuring carrier performance, it said that it does not yet have such measures in place. In response to our third recommendation to set standards and provide carriers with additional technical assistance to enhance carrier Web sites, CMS outlined the requirements that carriers must meet. CMS indicated it was satisfied with carriers’ performance in this area, pointing out that an examination of Web sites was part of this year’s annual CPE reviews. According to CMS, none of the carriers have been deficient in their compliance with CMS requirements, and CPE reviewers found most of the Web sites to be user-friendly. Although these CPE reviews may not have detected deficiencies at carrier Web sites, as we have noted most of the Web sites we reviewed did not comply with some of CMS’s requirements. 
CMS has agreed to reexamine its Web site monitoring efforts. Regarding our fourth recommendation, CMS agreed that utilizing expert teams to conduct CPE reviews would be the best means of ensuring substantive evaluations. However, CMS said that it believed that implementing our recommendation would require the agency to establish a team of dedicated review staff, which would not be feasible given the agency’s available resources. Although CMS said it could not implement our recommendation at this time, it indicated that it will nonetheless try to continue building the expertise of its review staff. According to CMS, many of the staff members that performed these reviews last year will perform them this year as well. In addition, CMS said it will continue to provide relevant training to these staff members. Officials of the American Medical Association and the Medical Group Management Association also reviewed a draft of this report. In oral comments, officials from both organizations said they generally agreed with our findings and recommendations and offered technical comments, which we incorporated as appropriate. We are sending copies of this report to the secretary of Health and Human Services, the administrator of CMS, and other interested parties. We will make copies available to others upon request. If you or your staffs have any questions about this report, please call me at (312) 220-7600. An additional GAO contact and other staff members who made major contributions to this report are listed in appendix V. To develop an understanding of physicians’ concerns about the Medicare communications they receive, we obtained the cooperation of seven physician practices. These practices were of varying sizes, were located in different geographic regions, and were served by three different Medicare carriers. Each practice agreed to send us the Medicare-related information that it received during the 3-month period from February 1 through April 30, 2001.
Besides participating in this communications collection effort, representatives from these practices shared their views on the quality of the information they received during this period. We also discussed these matters with representatives from the following 10 professional associations: American Academy of Family Physicians, American Academy of Professional Coders, American College of Emergency Physicians, American College of Physicians-American Society of Internal Medicine, American Health Information Management Association, American Medical Association, Health Care Billing Managers Association, Health Care Compliance Association, Medical Group Management Association, and Professional Association of Health Care Office Managers. Because the majority of Medicare communications to physicians are issued by carriers on behalf of CMS, we focused on the three main methods these carriers use to communicate with physicians—carrier bulletins, carrier provider assistance call centers, and carrier Web sites. We did not review communications from every Medicare carrier. Our findings are limited to the carriers we reviewed and cannot be projected to other carriers. The scope of our work did not permit us to examine provider education efforts such as seminars and training sessions except in the form of documents submitted by physician practices and conversations with agency and carrier officials. In addition to assessing the quality of carrier communications, we also reviewed the agency’s oversight of physician communications and its plans to improve these communications. Finally, we interviewed officials from other agencies within HHS to discuss their communications with physicians participating in the Medicare program. To evaluate the quality of carrier bulletins, we randomly selected 10 carriers and reviewed the bulletins they issued from March through July 2001. 
We reviewed the bulletins from the standpoint of whether their format and organization facilitated a reader’s ability to locate information. To test the bulletins’ timeliness and completeness in communicating required information, we identified approximately 40 PMs—issued by HCFA from February 1 through April 30, 2001—that addressed program changes relevant to physicians. We then selected four of these memorandums and reviewed the bulletins issued by the sampled carriers to determine when, or whether, the memorandums were published. To evaluate the accuracy and completeness of responses given on carrier-operated provider inquiry lines, we made calls to five call centers operated by three carriers for a total of 59 usable responses (two nonresponses were eliminated from the sample). We selected call centers operated by the three carriers that serve the geographic areas where the seven physician practices participating in our data collection were located. The three test questions were selected from FAQs posted on carrier Web sites to represent common physician billing concerns. The questions and answers are listed in appendix II. Our methodology was to ask each of the three questions, four times, at each of the five call centers, for a total of 12 test calls to each center and 20 test calls for each question. Calls were placed at different times of day and different days of the week from early May through June 2001. HCFA officials were aware of our test. Call center managers were also informed that their CSRs would be receiving test calls from us. When calling, we identified ourselves as GAO representatives and asked the CSR to answer our question as if we were physicians. Prompts were given only if the CSR probed for more specific information or gave conditional responses that depended upon different circumstances.
In those situations, we asked the CSR to provide the correct answer for each set of circumstances (such as whether the office visit was related or unrelated to the surgical procedure). Following the response, we asked the CSR if there was any additional information he or she would like to provide. CSR responses were recorded verbatim and submitted to a Medicare coding expert at CMS along with the text of the questions and answers used. The coding expert verified our results using the following criteria.

Correct and complete: The answer provided enough information to correctly bill, including (1) a correct explanation of how to apply the billing policy and (2) correct billing codes or a referral to specific documentation that provides coding information.

Partial or incomplete: The answer referred to material, but (1) did not provide assistance in interpretation or warn about special circumstances that would affect billing, or (2) provided interpretation but no directions to specific documentation, or (3) was correct but not complete.

Incorrect: The answer contained fully or partially incorrect information, such that a physician might incorrectly bill or not file a claim for a billable service.

Nonresponse: The CSR refused to answer the question. (Nonresponses occurred because CSRs would not answer questions for callers who were not physicians.)

To test the usefulness of carriers’ electronic communications with physicians, we randomly selected 10 carrier Web sites for review. We investigated Web sites to determine whether they were in compliance with the content requirements for electronic media as detailed in HCFA’s 2001 budget and performance requirements and in the contractor Web site standards and guidelines posted on the agency Web site. To identify best practices for effective, user-friendly Web sites, we interviewed four individuals familiar with Web site development, including the Web master for HHS and two private Web designers.
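The four-way scoring scheme described above can be sketched as a simple tally. The category names follow the report’s criteria, but the tallying code and the sample labels are invented for illustration and are not the study’s actual results.

```python
from collections import Counter

# Category names follow the report's criteria; the sample data are invented.
CATEGORIES = {"correct_and_complete", "partial_or_incomplete", "incorrect", "nonresponse"}

def score_results(labels):
    """Tally expert-assigned labels and report each category's share of
    usable responses, excluding nonresponses as in the methodology above."""
    counts = Counter(labels)
    unknown = set(counts) - CATEGORIES
    if unknown:
        raise ValueError(f"unrecognized categories: {unknown}")
    usable = sum(n for cat, n in counts.items() if cat != "nonresponse")
    shares = {cat: n / usable for cat, n in counts.items() if cat != "nonresponse"}
    return usable, shares

# Invented sample of 22 calls: 20 usable responses plus 2 nonresponses.
labels = (["correct_and_complete"] * 9 + ["partial_or_incomplete"] * 6
          + ["incorrect"] * 5 + ["nonresponse"] * 2)
usable, shares = score_results(labels)
```

Separating nonresponses before computing shares mirrors the methodology’s decision to drop them from the usable sample rather than count them against accuracy.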
We used information from these sources to evaluate the 10 carrier Web sites for their accessibility, privacy, format, content, ease of navigation, organization, contact information, appearance, and use of graphics. We identified HCFA requirements for carrier bulletins, call center operations, and carrier Web sites, and discussed the agency’s oversight and monitoring of carriers’ communications with both headquarters and regional office officials. We researched call center standards used in private industry through conversations with an industry expert and the manager of a large call center, and visited three carrier call centers to discuss technology, standards, best practices, and support from HCFA. We also observed carrier call centers’ monitoring of calls for quality at the three call centers we visited. In addition, we observed a contractor performance evaluation—the agency’s independent review of “at-risk” contractor activities—conducted at one of the carrier call centers in our review. Throughout this review, as we met with HCFA and carrier officials and representatives of the physician practices participating in our communications collection, we solicited their views on problems with the Medicare communications process and potential best practices. Agency officials also identified the agency’s current and planned efforts to improve its process for communicating with Medicare providers. In addition, we discussed related issues in our conversations with representatives from professional associations. HHS is the principal federal department responsible for protecting the health of Americans and providing other essential health services. Although the focus of our work was Medicare communications that originated with CMS, we were also asked to identify the quantity and type of communications that physicians receive from other HHS agencies.
Based on our review of background information and discussions with HHS officials, we identified nine HHS offices and agencies, other than CMS, as potential sources of information or instructions for practicing physicians. These include the Office of the Secretary, Office of the Inspector General, Agency for Healthcare Research and Quality, Centers for Disease Control and Prevention, Food and Drug Administration, Health Resources and Services Administration, Indian Health Service, National Institutes of Health, and Substance Abuse and Mental Health Services Administration. We contacted officials in these offices and agencies and reviewed information available through their Web sites to determine whether they issued instructions or requirements that affected practicing physicians. Compared to CMS, the other HHS agencies we contacted issue relatively few requirements for practicing physicians and rarely communicate instructions or information directly to the physicians, as does CMS through its Medicare carriers. Generally, officials we contacted indicated that these agencies rely primarily on posting information to their Web sites to communicate with the medical community and the general public. Many of the HHS agencies also offer subject-specific e-mail notification of new Web postings to physicians and others who register to receive this service. Some agencies have newsletters or publications to which physicians and others can subscribe or they provide specific information upon request. The questions and answers we used to test the accuracy of carrier call center responses to physician inquiries are shown in table 4. To identify the quantity and sources of Medicare information received by physicians, we enlisted the assistance of seven physician practices to collect communications that related to their practices and were received during the 3-month period from February 1 through April 30, 2001. 
A 3-month period was selected so that practices would receive at least one carrier bulletin. HCFA representatives and participating practices reported that the period selected was typical in relation to the release of Medicare regulations and information. The participating physicians represented both urban and rural practices and were located in four states served by three carriers and three HCFA regional offices. They also varied in size and specialty and included a 600-physician multispecialty group; a 450-physician teaching hospital-based group; a 43-physician network of small internal medicine/family practice groups; a 10-physician internal medicine, obstetrics/gynecology, and pediatrics practice; a 4-physician multispecialty group; a 4-physician internal medicine group; and a 4-physician ophthalmology group. The practices collected and submitted full copies or excerpts of practice-related communications received by mail, fax, or e-mail, or downloaded from the Internet, regardless of the source, during this period. We asked the practices to omit certain items from their collection due to lack of relevance or privacy issues. Material the practices were asked to include and exclude from their submissions to us is shown in table 5. We collected 947 documents from the physician practices. Based on the table of contents or section titles of these documents, we categorized them as (1) directly related to Medicare, (2) unrelated to Medicare but involving some other requirement relevant to the physician practice, and (3) information relevant to the physician practice that did not include any requirement the practice needed to act upon. We also classified communications by their source, including HCFA or its carriers, other HHS agencies, state and local government agencies, insurance companies and managed care plans, and all other sources, such as professional journals, newsletters, or other information sent to physicians.
We could not independently verify that the physician practices submitted all relevant communications they received, nor could we reliably distinguish between communications that the practices requested and those that were unsolicited. Most of the documents submitted by the practices had some Medicare content, indicative of the pervasiveness of the Medicare program. Frequently appearing topics included Medicare fraud and abuse, Medicare coding issues, contractor audits, and the Medicare appeals process. The information that was submitted by the seven physician practices shows that while Medicare-related information accounts for much of this material, a relatively small portion of the documents came from HCFA, its carriers, or other governmental sources. About half of the documents we received from the physician practices contained mostly Medicare information. We found that a relatively small portion of all documents—about 10 percent—was sent by HCFA or its carriers. Material from other HHS agencies accounted for less than 3 percent of all documents the physician practices collected. The majority of the information came from other organizations, such as consulting firms and medical specialty or professional societies. Table 6 shows the source and subject of all documents collected and submitted by the participating physician practices. The number of Medicare-related documents and number of pages submitted by each practice was generally related to the size of the practice. This was true of documents both from HCFA and from the private sector. Three of the smaller practices sent us fewer than 5 documents that they received from HCFA. In one case, the 3 documents submitted by a small practice totaled 217 pages. The largest practice, a multispecialty clinic, sent 57 HCFA documents totaling 704 pages. A small rural practice sent 3 private-source documents totaling 12 pages, while the multispecialty clinic sent 148 documents totaling 1,174 pages.
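The proportions above imply roughly the following document counts. Only the 947-document total and the percentages come from the text; the computed counts are approximations, since "about 10 percent" and "less than 3 percent" are themselves rounded figures.

```python
# Approximate counts implied by the percentages cited above.
# Only the 947 total and the percentages come from the text.

total_docs = 947          # documents collected from the seven practices
hcfa_share = 0.10         # "about 10 percent" from HCFA or its carriers
other_hhs_ceiling = 0.03  # "less than 3 percent" from other HHS agencies

hcfa_docs = round(total_docs * hcfa_share)              # roughly 95 documents
other_hhs_max = round(total_docs * other_hhs_ceiling)   # fewer than about 28
```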
The number of documents received by a practice may be influenced by the practice’s breadth of specialties and participation in professional organizations. Donald Kittler, Victoria Smith, Christi Turner, and Margaret Weber made key contributions to this report.
Unlike other federal programs that make expenditures under the direct control of the government, Medicare constitutes a promise to pay for covered medical services provided to its beneficiaries by about one million providers. Given this open-ended entitlement, it is essential that appropriate and effective rules and policies be specified so that only necessary services are provided and reimbursed. Congress and the Centers for Medicare and Medicaid Services (CMS) have promulgated an extensive body of statutes, regulations, policies, and procedures on what shall be paid for and under what circumstances. Information that carriers give to physicians is often difficult to use, out of date, inaccurate, and incomplete. Medicare bulletins that carriers use to communicate with physicians are often poorly organized and contain dense legal language. Similarly, other means of communicating with physicians, such as toll-free provider assistance lines and websites, have problems with accuracy and completeness. Although all carriers issue bulletins, operate call centers, and maintain websites, each carrier develops its own communications policies and strategies. This approach results in a duplication of effort as well as variations in the quality of carrier communications. CMS provides little technical assistance to help carriers develop effective communication strategies. Neither CMS carrier oversight nor self-monitoring by the carriers is comprehensive enough to provide sufficiently detailed information that could either pinpoint specific communication problems or identify poorly performing carriers. CMS is working to improve its physician communications by consolidating new instructions and regulations and issuing them on a more predictable schedule to lessen the burden of frequent policy changes that physicians cannot anticipate. CMS is also enhancing its education programs for both physicians and carrier staffs and expanding its efforts to obtain physician feedback. 
Finally, CMS is improving its national website and intends to develop a single web-based source of information for physicians.
Historically, DOD has used its readiness assessment system to assess the ability of units and joint forces to fight and meet the demands of the national security strategy. DOD’s readiness assessment and reporting system is designed to assess and report on military readiness at three levels—(1) the unit level; (2) the joint force level; and (3) the aggregate, or strategic, level. Using information from its readiness assessment system, DOD prepares and sends legislatively mandated Quarterly Readiness Reports to Congress. DRRS is DOD’s new readiness reporting system that is intended to capture information from the previous system, as well as information about organizational capabilities to perform a wider variety of missions and mission essential tasks. DRRS is also intended to capture readiness information from defense agencies and installations, which were not required to report under the previous system. Some DRRS features are currently fielded and being used to varying degrees by the user community. Laws, directives, and guidance, including a DOD directive, Chairman of the Joint Chiefs of Staff Instruction (CJCSI), Secretary of Defense and USD (P&R) memorandums, and service regulations and messages, show that readiness information and data are needed to support a wide range of decision makers. These users of readiness data include Congress, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, the combatant commanders, the Secretaries of the military departments, and the Chief of the National Guard Bureau. The directives and guidance also list roles and responsibilities for collecting and reporting various types of readiness data. For example, CJCSI 3401.02A assigns the service chiefs responsibility for ensuring required global status of resources and training system (GSORTS) reports are submitted. 
GSORTS is DOD’s legacy, resource-based readiness reporting system that provides a broad assessment of unit statuses based on units’ abilities to execute the missions for which they were organized or designed as well as the current missions for which they may be employed. The information in the required GSORTS reports includes units’ abilities to execute the missions for which they were organized or designed, as well as the status of their training, personnel, and equipment. In addition, DOD directive 7730.65, which established DRRS as DOD’s new readiness reporting system, assigns the Secretaries of the military departments and the commanders of the combatant commands responsibilities for developing mission essential tasks for all of their assigned missions. Prior to 1999, we identified challenges with DOD’s existing readiness reporting system, GSORTS, and in 1999, Congress directed the Secretary of Defense to establish a comprehensive readiness reporting system. The legislation requires the system to measure in an objective, accurate, and timely manner the capability of the armed forces to carry out (1) the National Security Strategy prescribed by the President, (2) the defense planning guidance provided by the Secretary of Defense, and (3) the National Military Strategy prescribed by the Chairman of the Joint Chiefs of Staff. To address the requirements established by Congress, the Office of the Deputy Under Secretary of Defense (Readiness) began in 2001 to build consensus among DOD’s senior readiness leaders for an improved readiness assessment system. For example, the Deputy’s office distributed a list of key characteristics of the improved readiness assessment system to the leaders in advance of scheduled meetings. The system’s key desired characteristics included allowing near-real-time access to readiness data and trends, enabling rapid, low-cost development using classified Internet technology, and reducing the reporting burdens on people. 
Since then various directives and memorandums have been issued regarding DRRS responsibilities, requirements, and related issues. For example: On June 3, 2002, the Deputy Secretary of Defense established DOD’s new readiness reporting system, as directed by Congress, by signing DOD Directive 7730.65. According to this directive, DRRS is intended to build upon DOD’s existing processes and readiness assessment tools to establish a capabilities-based, near-real-time readiness reporting system. The DRRS directive assigned USD (P&R) responsibilities for developing, fielding, maintaining, and funding ESORTS (the tool to collect capability, resource, and training information) and overseeing DRRS to ensure accuracy, completeness, and timeliness of its information and data, its responsiveness, and its effective and efficient use of modern practices and technologies. In addition, the USD (P&R) is responsible for ensuring that ESORTS information, where appropriate, is integrated into DOD’s planning systems and processes. The directive also states that until ESORTS becomes fully operational, the Chairman of the Joint Chiefs of Staff shall maintain the GSORTS database. On June 25, 2004, the Secretary of Defense issued a memorandum, which directed USD (P&R) to develop DRRS to support data requirements identified by the Chairman of the Joint Chiefs of Staff, the combatant commanders, the Secretaries of the Military Departments, and the Chief, National Guard Bureau to include availability, readiness, deployment, and redeployment data. On November 2, 2004, USD (P&R) issued a DRRS interim implementation guidance memorandum. In this memorandum, the undersecretary noted that he had established a DIO to provide reporting assistance for units. The memorandum also stated that combatant commanders would begin reporting readiness by mission essential tasks by November 30, 2004. 
The memorandum also directed the services to develop detailed implementing guidance for reporting and assessing mission essential task readiness in ESORTS within their respective services, and set a goal for the services to implement the mission essential task reporting process by September 30, 2005. To meet these mission essential task reporting requirements, USD (P&R) directed commanders to rate their organizational capabilities as (1) yes or “Y”, (2) qualified yes or “Q”, or (3) no or “N.” A “Y” indicates that an organization can accomplish the rated tasks or missions to prescribed standards and conditions in a specified environment. It should reflect demonstrated performance in training or operations. A “Q” indicates that performance has not been demonstrated, and, although data may not readily support a “Y,” the commander believes the organization can accomplish the rated task or mission to standard under most conditions. An “N” indicates that an organization cannot accomplish the rated task or mission to prescribed standards in the specified environment at the time of the assessment. The November 2004 memorandum also stated that the expected transition from GSORTS to ESORTS was scheduled to begin in fiscal year 2005. According to the 2004 memorandum, the ESORTS module of DRRS would provide, among other things, visibility of the latest GSORTS information reported by units, and detailed resource information from authoritative data sources with the capability to aggregate or separate the data. This memorandum signaled a change in program direction. Although the 2002 DOD directive stated that DRRS is intended to build upon DOD’s existing processes and readiness assessment tools, the 2004 memorandum indicated that DRRS was to replace GSORTS, as the ESORTS module of DRRS captured both capabilities and resource data. Since its establishment, the DIO has operated within the Office of USD (P&R) and has relied on multiple contractors. 
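The three-tier Y/Q/N assessment scale described above can be sketched as a simple classification routine. This is an illustrative sketch only; the function and parameter names are assumptions for this example, not part of DRRS or ESORTS.

```python
# Illustrative sketch of the Y/Q/N mission essential task ratings described
# in the November 2004 memorandum; names here are hypothetical, not DRRS code.

def rate_task(can_accomplish_to_standard: bool, demonstrated: bool) -> str:
    """Return a DRRS-style task rating.

    "Y" - the organization can accomplish the rated task to prescribed
          standards, reflecting demonstrated performance in training
          or operations.
    "Q" - the commander believes the task can be accomplished to
          standard under most conditions, but performance has not
          been demonstrated.
    "N" - the organization cannot accomplish the rated task to
          prescribed standards at the time of the assessment.
    """
    if can_accomplish_to_standard and demonstrated:
        return "Y"
    if can_accomplish_to_standard:
        return "Q"
    return "N"
```

For example, a commander who judges a task achievable but has no demonstrated performance data would report `rate_task(True, False)`, yielding a qualified yes.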
To provide governance of DRRS, and enhance communication between the development community, represented by the DIO and contractors, and the user community, which includes the Joint Staff, military services, and combatant commands, USD (P&R) established various bodies with representatives from the user community, including military services, combatant commands, and the defense agencies. Representatives from the Office of USD (P&R) and the Joint Staff currently serve as cochairs of the various bodies. DRRS Battle Staffs comprise colonels, Navy captains, and similar-graded civilians. They track DRRS development and identify issues with the system. At the one- star level, the DRRS General and Flag Officer Steering Committee discusses issues raised by the Battle Staff. In December 2007, USD (P&R) created a committee at the three-star level, referred to as the DRRS Executive Committee. Its charter, finalized about a year later in January 2009, calls for the committee to review and approve proposals and plans to establish policy, processes, and system requirements for DRRS, including approving software development milestones required to reach objectives. To ensure that leadership is provided for the direction, oversight, and execution of DOD’s business transformation efforts, including business systems modernization efforts such as DRRS, DOD relies on several entities. 
These entities include the Defense Business Systems Management Committee, which is chaired by the Deputy Secretary of Defense and serves as the department’s highest-ranking governance body and the approval authority for business systems modernization activities; the Investment Review Boards, which are chartered by the Principal Staff Assistants—senior leaders from various offices within DOD—and serve as the review and certification bodies for business system investments in their respective areas of responsibility; and the Business Transformation Agency, which is responsible for supporting the Investment Review Boards and for leading and coordinating business transformation efforts across the department. Among other things, the Business Transformation Agency supports the Office of the Under Secretary of Defense, Acquisition, Technology and Logistics in conducting system acquisition risk assessments. Our research and evaluations of information technology programs, including business systems modernization efforts within DOD, have shown that delivering promised system capabilities and benefits on time and within budget largely depends on the extent to which key program management disciplines are employed by an adequately staffed program management office. Among other things, these disciplines include a number of practices associated with effectively developing and managing system requirements, adequately testing system capabilities, and reliably scheduling the work to be performed. They also include proactively managing the program office’s human capital needs, and promoting program office accountability through executive-level program oversight. DOD acquisition policies and guidance, along with other relevant guidance, recognize the importance of these management and oversight disciplines. 
As we have previously reported, not employing these and other program management disciplines increases the risk that system acquisitions will not perform as intended and require expensive and time-consuming rework. In 2003, we reported that, according to USD (P&R) officials, DRRS was a large endeavor, and that development would be challenging and require buy-in from many users. We also reported that the program was only a concept without detailed plans to guide its development and implementation. Based on the status of the program at that time and DOD’s past record on readiness reporting initiatives, we recommended that the Secretary of Defense direct the Office of USD (P&R) to develop an implementation plan that identified performance goals that are objective, quantifiable, and measurable; performance indicators to measure outcomes; an evaluation plan to compare program results with established goals; and milestones to guide DRRS development to the planned 2007 full capability date. DOD did not agree with our recommendation, stating that it had established milestones, cost estimates, functional responsibilities, expected outcomes, and detailed plans for specific information technology requirements and progress indicators. In evaluating the DOD comments, we noted that DOD had established only two milestones—initial capability in 2004 and full capability in 2007—and did not have a road map explaining the steps needed to achieve full capability by 2007. DOD has not effectively managed and overseen the acquisition and deployment of DRRS in accordance with a number of key program management disciplines that are recognized in DOD acquisition policies and guidance, along with other relevant guidance, and are fundamental to delivering a system that performs as intended on time and within budget. In particular, DRRS requirements have not been effectively developed and managed, and DRRS testing has not been adequately performed and managed. 
Further, DRRS has not been guided by a reliable schedule of the work needed to be performed and the key activities and events that need to occur. These program management weaknesses can be attributed in part to long-standing limitations in program office staffing and oversight. As a result, the program has not lived up to the requirements set for it by Congress, and the department has not received value from the program that is commensurate with the time and money invested—about 7 years and $96.5 million. Each of these weaknesses is summarized below and discussed in detail in appendix II. According to DOD and other relevant guidance, effective requirements development and management includes, among other things, (1) effectively eliciting user needs early and continuously in the system life-cycle process, (2) establishing a stable baseline set of requirements and placing the baseline under configuration management, (3) ensuring that system requirements are traceable backward to higher level business or operational requirements (e.g., concept of operations) and forward to system design documents (e.g., software requirements specification) and test plans, and (4) controlling changes to baseline requirements. However, none of these conditions have been met on DRRS. Specifically, key users have only recently become fully engaged in developing requirements, and requirements have been experiencing considerable change and are not yet stable. Further, different levels of requirements and related test cases have not been aligned with one another, and changes to requirements have not been effectively controlled. As a result, efforts to develop and deliver initial DRRS capabilities have taken longer than envisioned and these capabilities have not lived up to user expectations. These failures increase the risk of future DRRS capabilities not meeting expectations and increase the likelihood that expensive and time-consuming system rework will be necessary. 
Until recently, key users were not fully or effectively engaged in DRRS requirements development and management. One of the leading practices associated with effective requirements development is engaging system users early and continuously in the process of defining requirements. However, DIO officials and representatives from the military services and the Joint Staff agree that until recently, key users were not effectively engaged in DRRS requirements development and management, although they disagree as to why user involvement has suffered. Regardless, DRRS Executive Committee direction has improved the situation. Specifically, in January 2008, the committee directed the Joint Staff to conduct an analysis of DRRS capabilities, referred to as the “capabilities gap analysis,” which involved the entire readiness community and resulted in 530 additional user requirements. In our view, this analysis is a positive step in addressing long-standing limited involvement by key DRRS users in defining requirements that has contributed to significant delays in the program, as discussed later in the report. As of April 2009, DRRS requirements continued to be in a state of flux. Establishing an authoritative set of baseline requirements prior to system design and development provides a stable basis for designing, developing, and delivering a system that meets its users’ operational needs. However, the fact that these 530 user requirements have recently been identified means that the suite of requirements documentation associated with the system, such as the detailed system requirements, will need to change and thus is not stable. To illustrate, these 530 requirements have not been fully evaluated by the DIO and the DRRS governance boards and according to program officials, have not yet been approved, and thus their impact on the program is not clear. Compounding this instability in the DRRS requirements is the fact that additional changes are envisioned. 
According to program officials, the changes resulting from the gap analysis and reflected in the latest version of the DRRS Concept of Operations, which was approved by the DRRS Executive Committee in January 2009, have yet to be reflected in other requirements documents, such as the detailed system requirements. Although defining and developing requirements is inherently an iterative process, having a baseline set of requirements that are stable is a prerequisite to effective and efficient system design and development. Without them, the DIO has not been able to deliver a system that meets user needs on time, and it is unlikely that future development and deployment efforts will produce better results. During our review, DIO officials could not demonstrate that requirements and related system design and testing artifacts are properly aligned. One of the leading practices associated with developing and managing requirements is maintaining bidirectional traceability from high-level operational requirements through detailed lower-level requirements and design documents to test cases. We attempted on three separate occasions to verify the traceability of system requirements backwards to higher-level requirements and forward to lower-level software specifications and test cases, and each time we found that traceability did not exist. DIO and contractor officials attributed the absence of adequate requirements traceability to the ongoing instability in requirements and efforts to update program documentation. Without traceable requirements, the DIO does not have a sufficient basis for knowing that the scope of the design, development, and testing efforts will produce a system solution on time and on budget and that will meet users’ operational needs and perform as intended. As a result, the risk is significant that expensive and time-consuming system rework will be required. 
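The bidirectional traceability check described above can be automated against a requirements database. The sketch below is a minimal, hypothetical illustration: the requirement, operational-requirement, and test-case identifiers (SR-*, OR-*, TC-*) are invented for this example and do not come from DRRS documentation.

```python
# Minimal sketch of a bidirectional traceability check: every system
# requirement should trace backward to at least one higher-level
# operational requirement and forward to at least one test case.
# All identifiers and data below are hypothetical.

def untraced(sys_reqs, backward_links, forward_links):
    """Return system requirements missing a backward or forward trace."""
    return sorted(
        r for r in sys_reqs
        if not backward_links.get(r) or not forward_links.get(r)
    )

sys_reqs = ["SR-1", "SR-2", "SR-3"]
backward = {"SR-1": ["OR-1"], "SR-2": ["OR-1"], "SR-3": []}   # SR-3 has no parent
forward = {"SR-1": ["TC-1"], "SR-2": [], "SR-3": ["TC-2"]}    # SR-2 has no test case
print(untraced(sys_reqs, backward, forward))  # ['SR-2', 'SR-3']
```

Any requirement the check flags would need either a parent operational requirement or a covering test case before design and testing scope can be trusted.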
Since the inception of the program in 2002, DRRS has been developed and managed without a formally documented and approved process for managing changes to system requirements. Adopting a disciplined process for reviewing and accepting changes to an approved baseline set of requirements in light of the estimated costs, benefits, and risk of each proposed change is a recognized best practice. However, requirements management and change-control plans developed in 2006 by the DRRS software development contractor, according to DIO officials, were not adequate. To address this, the Joint Staff developed what it referred to as a conceptual requirements change-control process in February 2008, as a basis for the DIO to develop more detailed plans that could be implemented. In January 2009, the DIO drafted more detailed requirements management and configuration management plans, the latter of which the DIO updated in March 2009. However, the plans have yet to be approved and implemented. Until the DIO effectively controls requirements changes, it increases the risk of needed DRRS capabilities taking longer and costing more to deliver than necessary. According to DOD and other relevant guidance, system testing should be progressive, meaning that it should consist of a series of test events that first focus on the performance of individual system components, then on the performance of integrated system components, followed by system-level tests that focus on whether the system (or major system increments) are acceptable, interoperable with related systems, and operationally suitable to users. For this series of related test events to be conducted effectively, each test event needs to be executed in accordance with well-defined test plans, the results of each test event need to be captured and used to ensure that problems discovered are disclosed and corrected, and all test events need to be governed by a well-defined test management structure. 
However, the DIO cannot demonstrate that it has adequately tested any of the DRRS increments, referred to as system releases and subreleases, even though it has already acquired and partially deployed a subset of these increments. Moreover, the DIO has yet to establish the test management structures and controls needed to effectively execute DRRS testing going forward. More specifically, the test events for already acquired, as well as currently deployed and operating, DRRS releases and subreleases were not based on well-defined plans. For example, the test plan did not include a schedule of activities to be performed or defined roles and responsibilities for performing them. Also, the test plan did not consistently include test entrance and exit criteria, a test defect management process, and metrics for measuring progress. Further, test events have not been fully executed in accordance with plans, or executed in a verifiable manner, or both. For example, although increments of DRRS functionality have been put into production, the DIO has not performed system integration testing, system acceptance testing, or operational testing on any DRRS release or subrelease. Moreover, the results of all executed test events have not been captured and used to ensure that problems discovered were disclosed to decision makers, and ultimately corrected. For example, the DIO has not captured the test results for at least 20 out of 63 DRRS subreleases. Test results that were captured did not include key elements, such as entrance/exit criteria status and unresolved defects and applicable resolution plans. The DIO has also not established an effective test management structure to include, for example, a clear assignment of test management roles and responsibilities, or a reliable schedule of planned test events. 
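The missing test-result elements the paragraph cites (entrance/exit criteria status, unresolved defects, and resolution plans) can be checked mechanically when results are captured as structured records. The following is a minimal sketch under that assumption; the field names are invented for illustration, not taken from DRRS test documentation.

```python
# Sketch of a completeness check on a captured test-result record.
# Required element names are hypothetical, chosen to mirror the key
# elements the report cites as missing from DRRS test results.

REQUIRED_ELEMENTS = (
    "entrance_exit_status",   # whether entrance/exit criteria were met
    "unresolved_defects",     # open defects found during the event
    "resolution_plans",       # plans for resolving those defects
)

def missing_elements(result: dict) -> list:
    """Return which required elements are absent from a test record."""
    return [k for k in REQUIRED_ELEMENTS if k not in result]

captured = {"release": "subrelease-42", "entrance_exit_status": "exit not met"}
print(missing_elements(captured))  # ['unresolved_defects', 'resolution_plans']
```

A record flagged by such a check could not support a decision to put the tested increment into production, which is the gap the report describes.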
Compounding this absence of test management structures and controls is the fact that the DIO has yet to define how the development and testing to date of a series of system increments (system releases and subreleases) relate to the planned development and testing of the 10 system modules established in January 2009. (See table 1 for a list and description of these modules.) Collectively, this means that it is unlikely that already developed and deployed DRRS increments can perform as intended and meet user operational needs. Equally doubtful are the chances that the DIO can adequately ensure that yet-to-be developed DRRS increments will meet expectations. The success of any program depends in part on having a reliable schedule that defines, among other things, when work activities will occur, how long they will take, and how they are related to one another. From its inception in 2002 until November 2008, the DIO did not have an integrated master schedule, and thus has long been allowed to proceed without a basis for executing the program and measuring its progress. In fact, the only milestone that we could identify for the program prior to November 2008 was the date that DRRS was to achieve full operational capability, which was originally estimated to occur in fiscal year 2007, but later slipped to fiscal year 2008 and then fiscal year 2011, and is now fiscal year 2014, a 7-year delay. Moreover, the DRRS integrated master schedule that was first developed in November 2008, and was updated in January 2009 and again in April 2009 to address limitations that we identified and shared with the program office, is still not reliable. Specifically, our research has identified nine practices associated with developing and maintaining a reliable schedule. 
These practices are (1) capturing all key activities, (2) sequencing all key activities, (3) assigning resources to all key activities, (4) integrating all key activities horizontally and vertically, (5) establishing the duration of all key activities, (6) establishing the critical path for all key activities, (7) identifying float between key activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations to determine the dates for all key activities. The program’s latest integrated master schedule does not address three of the practices and only partially addresses the remaining six. For example, the schedule does not establish a critical path for all key activities, nor does it include a schedule risk analysis, and it is not being updated using logic and durations to determine the dates for all key activities. In addition, the schedule introduces considerable concurrency across key activities and events for several modules, which introduces increased risk. These limitations in the program’s latest integrated master schedule, coupled with the program’s 7-year slippage to date and continued requirements instability, make it likely that DRRS will incur further delays. The DIO does not currently have adequate staff to fulfill its system acquisition and deployment responsibilities, and it has not managed its staffing needs in an effective manner. Effective human capital management should include an assessment of the core competencies and essential knowledge, skills, and abilities needed to perform key program management functions, an inventory of the program’s existing workforce capabilities, an analysis of the gap between the assessed needs and the existing capabilities, and plans for filling identified gaps. 
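One of the nine practices, establishing a critical path, can be computed directly from a schedule once activities, durations, and dependencies are captured. The sketch below illustrates the technique on a hypothetical schedule; the task names and durations are assumptions for this example only.

```python
# Sketch of critical-path identification (practice 6 above): the longest
# duration-weighted chain of dependent activities determines the earliest
# possible finish. Tasks and durations below are hypothetical.

def critical_path(tasks):
    """tasks: {name: (duration, [predecessors])}. Returns (length, path)."""
    memo = {}

    def finish(t):
        # Latest-finishing predecessor plus this task's own duration.
        if t not in memo:
            dur, preds = tasks[t]
            best = max(((finish(p)[0], p) for p in preds), default=(0, None))
            memo[t] = (best[0] + dur, best[1])
        return memo[t]

    end = max(tasks, key=lambda t: finish(t)[0])
    length = finish(end)[0]
    path, t = [], end
    while t is not None:            # walk back along critical predecessors
        path.append(t)
        t = finish(t)[1]
    return length, path[::-1]

tasks = {
    "requirements": (4, []),
    "design":       (6, ["requirements"]),
    "build":        (8, ["design"]),
    "test_plan":    (3, ["requirements"]),
    "testing":      (5, ["build", "test_plan"]),
}
print(critical_path(tasks))  # (23, ['requirements', 'design', 'build', 'testing'])
```

Activities off the critical path (here, `test_plan`) carry float; a schedule that does not identify the critical path cannot distinguish slips that delay the program from slips that do not.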
The DIO performs a number of fundamental DRRS program management functions, such as acquisition planning, performance management, requirements development and management, test management, contractor tracking and oversight, quality management, and configuration management. To effectively perform such functions, program offices, such as the DIO, need to have not only well-defined policies and procedures and support tools for each of these functions, but also sufficient human capital to implement the processes and use the tools throughout the program’s life cycle. However, the DIO is staffed with only a single full-time government employee—the DIO Director. All other key program office functions are staffed by either contractor staff or staff temporarily detailed, on an as-needed basis, from other DOD organizations. In addition, key positions, such as performance manager and test manager, have either not been established or are vacant. According to DIO and contractor officials, they recognize that additional program management staffing is needed but stated that requests for additional staff had not been approved by USD (P&R) due to competing demands for staffing. Further, they stated that the requests were not based on an assessment of the program’s human capital needs and the gap between these needs and its onboard workforce capabilities. Until the DIO adopts a strategic, proactive approach to managing its human capital needs, it is unlikely that it will have an adequate basis for obtaining the people it needs to effectively and efficiently manage DRRS. A key principle for acquiring and deploying system investments is to establish a senior-level governance body to oversee the investment and hold program management accountable for meeting cost, schedule, and performance commitments. 
Moreover, for investments that are organization-wide in scope and introduce new ways of doing business, like DRRS, the membership of this oversight body should represent all stakeholders and have sufficient organizational seniority to commit their respective organizations to any decisions reached. For significant system investments, the department’s acquisition process provides for such executive governance bodies. For example, Major Automated Information Systems, which are investments over certain dollar thresholds or that are designated as special interest because of, among other things, their mission importance, are reviewed at major milestones by a designated milestone decision authority. These authorities are supported by a senior advisory group, known as the Information Technology Acquisition Board, which comprises senior officials from the Joint Staff, the military departments, and staff offices within the Office of the Secretary of Defense. In addition, all business system investments in DOD that involve more than $1 million in obligations are subject to review and approval by a hierarchy of DOD investment review boards that comprise senior DOD leaders, including the Defense Business Systems Management Committee, which is chaired by the Deputy Secretary of Defense. Through these executive oversight bodies and their associated processes, programs are to be, among other things, governed according to corporate needs and priorities, and program offices are to be held accountable for meeting cost, schedule, and performance expectations. Until April 2009, DRRS was not subject to any of DOD’s established mechanisms and processes for overseeing information technology systems. As previously discussed, USD (P&R) established the DRRS Battle Staff, which is a group of midlevel military officers and civilians from DRRS stakeholder organizations, and it established a higher-ranked General and Flag Officer Steering Committee, consisting of stakeholder representatives. 
However, neither of these entities had specific oversight responsibilities or decision-making authority for DRRS. Moreover, neither was responsible for holding the program office accountable for results. According to meeting minutes and knowledgeable officials, these entities met on an irregular basis over the last several years, with as much as a 1-year gap in meeting time for one of them, to discuss DRRS status and related issues. In December 2007, USD (P&R) recognized the need for a more senior-level and formal governance body, and established the DRRS Executive Committee. Since January 2008, this committee, which consists of top-level representatives from stakeholder organizations, has met at least seven times. In January 2009, the DRRS Executive Committee’s charter was approved by the Deputy Under Secretary of Defense (Readiness) and the three-star Director of the Joint Staff. According to the charter, the committee is to review and approve proposals and plans to establish policy, processes, and system requirements for DRRS, including approving software development milestones required to reach objectives. Consistent with its charter, the committee has thus far made various program-related decisions, including approving a DRRS concept of operations to better inform requirements development, and directing the Joint Staff to conduct an analysis to identify any gaps between DRRS requirements and user needs. However, the committee has not addressed the full range of acquisition management weaknesses previously discussed in this report, and it has not taken steps to ensure that the program office is accountable for well-defined program baseline requirements. More recently, the DOD Human Resources Management Investment Review Board and the Defense Business Systems Management Committee reviewed DRRS and certified and approved, respectively, the program to invest $24.625 million in fiscal years 2009 and 2010. 
These entities comprise senior leadership from across the department, including the Deputy Secretary of Defense as the Defense Business Systems Management Committee Chair, military service secretaries, the defense agency heads, principal staff assistants, and representatives from the Joint Staff and combatant commands. However, neither the Investment Review Board’s certification nor the Defense Business Systems Management Committee’s approval was based on complete and accurate information from USD (P&R). Specifically, the certification package submitted to both oversight bodies by the USD (P&R) precertification authority (Office of Readiness Programming and Assessment) stated that DRRS was on track for meeting its cost, schedule, and performance measures and highlighted no program risks despite the weaknesses discussed in this report. According to the chairwoman of the Investment Review Board, the board does not have a process or the resources to validate the information received from the programs that it reviews. Moreover, the chairwoman stated that program officials did not make the board aware of the results of our review that we shared with the DIO prior to either the Investment Review Board or Defense Business Systems Management Committee reviews. Since we briefed the chairwoman, the Investment Review Board has requested that the DIO provide it with additional information documenting DRRS compliance with applicable DOD regulations and statutes. According to USD (P&R) and DIO officials, DRRS was not subject to department executive-level oversight for almost 6 years because, among other things, they did not consider DRRS to be a complex information technology system. Furthermore, because of the nature of the authority provided to the USD (P&R) in the DRRS charter, they did not believe it was necessary to apply the same type of oversight to DRRS as other information systems within DOD. 
This absence of effective oversight has contributed to a void in program accountability and limited prospects for program success. DOD has implemented DRRS features that allow users to report certain mission capabilities that were not reported under the legacy system, but these features are not fully consistent with legislative requirements and DOD guidance; and DOD has not yet implemented other envisioned features of the system. While some users are consistently reporting enhanced capability information, reporting from other users has been inconsistent. In addition, DRRS has not fully addressed the challenges with metrics that were identified prior to 1999 when Congress required DOD to establish a new readiness reporting system. Users have also noted that DRRS lacks some of the current and historical data and connectivity with DOD’s planning systems necessary to manage and deploy forces. The geographic combatant commands are capturing enhanced capability data in DRRS, and DOD’s quarterly readiness reports to Congress currently contain this information, as well as information that is drawn from DOD’s legacy readiness reporting system, GSORTS. However, the military services have not consistently used the enhanced capability reporting features of DRRS. Because DRRS does not yet fully interface with legacy systems to allow single reporting of readiness data, the Army and Navy developed additional system interfaces and are reporting in DRRS. Until May 2009, the Marine Corps directed its units to report only in the legacy system to avoid the burden of dual reporting. The Air Force chose not to develop an interface and instructed its units to report in both DRRS and the legacy system. DRRS and GSORTS both contain capabilities information and resource (numbers of personnel, equipment availability, and equipment condition) and training data. However, DRRS currently provides more capabilities data than GSORTS. 
When Congress directed DOD to establish a new readiness reporting system, GSORTS was already providing information on unit capabilities to perform the missions for which units were organized or designed. More recently, some of the military services began reporting into GSORTS limited information on unit capabilities to perform missions other than those for which they were organized or designed. However, DRRS is designed to capture capabilities on a wider variety of missions and mission essential tasks. For example, organizations can report into DRRS their capabilities to conduct missions associated with major war plans and operations such as Operation Iraqi Freedom, as well as their capabilities to perform the missions for which they were organized or designed. DRRS also captures capability information from a wider range of organizations than GSORTS. Although the primary (monthly) focus is on operational units and commands, DRRS collects and displays readiness information from defense agencies and installations. Geographic combatant commands—such as U.S. Central Command, U.S. Pacific Command, and U.S. Northern Command—are currently reporting their commands' capabilities to execute most of their operations and major war plans in DRRS. DOD reports this enhanced capability information from the geographic combatant commands in its Quarterly Readiness Report to Congress. The geographic combatant commands are also using DRRS to report their capabilities to perform headquarters-level, joint mission essential tasks, and some of these commands utilize DRRS as their primary readiness reporting tool. For example, U.S. Northern Command uses DRRS to assess risk and analyze capability gaps, and U.S. Pacific Command identifies critical shortfalls by evaluating mission essential task assessments that are captured in DRRS. 
While DRRS currently has the necessary software to collect and display these enhanced capability data from organizations at all levels throughout DOD, a variety of technical and other factors have hindered service reporting of capability data. As a result, the services have either developed their own systems to report required readiness data or have delayed issuing implementing guidance that would require their units to report standardized mission essential task data in DRRS. By 2005, DRRS was able to collect and display mission essential task information from any organizations that had access to a Secure Internet Protocol Router Network (SIPRNet) workstation. In August 2005, USD (P&R) issued a memorandum that directed the services to ensure that all of their GSORTS-reporting units were reporting mission essential task capabilities in DRRS by September 30, 2005. The memorandum stated that, for tactical units, mission essential tasks were to be drawn from the Service Universal Task List and standardized across like-type entities, such as tank battalions, destroyers, or F-16 squadrons. However, two factors that have hindered compliance with the memorandum's direction to report mission essential task capabilities in DRRS are discussed below. While DRRS has been able to collect and display mission essential task data since 2005, some Army and Navy users did not have the means to directly access DRRS and update mission essential task assessments. For example, some ships lacked the hardware necessary to transmit their mission essential task data directly into DRRS while at sea. In addition, many National Guard units lacked, and still lack, direct access to the SIPRNet workstations that are necessary to input mission essential task data directly into DRRS. However, the Army and the Navy have developed systems, designated DRRS-A and DRRS-N, respectively, that interface with DRRS and thus allow all of their units to report mission essential task data. 
After Army and Navy units report mission essential task data in their respective DRRS-A and DRRS-N service systems, the services transmit these data to DRRS. As a result, Army and Navy officials told us that they are currently complying with the requirement to ensure that all their GSORTS-reporting units report mission essential task data in DRRS. Unlike the Army and the Navy, the Marine Corps and the Air Force have not developed their own systems to allow their units to use a single tool to enter readiness data to meet Office of the Secretary of Defense, Chairman of the Joint Chiefs of Staff, and service readiness reporting requirements. While the DIO has developed the software for users to enter mission essential task data into DRRS, the DIO has been unsuccessful in attempts to develop a tool that would allow Air Force and Marine Corps users to enter readiness data to meet all of their readiness reporting requirements through DRRS. As a result, rather than reducing the burden on reporting units, DRRS has actually increased the burden on Air Force and Marine Corps units because they are now required to report readiness information in both DRRS and GSORTS. On September 29, 2005, USD (P&R) issued a memorandum stating that DRRS is the single readiness reporting system for the entire Department of Defense and that legacy systems, such as GSORTS and associated service readiness systems, should be phased out. Since that time, officials have discussed whether to phase out GSORTS and tentative dates for this action have slipped several times. In 2001, the Office of the Deputy Undersecretary of Defense (Readiness) listed reducing reporting burdens as a key characteristic of its envisioned improved readiness assessment system. In an effort to eliminate this burden of dual reporting, the DIO began to develop a “current unit status” tool as a means for users to manage unit-specific readiness data and submit required reports in support of all current reporting guidelines. 
The tool was to minimize the burden associated with dual reporting by collecting, displaying, and integrating resource data from service authoritative data sources with GSORTS and DRRS. However, in December 2007, the DIO reported that it was unable to deliver the intended functionality of the “current unit status” tool. Instead, the DIO decided to develop an interim reporting tool, known as the SORTSREP tool, which would not provide the type of new capabilities envisioned for the “current unit status” tool, but would simply replicate the functionality of the input tool that the Air Force and Marines already used to input data into GSORTS. After delays and 10 months of effort, the DIO delivered the SORTSREP tool to the Marine Corps for review. Based on this review, in December 2008, the Marine Corps provided the developers and the DIO with 163 pages of detailed descriptions and graphics to explain the SORTSREP tool's deficiencies. It then informed the DIO that it would no longer expend energy and resources to review future versions of the SORTSREP tool and would instead look at leveraging the Army's or Navy's DRRS-A or DRRS-N systems. The Air Force also informed the DIO that it was no longer interested in the SORTSREP tool, and said efforts should be focused on the “current unit status” tool instead. As a result, the Air Force and Marine Corps are currently faced with dual reporting requirements, as illustrated in figure 1. On March 3, 2009, the Air Force Deputy Chief of Staff (Operations, Plans and Requirements) issued a memorandum that updated the Air Force's previous implementing guidance and directed all GSORTS-reporting units to begin assessing readiness in DRRS based on standardized core task lists within 90 days. As a result, Air Force units will report readiness in both DRRS and GSORTS until the DIO is able to deliver the intended functionality of the “current unit status” tool. 
While some Marine Corps units are reporting their capabilities in DRRS, the Marine Corps had not yet directed its units to report in the system as of May 2009. The Commandant of the Marine Corps had stated that he supported the development and implementation of DRRS, but that he would not direct units to assess mission essential tasks in DRRS until the system met its stated requirements and was accepted as the single readiness reporting system of record. Marine Corps officials said that they did not want to place a burden on operational units, which were fighting or preparing to fight a war, by requiring that they report readiness in two different systems. After we completed our audit work, on May 12, 2009, the Marine Corps issued an administrative message that required that units assess their mission essential tasks and missions in DRRS. The message stated that doing so would improve familiarity with DRRS, which will lead to an easier transition when the Marine Corps fields DRRS-Marine Corps (DRRS-MC). Without a viable tool for inputting data, DRRS is not fully integrated with GSORTS or with the service readiness reporting systems and it is not capable of replacing those systems since it does not capture the required data that are contained in those systems. While DRRS is being used to provide Congress with enhanced capability information, the quality of DRRS metrics still faces the same challenges, including limitations in timeliness, precision, and objectivity that existed prior to 1999 when Congress directed DOD to establish a new readiness reporting system. Section 117 of Title 10 of the U.S. 
Code directed the Secretary of Defense to establish a comprehensive readiness reporting system to measure the capability of the armed forces in an “objective, accurate, and timely manner.” However, the enhanced capability data that are captured in DRRS and reported to Congress are no more timely than the readiness data that were being provided to Congress in 1999 using GSORTS. Furthermore, the metrics that are being used to capture the enhanced capability information are less objective and precise than the metrics that were used to report readiness in 1999. The statute directing the development of a new readiness reporting system requires that the reporting system measure in a timely manner the capability of the armed forces to carry out the National Security Strategy, the Secretary of Defense’s defense planning guidance, and the National Military Strategy. The legislation also lists a number of specific requirements related to frequency of measurements and updates. For example, the law requires that the capability of units to conduct their assigned wartime missions be measured monthly, and that units report any changes in their overall readiness status within 24 hours of an event that necessitated the change in readiness status. In its DRRS directive, DOD assigned USD (P&R) responsibility for ensuring the timeliness of DRRS information and data, and it specified that DRRS was to be a near-real-time readiness reporting system. While DOD is reporting readiness information to Congress on a quarterly basis as required, and units are measuring readiness on a monthly basis, DRRS is not a near-real-time reporting system. Specifically, in DRRS, as in GSORTS, operational commanders assess the readiness of their organizations on a monthly basis or when an event occurs that changes the units’ overall reported readiness. Thus, DRRS has not improved the timeliness of the key readiness data that are reported to Congress. 
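The two statutory timing rules described above (a monthly measurement of wartime mission capability, and a report within 24 hours of any event that changes a unit's overall readiness status) can be expressed as a simple compliance check. The sketch below is illustrative only; the data layout and function names are our own and are not part of DRRS or GSORTS.

```python
from datetime import datetime, timedelta

def monthly_measurement_ok(report_dates):
    """True if at least one readiness report falls in every calendar month
    spanned by the reporting history (the monthly-measurement rule)."""
    months = {(d.year, d.month) for d in report_dates}
    first, last = min(report_dates), max(report_dates)
    y, m = first.year, first.month
    while (y, m) <= (last.year, last.month):
        if (y, m) not in months:
            return False  # a spanned month has no measurement
        m += 1
        if m == 13:
            y, m = y + 1, 1
    return True

def change_reported_in_time(event_time, report_time):
    """True if an event that changed overall readiness status was
    reported within the 24-hour window the law requires."""
    return report_time - event_time <= timedelta(hours=24)
```

Checks like these only verify reporting cadence; they say nothing about the near-real-time updating of underlying data that DOD's DRRS directive separately envisioned.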
According to USD (P&R) officials, DRRS data will be more timely than GSORTS data because DRRS will update underlying data from authoritative data sources between the monthly updates. However, DRRS is not yet capturing all the data from the authoritative data sources, and according to service officials, the service systems that support GSORTS also draw information from their service authoritative data sources between the monthly updates. Furthermore, the source and currency of some of the authoritative data that are currently in DRRS are not clearly identified. As a result, some users told us that they are reluctant to use DRRS data to support their decisions. We previously reported that the readiness information that DOD provided to Congress lacked precision, noting that GSORTS readiness measures that differed by 10 percentage points or more could result in identical ratings, with DOD often not reporting the detailed information behind the ratings outside of the department. For example, units that were at 90 and 100 percent of their authorized personnel strengths both were reported as P-1 in DOD’s reports to Congress. In 2003, USD (P&R) recognized the imprecision of the reported metrics from GSORTS and noted that its efforts to develop DRRS would allow enhancements to reported readiness data. As previously noted, the DRRS capability information that DOD is reporting to Congress covers a broader range of missions than the GSORTS information that was provided in the past. However, when comparing the DRRS and GSORTS assessments of units’ capabilities to perform the missions for which the units were organized or designed, DRRS assessments are actually less precise than the GSORTS assessments. Specifically, within GSORTS, overall capability assessments are grouped into four categories based on four percentage ranges for the underlying data. For example, commanders compare on-hand and required levels of personnel and equipment. 
Within DRRS, mission essential task assessments are reported on a scale that includes only three ratings: “yes,” “no,” and “qualified yes,” which can include any assessments that fall between the two extremes. The law directing DOD to establish a new readiness reporting system also requires that the system measure readiness in an objective manner. GSORTS assessments of units' capabilities to execute the missions for which they were organized or designed are based on objective personnel and equipment data and training information that may include both objective and subjective measures. Furthermore, the overall capability assessment in GSORTS is based on an objective rule that calls for the overall assessment to be the same level as the lowest underlying resource or training data level. For example, if a unit reported the highest personnel level (P-1) and the lowest training level (T-4), the rules in the GSORTS guidance instruct the commander to rate the unit's overall capability at the C-4 level. Because GSORTS contains these objective measures and rules, it is easy to evaluate reported readiness to see if it aligns with established reporting criteria. Within DRRS, organizations rate their capabilities based on mission essential tasks. These mission essential tasks have conditions and standards associated with them. The conditions specify the types of environments that units are likely to face as they execute the tasks, such as weather conditions and political or cultural factors. Standards describe what it means for the unit to successfully execute the task under specified conditions. For example, a unit may have to achieve a 90 percent success rate for measures associated with the task being assessed. In spite of these conditions and standards, DRRS mission assessments are often subjective rather than objective. 
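The difference between the two rating schemes can be sketched in code. The percentage breakpoints below are hypothetical (GSORTS groups measures into four ranges, but the exact cutoffs are not given here), and the DRRS aggregation function is likewise an illustration only; as noted above, actual DRRS mission assessments rest on commander judgment rather than a fixed rule.

```python
def gsorts_level(percent_on_hand: float) -> int:
    """Map a resource percentage to a rating level 1 (best) through 4
    (worst). The breakpoints are hypothetical; GSORTS defines four
    percentage ranges for the underlying data."""
    if percent_on_hand >= 90:
        return 1
    elif percent_on_hand >= 80:
        return 2
    elif percent_on_hand >= 70:
        return 3
    return 4

def gsorts_overall(levels):
    """GSORTS rule described in the report: the overall C-level equals
    the worst (highest-numbered) underlying resource or training level."""
    return max(levels)

def drrs_rating(task_assessments):
    """Illustrative aggregation onto the three-value DRRS scale: 'yes'
    if every task is 'yes', 'no' if every task is 'no', otherwise
    'qualified yes'. In practice the report notes that commanders apply
    subjective judgment, so no fixed rule like this actually governs."""
    if all(a == "yes" for a in task_assessments):
        return "yes"
    if all(a == "no" for a in task_assessments):
        return "no"
    return "qualified yes"

# Precision loss noted in the report: 90 and 100 percent strength both
# map to the same level (P-1 for personnel).
assert gsorts_level(90) == gsorts_level(100) == 1
# Objective overall rule: P-1 plus T-4 yields an overall C-4.
assert gsorts_overall([1, 4]) == 4
```

The sketch makes the precision point concrete: GSORTS at least distinguishes four levels by objective rule, while any mixed set of task assessments in DRRS collapses into the single “qualified yes” bucket.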
In DRRS program guidance, DOD has defined mission essential tasks as tasks that are approved by the commander and that, based on mission analysis, are “absolutely necessary, indispensable, or critical to mission success.” In prior briefings and reports to Congress, we have noted examples that highlight the subjective nature of DRRS mission assessments. For example, we noted that one commander used his professional judgment to decide that his command was “qualified” to execute a mission even though the preponderance of the “indispensable” tasks that supported that mission were rated as “no.” In contrast, other commanders used their professional judgments to rate missions as “qualified” based on one or more “qualified” tasks among many “yes” tasks. DRRS does not have all of the resource, training, and readiness data, or the connectivity with the department's operations planning and execution system, that the services, Joint Staff, and certain combatant commands need to manage and deploy forces. As a result, DRRS is not yet able to operate as the department's single readiness reporting system, as intended. The Secretary of Defense's and the Under Secretary of Defense's guidance documents recognize that DRRS needs to support the data requirements of multiple users. For example, the Secretary of Defense's June 25, 2004, memorandum directed USD (P&R) to develop DRRS to support the data requirements identified by the Chairman of the Joint Chiefs of Staff, the combatant commanders, the Secretaries of the military departments, and the Chief of the National Guard Bureau. Furthermore, the 2002 DRRS directive noted that DRRS was to build upon DOD's existing processes and readiness assessment tools and that ESORTS information (capability, resource, and training), where appropriate, is integrated into DOD's planning systems and processes. It also directed the Chairman of the Joint Chiefs of Staff to maintain the GSORTS database until key capabilities of DRRS become fully operational. 
Officials with U.S. Joint Forces Command and U.S. Special Operations Command reported that historical data are needed to manage forces and provide users the ability to analyze readiness trends. Similarly, service officials stated a need for historical data so they can manage their forces and take action to address readiness issues. In 2005, USD (P&R) reported that unit resource data, including detailed inventory and authorization data on personnel, equipment, supply, and ordnance, were available in DRRS. However, in response to a survey we conducted in December 2008, the services and certain combatant commands stated that necessary current and historical resource and training data were not available in DRRS. For example, officials from all four services responded that DRRS, at that time, contained less than half of their GSORTS resource and training data. In addition, officials from U.S. Joint Forces Command, U.S. Special Operations Command, U.S. Strategic Command, and U.S. Transportation Command all responded that historical resource data were not available in DRRS. We confirmed that this information was still not available when we concluded our review, and in April 2009, the DIO said it was still working on this data availability issue. Furthermore, user organizations have reported inaccuracies in the data that are available in DRRS. Marine Corps and U.S. Special Operations Command officials stated that inconsistencies between DRRS data and the data in other readiness systems have caused them to adjudicate the inconsistencies by contacting their subordinate units directly. Army officials noted that searches of DRRS data can produce different results than searches in the Army's data systems. For example, they noted that a DRRS search for available personnel with a particular occupational specialty produced erroneously high results because DRRS did not employ the appropriate business rules when conducting the search. 
Specifically, DRRS did not apply a business rule to account for the fact that an individual person can possess multiple occupational specialty codes but can generally fill only one position at a time. DIO officials informed us that they intend to correct issues with the accuracy of data drawn from external databases. However, the current version of the DRRS Integrated Master Schedule indicates that the ability of DRRS to provide the capability to correctly search, manipulate, and display current and historical GSORTS and mission essential task data will not be complete until June 2010. As a result, the reliability of the DRRS data is likely to remain questionable and a number of DOD organizations will likely continue to rely on GSORTS and other sources of readiness data to support their decision making. One important DRRS function is integration with DOD’s planning systems. Specifically, the 2002 DRRS directive requires USD (P&R) to ensure that, where appropriate, ESORTS information (capability, resource, and training) is compatible and integrated into DOD’s planning systems and processes. Global force management is one of the DOD planning processes that is to be integrated with DRRS. Global Force Management is a process to manage, assess, and display the worldwide disposition of U.S. forces, providing DOD with a global view of requirements and availability of forces to meet those requirements. The integration of DRRS with global force management planning processes is supposed to allow DOD to link force structure, resources, and capabilities data to support analyses, and thus help global force managers fill requests for forces or capabilities. Officials from the four organizations with primary responsibilities for providing forces (U.S. Joint Forces Command, U.S. Special Operations Command, U.S. Strategic Command, and U.S. Transportation Command) all stated that they are unable to effectively use DRRS to search for units that will meet requested capabilities. 
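The occupational-specialty search error described above comes down to a missing deduplication step: because one person can hold several specialty codes, counting code matches instead of distinct people inflates the result. A minimal sketch with hypothetical data and field names (not DRRS's actual schema):

```python
# Hypothetical flattened table: one row per (person, specialty code).
rows = [
    (1, "18B"), (1, "18C"),   # person 1 holds two specialty codes
    (2, "18B"),
    (3, "11B"),
]

def naive_count(rows, wanted):
    """Counts matching rows; a person with several matching codes is
    counted once per code, producing the inflated result the report
    describes."""
    return sum(1 for _, code in rows if code in wanted)

def distinct_count(rows, wanted):
    """Applies the business rule: count distinct people, since a person
    can generally fill only one position at a time."""
    return len({pid for pid, code in rows if code in wanted})

# Searching for anyone holding code 18B or 18C:
assert naive_count(rows, {"18B", "18C"}) == 3      # person 1 double-counted
assert distinct_count(rows, {"18B", "18C"}) == 2   # correct head count
```

The same pattern applies to any query that joins a person table to a multi-valued attribute: without a distinct-entity rule, the row count answers a different question than the one the planner asked.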
These commands also reported that DRRS does not currently contain the information and tools necessary to support global force management. For example, officials from U.S. Northern Command told us that when they used DRRS to search for available helicopters of a certain type, they found thousands, but when U.S. Joint Forces Command did the same DRRS search they found hundreds. The current version of the DRRS Integrated Master Schedule indicates that DRRS will not be able to fully support global force management until March 2011. As a result, these commands continue to rely on GSORTS rather than DRRS to support their planning and sourcing decisions. DRRS is not currently and consistently providing timely, objective, and accurate information, and it is not clear exactly where the department stands in its efforts to meet this expectation because system requirements remain in a state of flux, and the program office lacks disciplined program management and results information due to a long-standing lack of rigor in its approach to acquiring and deploying system capabilities. This situation can be attributed, in part, to long-standing limitations in the program office's focus on acquiring the human capital skills needed to manage such a complex initiative. It can also be linked to many years of limited program office oversight and accountability. Although program oversight has recently increased, oversight bodies have not had sufficient visibility into the program's many management weaknesses. DRRS is providing Congress and readiness users with additional mission and mission essential task capability data that were not available in GSORTS. 
However, after investing about 7 years and about $96.5 million in developing and implementing DRRS, the system’s schedule has been extended, requirements are not stable, and the system still does not meet congressional and DOD requirements for a comprehensive readiness reporting system to assess readiness and help decision makers manage forces needed to conduct combat and contingency operations around the world. Given DRRS performance and management weaknesses, it is critical that immediate action be taken to put the program on track and position it for success. Without this action, it is likely that DRRS will cost more to develop and deploy than necessary and that DOD will not have a comprehensive reporting system that meets the needs of all the decision makers who rely on accurate, timely, and complete readiness information. To address the risks facing DOD in its acquisition and deployment of DRRS, and to increase the chances of DRRS meeting the needs of the DOD readiness community and Congress, we recommend that the Secretary of Defense direct the Deputy Secretary of Defense, as the Chair of the Defense Business Systems Management Committee, to reconsider the committee’s recent approval of DRRS planned investment for fiscal years 2009 and 2010, and convene the Defense Business Systems Management Committee to review the program’s past performance and the DIO’s capability to manage and deliver DRRS going forward. To fully inform this Defense Business Systems Management Committee review, we also recommend that the Secretary direct the Deputy Secretary to have the Director of the Business Transformation Agency, using the appropriate team of functional and technical experts and the established risk assessment methodology, conduct a program risk assessment of DRRS, and to use the findings in our report and the risk assessment to decide how to redirect the program’s structure, approach, funding, management, and oversight. 
In this regard, we recommend that the Secretary direct the Deputy Secretary to solicit the advice and recommendations of the DRRS Executive Committee. We also recommend that the Secretary, through the appropriate chain of command, take steps to ensure that the following occur:

1. DRRS requirements are effectively developed and managed with appropriate input from the services, Joint Staff, and combatant commanders, including (1) establishing an authoritative set of baseline requirements prior to further system design and development; (2) ensuring that the different levels of requirements and their associated design specifications and test cases are aligned with one another; and (3) developing and instituting a disciplined process for reviewing and accepting changes to the baseline requirements in light of estimated costs, benefits, and risk.

2. DRRS testing is effectively managed, including (1) developing test plans and procedures for each system increment test event that include a schedule of planned test activities, defined roles and responsibilities, test entrance and exit criteria, test defect management processes, and metrics for measuring test progress; (2) ensuring that all key test events are conducted on all DRRS increments; (3) capturing, analyzing, reporting, and resolving all test results and test defects of all developed and tested DRRS increments; and (4) establishing an effective test management structure that includes assigned test management roles and responsibilities, a designated test management lead and a supporting working group, and a reliable schedule of test events.

3. The DRRS integrated master schedule is reliable, including ensuring that the schedule (1) captures all activities from the work breakdown structure, including the work to be performed and the resources to be used; (2) identifies the logical sequencing of all activities, including defining predecessor and successor activities; (3) reflects whether all required resources will be available when needed and their cost; (4) ensures that all activities and their duration are not summarized at a level that could mask critical elements; (5) achieves horizontal integration in the schedule by ensuring that all external interfaces (hand-offs) are established and interdependencies among activities are defined; (6) identifies float between activities by ensuring that the linkages among all activities are defined; (7) defines a critical path that runs continuously to the program's finish date; (8) incorporates the results of a schedule risk analysis to determine the level of confidence in meeting the program's activities and completion date; and (9) includes the actual start and completion dates of work activities performed so that the impact of deviations on downstream work can be proactively addressed.

4. The DRRS program office is staffed on the basis of a human capital strategy that is grounded in an assessment of the core competencies and essential knowledge, skills, and abilities needed to perform key DRRS program management functions, an inventory of the program office's existing workforce capabilities, and an analysis of the gap between the assessed needs and the existing capabilities.

5. DRRS is developed and implemented in a manner that does not increase the reporting burden on units and addresses the timeliness, precision, and objectivity of metrics that are reported to Congress. 
To ensure that these and other DRRS program management improvements and activities are effectively implemented and that any additional funds for DRRS implementation are used effectively and efficiently, we further recommend that the Secretary direct the Deputy Secretary to ensure that both the Human Resources Management Investment Review Board and the DRRS Executive Committee conduct frequent oversight activities of the DRRS program, and report any significant issues to the Deputy Secretary. In written comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Military Personnel Policy) performing the duties of the Under Secretary of Defense (Personnel and Readiness), DOD stated that the report is flawed in its assessment of DRRS, noting that DRRS is a net-centric application that provides broad and detailed visibility on readiness issues, and that achieving data sharing across the DOD enterprise was groundbreaking work fraught with barriers and obstacles, many of which have now been overcome. In addition, DOD stated that it was disappointed that the report did not address cultural impediments that it considers to be the root cause of many of the issues cited in the report and of many previous congressional concerns on readiness reporting. DOD further stated that the report instead focuses on past acquisition process and software development problems that it believes have now been remedied. According to the department, this focus, coupled with inaccurate and misleading factual information included in the report, led us to develop an incomplete picture of the program. Notwithstanding these comments, DOD agreed with two of our recommendations and partially agreed with a third. However, it disagreed with the remaining five recommendations, and provided comments relative to each recommendation. DOD's comments are reprinted in their entirety in appendix III. 
In summary, we do not agree with DOD’s overall characterization of our report or the positions it has taken in disagreeing with five of our recommendations, finding them to be inconsistent with existing guidance and recognized best practices on system acquisition management, unsupported by verifiable evidence, and in conflict with the facts detailed in our report. Further, we recognize that developing DRRS is a significant and challenging undertaking that involves cultural impediments. As a result, our report explicitly focuses on the kind of program management rigor and disciplines needed to address such impediments and successfully acquire complex systems, including effective requirements development and management and executive oversight. We also disagree that our report focuses on past issues and problems. Rather, it provides evidence that demonstrates a long-standing and current pattern of system acquisition and program oversight weaknesses that existed when we concluded our audit work and that DOD has not provided any evidence to demonstrate has been corrected. In addition, we would emphasize that we defined our objectives, scope, and methodology, and executed our audit work in accordance with generally accepted government auditing standards, which require us to subject our approach as well as the results of our audit work to proven quality assurance checks and evidence standards that require us to seek documentation rather than relying solely on testimonial evidence. While we support any departmental efforts, whether completed or ongoing, that would address the significant problems cited in our report, we note that DOD, in its comments, did not specifically cite what these efforts are or provide documentation to support that they have either been completed or are ongoing. Therefore, we stand by our findings and recommendations. 
Moreover, we are concerned that in light of the program’s significant and long-standing management weaknesses, the department’s decision not to pursue corrective actions for five of our eight recommendations will further increase risk to achieving program success, and is not in the best interests of the military readiness community or the U.S. taxpayer. Accordingly, we encourage the department to reconsider its position when it submits its written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Oversight and Government Reform, as well as the House and Senate Committees on Appropriations, as required under 31 U.S.C. 720. DOD’s specific comments on each recommendation, along with our responses, follow. The department did not agree with our recommendation for the Deputy Secretary of Defense, as the Chair of the Defense Business Systems Management Committee, to reconsider the committee’s recent approval of DRRS planned investment for fiscal years 2009 and 2010, and to convene the Defense Business Systems Management Committee to review the program’s past performance and the DIO’s capability to manage and deliver DRRS going forward in deciding how best to proceed. In this regard, DOD stated that the Investment Review Board certification and Defense Business Systems Management Committee approval were granted in compliance with the established processes. It also added that oversight of the specific issues identified in this report is the responsibility of the DRRS Executive Committee, which it stated has provided and will continue to provide appropriate governance for this effort. It also stated that USD (P&R) will ensure that the program is compliant with all acquisition requirements prior to submission for further certifications. 
We do not question whether the Investment Review Board certification and Defense Business Systems Management Committee approval were provided in accordance with established processes, as this is not relevant to our recommendation. Rather, our point is that the Investment Review Board and the Defense Business Systems Management Committee were provided erroneous and incomplete information about DRRS progress, management weaknesses, and risks, and thus based their respective decisions on that information. Moreover, neither the Investment Review Board nor the Defense Business Systems Management Committee was informed about the findings in our report, even though we shared our findings with the DRRS program director and other DIO officials prior to both the Investment Review Board and the Defense Business Systems Management Committee deliberations. Therefore, while the Investment Review Board certification and the Defense Business Systems Management Committee approval were granted in accordance with established processes, they were not based on a full disclosure of facts. Moreover, while we support DOD’s comment that it will ensure that the program is in compliance with all acquisition requirements prior to further certifications, nothing precludes the board or the committee from reconsidering their respective decisions in light of our report. With respect to DOD’s comment that the DRRS Executive Committee has provided and will continue to provide appropriate governance for this effort, we do not disagree that the DRRS Executive Committee has an oversight role. However, the DRRS Executive Committee should not be solely responsible for oversight of the specific issues in our report. Both the Investment Review Board and the Defense Business Systems Management Committee provide additional layers of oversight pursuant to law and DOD policy. 
Accordingly, we stand by our recommendation as it appropriately seeks to have the Investment Review Board and Defense Business Systems Management Committee, in collaboration with the DRRS Executive Committee, act in a manner that is consistent with their respective roles as defined in law. In doing so, our intent is to promote accountability for DRRS progress and performance, and prompt action to address the many risks facing the program. The department agreed with our recommendation for the Deputy Secretary of Defense, as the chair of the Defense Business Systems Management Committee, to have the Business Transformation Agency conduct a risk assessment of DRRS, and with the advice and recommendation of the DRRS Executive Committee, to use the results of this assessment and the findings in our report to decide how to redirect the program. In this regard, the department stated that this assessment will be complete by the middle of fiscal year 2010. The department did not agree with our recommendation for ensuring that DRRS requirements are effectively developed and managed. In this regard, it stated that the program has an authoritative set of baseline requirements established with an effective governance process for overseeing the requirements management process, to include biweekly reviews as part of the DRRS configuration control process. We do not agree. At the time we concluded our work, DRRS requirements were not stable, as evidenced by the fact that an additional 530 requirements had been identified that the DIO was still in the process of reviewing and had yet to reach a position on their inclusion, or process them through the DRRS change control governance process. Moreover, when we concluded our work, this change control process had yet to be approved by the DRRS governance structure. 
As we state in our report, the introduction of such a large number of requirements provided a compelling basis for concluding that requirements had yet to progress to the point that they could be considered sufficiently complete and correct to provide a stable baseline. Our recommendation also noted that the Secretary should take steps to ensure that the different levels of requirements be aligned with one another. DOD’s comments did not address this aspect of our recommendation. The department did not agree with our recommendation for ensuring that DRRS testing is effectively managed. In this regard, it stated that DRRS testing is already in place and performing effectively, noting, among other things, that (1) the DIO goes through a rigorous testing regimen that includes documenting test plans with user test cases for each incremental release, utilizing system integration, acceptance, interoperability, and operational testing; (2) user test cases and functionality are validated by designated testers independent of the developers prior to a deployment; and (3) for interoperability testing, the DIO has a designated test director and the Joint Interoperability Test Command (JITC) is the designated interoperability and operational test activity. We do not agree. As our report concludes, DRRS testing has not been effectively managed because it has not followed a rigorous testing regimen that includes documented test plans, cases, and procedures. To support this conclusion, our report cites numerous examples of test planning and execution weaknesses, as well as the DIO’s repeated inability to demonstrate through requisite documentation that the testing performed on DRRS has been adequate. Our report shows that test events for already acquired, as well as currently deployed and operating, DRRS releases and subreleases were not based on well-defined plans, and that DOD had not filled its testing director vacancy. 
Further, our report shows that test events were not fully executed in accordance with plans that did exist, or executed in a verifiable manner, or both. For example, although increments of DRRS functionality had been put into production, the program had no documentation (e.g., test procedures, test cases, test results) to show that the program office had performed system integration testing, system acceptance testing, or operational testing on any DRRS release or subrelease, even though the DIO’s test strategy stated that such tests were to be performed before system capabilities became operational. Moreover, evidence showed that the results of all executed test events had not been captured and used to ensure that problems discovered were disclosed to decision makers, and ultimately corrected. With respect to DOD’s comments that JITC is the designated lead for interoperability and operational testing, our report recognizes that JITC is to conduct both interoperability and operational testing before the system is deployed and put into production (i.e., used operationally). However, during the course of our audit, the DIO could not produce any evidence to show that interoperability and operational testing of all operating system increments had been conducted. The department did not agree with our recommendation for ensuring that the DRRS integrated master schedule is reliable. In this regard, it stated that a process is already in place to ensure that the schedule is current, reliable, and meets all the criteria outlined in the recommendation. We do not agree. As our report states, an integrated master schedule for DRRS did not exist until November 2008, which was 2 months after we first requested one. Moreover, following our feedback to the DIO on limitations in this initial version, a revised integrated master schedule was developed in January 2009, which was also not reliable. Subsequently, a revised integrated master schedule was developed in April 2009. 
However, as we detail in our report, that version still contained significant weaknesses. For example, it did not establish a critical path for all key activities or include a schedule risk analysis, and was not being updated using logic and durations to determine the dates for all key activities. These practices are fundamental to producing a sufficiently reliable schedule baseline that can be used to measure progress and forecast slippages. In addition, the schedule introduced considerable concurrency across key activities and events for several modules, which introduces increased risk. Therefore, we stand by our recommendation. The department partially agreed with our recommendation for ensuring that it has an effective human capital strategy. In this regard, it stated that actions are underway to add more full-time civilian support to the DIO, and that plans exist to convert some contractor to civilian billets during the 2010/2011 time frame. We support the department’s actions and plans described in its comments to address the DIO human capital management limitations discussed in our report, but would note that they do not go far enough to systematically ensure that the program has the right people with the right skills to manage the program in both the near term and the long term. To accomplish this, the department needs to adopt the kind of strategic and proactive approach to DRRS workforce management that our report describes and our recommendation embodies. As our evaluations and research show, failure to do so increases the risk that the program office will not have the people it needs to effectively and efficiently manage DRRS. Therefore, we believe that the department needs to fully implement our recommendation. 
The department did not agree with our recommendation to take steps to ensure that DRRS is developed and implemented in a manner that does not increase the reporting burden on units and addresses the timeliness, precision, and objectivity of metrics that are reported to Congress. In this regard, it stated that one of the primary tenets of DRRS has been to reduce reporting requirements on the war fighter. It also stated that DRRS is already using state-of-the-art technology to ensure that near-real-time data are available for the war fighters. Finally, it stated that the DRRS governance structure that is currently in place ensures that DRRS development does not deviate from these core principles. While we recognize that a goal of DRRS is to reduce the reporting burden on the war fighter, we disagree with the department’s position because the system has not yet achieved this goal. As our report states, while the DIO has developed the software for users to enter mission essential task data into DRRS, the DIO has been unsuccessful in attempts to develop a tool that would allow Air Force and Marine Corps users to enter readiness data to meet all of their readiness reporting requirements through DRRS. As a result, rather than reducing the burden on reporting units, DRRS actually increased the burden on Air Force and Marine Corps units because they were required to report readiness information in both DRRS and GSORTS. Without a viable tool for inputting data, DRRS is not fully integrated with GSORTS or with the service readiness reporting systems, and it is not capable of replacing those systems because it does not capture the required data contained in them. In addition, the DRRS readiness data that are currently reported to Congress are not near-real-time data. Specifically, the periodicity for DRRS capability assessments is the same as the legacy GSORTS system’s readiness reports—monthly or when an event occurs that changes a unit’s overall readiness. 
Furthermore, our report shows that DRRS mission assessments are often subjective and imprecise because they are reported on a scale that includes only three ratings—“yes,” “no,” and “qualified yes”—the last of which can include any assessment that falls between the two extremes. Therefore, because additional actions are still needed to reduce reporting burdens and improve the timeliness, precision, and objectivity of the DRRS data that are reported to Congress, we stand by our recommendation. The department agreed with our recommendation for ensuring that both the Human Resources Management Investment Review Board and the DRRS Executive Committee conduct frequent oversight activities of the DRRS program and report any significant issues to the Deputy Secretary of Defense. In this regard, the department stated that the USD (P&R) component acquisition executive is working with the program to ensure that it becomes fully compliant with all acquisition requirements. In addition, it stated that the acquisition executive will certify compliance to the Human Resources Investment Review Board and the Deputy Chief Management Officer prior to submission of future certification requests. Further, it stated that the current DRRS governance process will provide sustained functional oversight of the program and that issues that arise in any of these areas will be elevated for review, as appropriate. We believe these are positive steps. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact us at [email protected] or [email protected] or at our respective phone numbers, (202) 512-9619 and (202) 512-3439. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to assess the extent to which (1) the Department of Defense (DOD) has effectively managed and overseen the acquisition and deployment of the Defense Readiness Reporting System (DRRS), and (2) the features of DRRS have been implemented and are consistent with legislative requirements and DOD guidance. We did not evaluate the department’s overall ability to assess the readiness of its forces or the extent to which data contained in any of its readiness reporting systems, including DRRS and the Global Status of Resources and Training System (GSORTS), reflect capabilities, deficiencies, vulnerabilities, or performance issues. Our review focused on acquisition and program management issues, such as requirements management, schedule, and human capital requirements; the current usage of DRRS; and the extent to which DRRS’ features address legislative requirements and DOD guidance. To determine the extent to which the DRRS acquisition and deployment has been effectively managed and overseen, we focused on the following acquisition management areas: (1) requirements development and management, (2) test planning and execution, (3) DRRS schedule reliability, and (4) human capital planning. In doing so, we analyzed a range of program documentation, such as high-level and detailed-level requirements documentation, test plans and reports, the current DRRS schedule, and program management documentation, and interviewed cognizant program and contractor officials. 
To determine the extent to which the program had effectively implemented requirements development and management, we reviewed relevant program documentation, such as the concept of operations document, capability requirements document, software requirements document, requirements traceability matrix, configuration management plan, and the program management plan, as well as minutes of change control board meetings, and evaluated them against relevant guidance. Moreover, we reviewed briefing slides from meetings of DRRS oversight bodies in order to identify concerns about DRRS expressed by representatives from the DRRS community of users, as well as the efforts by the Joint Staff (at the direction of DRRS Executive Committee) to identify and address any gaps identified by users in the development of DRRS requirements. To determine the extent to which the program has maintained traceability backward to high-level business operation requirements and system requirements, and forward to system design specifications and test plans, we randomly selected 60 program requirements and traced them both backward and forward. This sample was designed with a 5 percent tolerable error rate at the 95 percent level of confidence so that, if we found zero problems in our sample, we could conclude statistically that the error rate was less than 5 percent. In addition, we interviewed program and development contractor officials to discuss the requirements development and management process. To determine if the DRRS Implementation Office (DIO) is effectively managing DRRS testing, we reviewed relevant documentation, such as the DRRS Test and Evaluation Master Plans and test reports and compared them to DOD and other relevant guidance. Further, we reviewed developmental test plans and procedures for each release/subrelease that to date has either been developed or fielded and compared them with best practices to determine whether test activities had been adequately documented. 
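The zero-failure acceptance-sampling logic behind this sample design can be sketched with a simple binomial model (an illustrative calculation, not a reproduction of GAO's actual design documentation): if the true error rate were 5 percent, the chance of drawing n items with no failures is 0.95^n, and a sample of 59 or more drives that chance below 5 percent, so observing zero problems in 60 items supports the stated conclusion.

```python
# Illustrative sketch of the zero-failure sample-size rationale,
# assuming a simple binomial model with independent draws.
def zero_failure_probability(n: int, tolerable_rate: float = 0.05) -> float:
    """Probability of observing zero failures in a sample of n items
    if the true error rate equals the tolerable rate."""
    return (1.0 - tolerable_rate) ** n

# Smallest n for which zero observed failures rules out, at 95 percent
# confidence, an error rate of 5 percent or more.
n = 1
while zero_failure_probability(n) > 0.05:
    n += 1
print(n)  # 59, so the sample of 60 used here is sufficient
```

Under this model, a sample of 60 with zero observed traceability failures would have allowed the 5 percent error-rate bound stated in the report.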
We also examined test results and reports for the already acquired, as well as currently deployed and operating, DRRS releases and subreleases and compared them against plans to determine whether they had been executed in accordance with plans. Moreover, we reviewed key test documentation, such as the Software Version Descriptions, and compared them against relevant guidance to determine whether defect data were being captured, analyzed, prioritized, and reported. We also interviewed program and contractor officials to gain clarity beyond what was included in the program documentation, including officials from the Defense Information Systems Agency’s Joint Interoperability Test Command, in order to determine the results of its efforts to independently test DRRS interoperability. In addition, to determine the extent to which the program had effectively tested its system requirements, we observed the DIO’s efforts to demonstrate the traceability of 60 program requirements to test cases and results. This sample was designed with a 5 percent tolerable error rate at the 95 percent level of confidence so that, if we found zero problems in our sample, we could conclude statistically that the error rate was less than 5 percent. To determine the extent to which the program’s schedule reflects key estimating practices that are fundamental to having a reliable schedule, we reviewed the DRRS integrated master schedules and schedule estimates and compared them with relevant guidance. We also used schedule analysis software tools to determine whether the latest schedule included key information, such as the activities critical to on-time completion of DRRS, a logical sequence of activities, and evidence that the schedule was periodically updated. We also reviewed the schedule to determine the time frames for completing key program activities and to determine any changes to key milestones. 
In addition, we shared the results of our findings with program and contractor officials and asked for clarifications. We then reviewed the revised schedule, prepared in response to the weaknesses we found, and compared it with relevant guidance. To evaluate whether DOD is adequately providing for the DRRS program’s human capital needs, we compared the program’s efforts against relevant criteria and guidance, including our own framework for strategic human capital management. In doing so, we reviewed key program documentation, such as the program management plan and the DIO organizational structure to determine whether it reflected key acquisition functions and identified whether these functions were being performed by government or contractor officials. We interviewed key officials to discuss workforce analysis and human capital planning efforts. To determine the level of oversight and governance available to the DRRS community of users, we attended relevant meetings, met with officials responsible for program certification, and reviewed relevant guidance and program documentation. Specifically, we attended Battle Staff meetings and analyzed briefing slides and meeting minutes from the DRRS Executive Committee, General and Flag Officer’s Steering Committee, and Battle Staff meetings—the main DRRS governance bodies. In addition, we reviewed key DRRS certification and approval documentation provided by the Human Resources Management Investment Review Board, such as economic viability analyses and the business system certification dashboard and met with Investment Review Board officials to determine the basis for certifying and approving DRRS. To determine the extent to which the features of DRRS have been implemented and are consistent with legislative requirements and DOD guidance, we first examined the language of Section 117 of Title 10 of the United States Code, which directs the Secretary of Defense to establish a comprehensive readiness reporting system. 
We identified criteria for this system in DOD’s directive formally establishing the system. We evaluated the system by conducting interviews—see table 2 below for a list of these organizations—and receiving system demonstrations from members of the readiness community to determine how they used DRRS and how their usage compared with the criteria established for the system. We also conducted content and data analysis of system documents and briefing packages provided by the DIO and Joint Staff. To capture the broadest amount of data about the system, we conducted a survey of readiness offices at all of the service headquarters, combatant commands, and the National Guard Bureau regarding how DRRS was currently being used and the types and amount of data available in the system. In addition, to track the development of DRRS capabilities, we attended Battle Staff meetings and analyzed documentation from meetings of all the DRRS governance bodies. We also searched for and extracted information from DRRS in order to support other GAO ongoing readiness reviews. While our usage of the system was not intended as a formal test of the system, our general observations concerning system functionality and the range of available data were consistent with the observations of most other users, as noted in our survey. We conducted our work from April 2008 through August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Our research and evaluations of information technology programs have shown that the ability to deliver promised system capabilities and benefits on time and within budget largely depends on the extent to which key program management disciplines are employed by an adequately staffed program management office. Among other things, these disciplines include a number of practices associated with effectively developing and managing system requirements, adequately testing system capabilities, and reliably scheduling the work to be performed. They also include proactively managing the program office’s human capital needs, and promoting program office accountability through effective program oversight. Department of Defense (DOD) acquisition policies and guidance, along with other relevant guidance, recognize the importance of these management and oversight disciplines. As we have previously reported, not employing these and other program management disciplines increases the risk that system acquisitions will not perform as intended and require expensive and time-consuming rework. Defense Readiness Reporting System (DRRS) acquisition and deployment has for years not been effectively managed in accordance with these key program management disciplines that are recognized in DOD and other relevant guidance, and are fundamental to delivering a system that performs as intended on time and within budget. Well-defined and well-managed requirements are a cornerstone of effective system development and acquisition. According to recognized guidance, documenting and implementing a disciplined process for developing and managing requirements can help to reduce the risks of producing a system that is not adequately tested, does not meet user needs, and does not perform as intended. 
Effective requirements development and management includes, among other things, (1) effectively eliciting user needs early and continuously in the system life-cycle process, (2) establishing a stable baseline set of requirements and placing this baseline under configuration management, (3) ensuring that system requirements are traceable backward to higher level business or operational requirements (e.g., concept of operations) and forward to system design documents (e.g., software requirements specification) and test plans, and (4) controlling changes to baseline requirements. DRRS requirements have not been effectively developed and managed. Specifically, (1) key users have only recently become engaged in developing requirements, (2) requirements have been experiencing considerable change and are not yet stable, (3) different levels of requirements and related test cases have not been aligned with one another, and (4) changes to requirements have not been effectively controlled. As a result, efforts to develop and deliver initial DRRS capabilities have taken longer than envisioned and these capabilities have not lived up to the readiness community’s expectations. These failures increase the risk that future DRRS capabilities will not meet expectations and that expensive and time-consuming system rework will be necessary. One of the leading practices associated with effective requirements development is engaging system users early and continuously in the process of defining requirements. As we have previously reported, assessing user needs early in the process increases the probability of success in defining, designing, and delivering a system that meets user needs and performs as intended. 
To the DRRS Implementation Office’s (DIO) credit, the October 2008 DRRS Risk Management Plan recognizes this by stating that the success of DRRS depends on participation and support from the broad readiness community, which includes combatant commands, Joint Staff, and the military services. However, until recently, key users were not effectively engaged in DRRS requirements development and management, although the reasons for this vary. Specifically, DIO officials told us that beginning in 2002, they reached out to all user groups—combatant commands, Joint Staff, and the military services—in defining requirements. For example, they cited a July 2002 memorandum issued by the Office of the Under Secretary of Defense for Personnel and Readiness (USD P&R) that encouraged the Director of the Joint Chiefs of Staff, Deputy Commanders of the Combatant Commands, Service Operations Deputies, and Directors of Defense Agencies to actively support the DRRS effort by ensuring that their organizations are represented at Battle Staff meetings. However, these officials told us that the military services and Joint Staff chose not to participate. In contrast, officials from these user groups told us their involvement had been limited by what they characterized as difficulties in submitting requirements through the DRRS governance boards that were in place at that time. For example, an official from the Joint Forces Command said that the Forces Battle Staff governance board did not meet for about a year between 2005 and 2006. Further, the official said that the meetings that were held did not offer users the opportunity to discuss their concerns or influence the requirements process. Similarly, an official from the Marine Corps cited a lack of clear and transparent communication from the DIO as a significant impediment. 
Notwithstanding this lack of stakeholder involvement in setting requirements, the Office of USD (P&R) developed and issued a DRRS concept of operations in 2004, which DIO officials told us was based on input from the combatant commands, relevant departmental directives, and DRRS governance boards (e.g., Battle Staff). In our view, this document provided a high-level overview of proposed DRRS capabilities from which more detailed requirements could be derived. However, the concept of operations was not approved by all key players in the readiness community. Specifically, DIO officials stated that the document had not been approved by the military services and the Joint Staff. According to these officials, the reason for not seeking all stakeholders’ approval, and for deciding to begin developing more detailed requirements in the absence of an approved concept of operations, was that the 2002 DRRS DOD directive provided a sufficient basis to begin developing and deploying what they anticipated would be the initial versions of DRRS. In 2008, after 6 years of effort to define DRRS requirements and develop and deploy system capabilities, the Joint Staff, at the direction of the DRRS Executive Committee, conducted an analysis of DRRS capabilities—referred to as the “capabilities gap analysis.” To the Joint Staff’s credit, this analysis has appropriately focused on soliciting comments from the entire readiness community and on identifying any gaps between the DRRS requirements and the needs of this community. As will be discussed in the next section, this analysis resulted in 530 additional user requirements. The extended period of limited involvement by key DRRS users in defining a concept of operations and related capabilities and requirements has impeded efforts to develop a clear understanding of DRRS expectations, constraints, and limitations, which, in turn, has contributed to significant delays in providing the readiness community with needed system support. 
While the recent Joint Staff action to engage the entire DRRS user community is a positive step toward overcoming this long-standing problem, it remains to be seen whether this engagement will produce agreement and commitment across the entire readiness user community around DRRS requirements. As previously noted, establishing an authoritative set of baseline requirements prior to system design and development is necessary to design, develop, and deliver a system that performs as intended and meets users’ operational needs. In general, a baselined set of requirements is one that is defined to the point that extensive changes are not expected, placed under configuration management, and formally controlled. DRRS requirements are currently in a state of flux. Specifically, the fact that 530 new user requirements have recently been identified means that the suite of requirements documentation associated with the system will need to be changed and thus is not stable. To illustrate, program officials told us that, as of late February 2009, these 530 new requirements had not been fully evaluated by the DIO and DRRS governance boards and thus not yet approved. As a result, their impact on the program is not clear. Compounding this instability in the DRRS requirements is the fact that additional changes are envisioned. According to program officials, the changes resulting from the gap analysis and reflected in the latest version of the DRRS concept of operations, which was approved by the DRRS Executive Committee in January 2009, have yet to be reflected in other requirements documents, such as the detailed system requirements. Although defining and developing requirements is inherently an iterative process, having a baseline set of requirements that are stable is a prerequisite to effective and efficient development of an operationally capable and suitable system. 
Without them, the DIO will not be able to deliver a system that meets user needs on time, and it is unlikely that future development and deployment efforts will produce better results. One of the leading practices associated with developing and managing requirements is maintaining bidirectional traceability from high-level operational requirements (e.g., concept of operations and functional requirements) through detailed lower-level requirements and design documents (e.g., software requirements specification) to test cases. Such traceability is often accomplished through the use of a requirements traceability matrix, which serves as a crosswalk between different levels of related requirements, design, and testing documentation. The DRRS program management plan recognizes the importance of traceability, stating that requirements are to be documented and linked to acceptance tests, scripts, and criteria. Despite the importance of traceability, DIO officials could not demonstrate that requirements and related system design and testing artifacts are properly aligned. Specifically, we attempted on three separate occasions to verify the traceability of system requirements backward to higher-level requirements and forward to lower-level software specifications and test cases, and each time we found that traceability did not exist. Each attempt is discussed here: In November 2008, our analysis of the requirements traceability matrix and the software requirements specification showed significant inconsistencies. For example, the traceability matrix did not include 29 requirements that were included in the software requirements specification. As a result, we did not have an authoritative set of requirements to use to generate a random sample of requirements to trace. Program officials attributed the inconsistencies to delays in updating all the documents to reflect the aforementioned capability gap analysis. 
They also stated that these documents would be updated by December 2008. In December 2008, we used an updated requirements traceability matrix to generate a randomized sample of 60 software requirements specifications and observed a DIO demonstration of the traceability of this sample. However, DIO officials were unable to demonstrate for us that these specifications could be traced backward to higher-level requirements and forward to test cases. Specifically, attempts to trace the first 21 requirements forward to test cases failed, and DIO officials stated that they could not trace the 60 requirements backward because the associated requirements documents were still being updated. According to the officials, 11 of the 21 could not be traced forward because these were implemented prior to 2006 and the related test information was not maintained by the program office but rather was at the development contractor’s site. They added that the remaining 10 either lacked test case information or test results. In February 2009, we used an updated DIO-provided requirements traceability matrix, a capabilities requirement document, and software requirements specification to generate another randomized sample of 60 detailed specifications. We then observed the development contractor’s demonstration of traceability using the contractor’s requirements management tool. Because of time constraints, this demonstration focused on 46 of the 60 requirements, and it showed that adequate traceability still did not exist. Specifically, 38 of the 46 could not be traced backward to higher-level requirements or forward to test cases. This means that about 83 percent of the DRRS specifications (with 95 percent confidence, between 72 and 91 percent) were not traceable. 
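The 83 percent figure is a simple binomial proportion (38 of 46). As a rough illustration only (GAO's published 72 to 91 percent bound reflects its actual sample design, which likely includes adjustments, such as a finite population correction, that this sketch omits), a Wilson score interval for that proportion can be computed as follows:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# 38 of the 46 demonstrated specifications were not traceable.
low, high = wilson_interval(38, 46)
print(f"point estimate: {38/46:.0%}, 95% CI: ({low:.0%}, {high:.0%})")
# → point estimate: 83%, 95% CI: (69%, 91%)
```

The Wilson interval is preferable to the simpler normal approximation at small sample sizes and proportions near 0 or 1, which is why it is used for this sketch.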
Of the 38, 14 did not trace because of incomplete traceability documentation; 5 due to inconsistent traceability documentation; 3 due to requirements not being resident in the tracking tool; and 16 due to no actual development work having been started. In addition, none of the 46 requirements were traceable to the January 2009 concept of operations. According to contractor officials, this is because the newly developed capability requirements document is considered to be a superset of the concept of operations, and thus traceability to this new document is their focus. However, they were unable to demonstrate traceability to the requirements in either the capability requirements document or the concept of operations. Further, we found numerous inconsistencies among the capabilities requirements document, software requirements specification, and the requirements traceability matrix. For example, 15 capabilities requirements listed on the traceability matrix were not listed in the capabilities requirements document, but were listed in the updated software requirements specification, dated February 2009. Further, one requirement listed in the traceability matrix was not listed in either of these documents. One possible reason for these inconsistencies is that the traceability matrix was prepared manually rather than generated automatically from the tool, increasing the probability of these and other discrepancies caused by human error. Another reason cited by program officials is that test results that occurred prior to October 2006 had yet to be fully recorded in the contractor’s tracking tool. DIO and contractor officials attributed the absence of adequate requirements traceability to the ongoing instability in requirements and the magnitude of the effort to update the chain of preexisting and new requirements documentation. They added that they expect traceability to improve as requirements become more stable and the documentation is updated. 
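The bidirectional traceability discussed above is, at bottom, a set of links that can be checked mechanically in both directions. The following sketch, using hypothetical requirement and test identifiers rather than actual DRRS artifacts, flags each software requirement that lacks a backward trace to a higher-level capability or a forward trace to a test case:

```python
# Hypothetical traceability links: capability -> software requirements -> test cases.
# All identifiers are illustrative, not actual DRRS artifacts.
capabilities = {"CAP-1": ["SRS-10", "SRS-11"], "CAP-2": ["SRS-12"]}
tests = {"SRS-10": ["TC-100"], "SRS-12": []}
software_reqs = ["SRS-10", "SRS-11", "SRS-12", "SRS-13"]

def check_traceability(software_reqs, capabilities, tests):
    """Return, for each requirement, the list of missing trace directions."""
    backward = {r for reqs in capabilities.values() for r in reqs}
    findings = {}
    for req in software_reqs:
        gaps = []
        if req not in backward:
            gaps.append("no backward trace to a capability")
        if not tests.get(req):  # missing entry or empty test list
            gaps.append("no forward trace to a test case")
        if gaps:
            findings[req] = gaps
    return findings

for req, gaps in check_traceability(software_reqs, capabilities, tests).items():
    print(req, "->", "; ".join(gaps))
```

An automatically generated matrix of this kind avoids the transcription errors that a manually prepared crosswalk invites, which is one reason requirements management tools typically produce the matrix directly from stored links.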
Regardless, the DIO has invested, and continues to invest, in the development of DRRS in the absence of requirements traceability. Without traceable requirements, the DIO does not have a sufficient basis for knowing that the scope of the design, development, and testing efforts will produce, on time and on budget, a system solution that meets users’ operational needs and performs as intended. As a result, the risk is significant that expensive and time-consuming system rework will be required. Adopting a disciplined process for reviewing and accepting changes to an approved and authoritative baseline set of requirements in light of the estimated costs, benefits, and risks of each proposed change is a recognized best practice. Elements of a disciplined process include (1) formally documenting a requirements change process; (2) adopting objective criteria for considering proposed changes, such as estimated cost or schedule impact; and (3) rigorously adhering to the documented change control process. Since the inception of the program in 2002, DRRS has been developed and managed without a formally documented and approved process for managing changes to system requirements. Further, while requirements management and change control plans were developed in 2006 by the DRRS software development contractor, according to DIO officials, the plans were not adequate. For example, the plans did not detail how DRRS user requirements were collected or how objective factors, such as cost, affected development decisions. To address these problems, the Joint Staff developed what it referred to as a conceptual requirements change control process in February 2008, which was to serve as a basis for the DIO to develop more detailed plans that could be implemented. Eleven months later, in January 2009, the DIO drafted more detailed plans—a DRRS requirements management plan and a DRRS requirements configuration management plan, the latter of which the DIO updated in March 2009. 
Specifically, the draft plans call for new DRRS requirements to be collected using an online tool and reviewed by the DIO to determine whether the requirement constitutes a major change to DRRS. Once approved, the DIO and the contractor are to provide the Battle Staff with a formatted report specifying the anticipated benefit of each new requirement and an initial analysis of the cost and performance impact. The Battle Staff then is to prioritize the requirement based on the DIO’s impact analysis. If the issue cannot be resolved by the Battle Staff, it is to be elevated to the senior oversight bodies (i.e., the General Officer’s Steering Committee and the DRRS Executive Committee). After a requirement has been approved, the software developer may prepare a more detailed “customer acceptance document” that analyzes the potential cost, schedule, and quality impact on DRRS objectives, which is then to be reviewed by the DIO at subsequent Change Control Board meetings. However, according to the user community and the DIO Director, the revised plans have not been submitted to the DRRS community for review and approval. Specifically, they stated that only a proposed process flow diagram was briefed to the Battle Staff, and, according to them, the change control process was still being evaluated. Moreover, the DIO has yet to implement key aspects of its draft plans. For example, the DRRS Chief Engineer stated that until recently, the DIO had continued to accept changes to DRRS requirements that were submitted outside of the designated online tool. In addition, the reports that the Battle Staff are to use in making their requirement change determinations do not include the anticipated benefit and estimated cost or schedule impact of new requirements. Rather, these reports include only the estimated number of hours necessary to complete work on a proposed requirement. 
Moreover, contractor officials and users from the Special Operations Command told us that cost or schedule impacts have rarely been discussed at the Battle Staff or Change Control Board meetings. Our analysis of minutes from change control meetings confirmed this. Furthermore, the DRRS Chief Engineer stated that the customer acceptance documents have only recently been used. Until the DIO effectively controls requirements changes, it increases the risk that needed DRRS capabilities will take longer and cost more to deliver than necessary. Effective system testing is essential to successfully developing and deploying systems like DRRS. According to DOD and other relevant guidance, system testing should be progressive, meaning that it should consist of a series of test events that focus first on the performance of individual system components, then on the performance of integrated system components, followed by system-level tests that focus on whether the system (or a major system increment) is acceptable, interoperable with related systems, and operationally suitable to users. For this series of related test events to be conducted effectively, (1) each test event needs to be executed in accordance with well-defined test plans, (2) the results of each test event need to be captured and used to ensure that problems discovered are disclosed and corrected, and (3) all test events need to be governed by a well-defined test management structure. Despite acquiring and partially deploying a subset of DRRS increments, the DIO cannot demonstrate that it has adequately tested any of these system increments, referred to as system releases and subreleases. 
Specifically, (1) the test events for already acquired, as well as currently deployed and operating, DRRS releases and subreleases were not based on well-defined plans, and test events have not been fully executed in accordance with plans or executed in a verifiable manner, or both; (2) the results of executed test events have not been captured and used to ensure that problems discovered were disclosed to decision makers and ultimately corrected; and (3) the DIO has not established an effective test management structure to include, for example, a clear assignment of test management roles and responsibilities, or a reliable schedule of planned test events. Compounding this absence of test management structures and controls is the fact that the DIO has yet to define how the series of system releases and subreleases relate to its recent restructuring of DRRS increments into a series of 10 modules. Collectively, this means that it is unlikely that already developed and deployed DRRS capabilities can perform as intended and meet user operational needs. Equally doubtful are the chances that the DIO can adequately ensure that yet-to-be developed DRRS capabilities will meet expectations. Key tests required for already developed and partially fielded DRRS increments either did not have well-defined test plans, or these tests have yet to be conducted. According to program documentation, system releases and subreleases have been subjected to what are described as 30-day test cycles, during which: (1) a Software Test Plan is updated if applicable, (2) test procedures are developed and incorporated in the Software Test Description, (3) a series of developmental tests on each release/subrelease is performed, (4) weekly meetings are held to review software defects identified during testing, (5) final test results are summarized within the Software Test Report and Software Version Description, and (6) the release/subrelease is made available to users. 
However, the program office has yet to provide us with the developmental test plans and procedures for each release/subrelease that to date has been either developed or fielded. Instead, it provided us with a Software Test Plan and two Software Test Descriptions that it said applied to two subreleases within release 4.0. However, available information indicates that DRRS subreleases total at least 63, which means that we have yet to receive the test plans and procedures for 61. Further, the test plan that we were provided is generic in nature, meaning that it was not customized to apply specifically to the two subreleases within Release 4.0. Moreover, the plan and procedures lack important elements specified in industry guidance. For example, the test plan does not include a schedule of activities to be performed or defined roles and responsibilities for performing them. Also, the test plan does not consistently include test entrance and exit criteria, a test defect management process, and metrics for measuring progress. Moreover, the DIO has yet to demonstrate that it has performed other key developmental and operational test events that are required before the software is fielded for operational use. According to DIO officials, developmental testing concludes only after system integration testing and system acceptance testing, respectively, are performed. Further, following developmental testing, the Joint Interoperability Test Command (JITC), which is a DOD independent test organization, is to conduct both interoperability and operational testing before the system is deployed and put into production (i.e., used operationally). Although increments of DRRS functionality have been put into production, the DIO has not performed system integration testing, system acceptance testing, or operational testing on any DRRS release or subrelease. 
Further, JITC documentation shows that while an interoperability test of an increment of DRRS functionality known as ESORTS was conducted, this test did not result in an interoperability certification. According to JITC and Joint Staff officials, this was because the DIO did not address JITC’s identified limitations to the program’s Information Support Plan, which identifies essential information-exchange sharing strategies between interdependent systems that are needed for interoperability certification. Without interoperability certification, the ability of the DRRS to exchange accurate and timely readiness data with other critical systems, such as the Joint Operation Planning and Execution System, cannot be ensured. Similarly, while DIO officials stated that acceptance testing has occurred for one increment of DRRS functionality known as SORTSREP, the DIO does not have either a finalized acceptance test plan or documented test results. Furthermore, the integrated master schedule (last updated in April 2009) shows that acceptance testing is not to occur until the July/August 2009 time frame, which is about 15 months later than originally envisioned. Moreover, this delay in acceptance testing has in turn delayed interoperability and operational testing by 16 months (May/June 2008 to September/November 2009), according to the latest schedule. Program officials attributed the delays to Marine Corps and Air Force concerns about the quality of SORTSREP. Until the DIO has effectively planned and executed the series of tests needed to demonstrate the readiness of DRRS increments to operate in a production environment, the risk of fielded system increments not performing as intended and requiring expensive rework to correct will be increased, and DOD will continue to experience delays in delivering mission-critical system capabilities to its readiness community. 
Available results of tests performed on already developed and at least partially deployed DRRS releases/subreleases show that the test results have not been effectively captured and analyzed, and have not been fully reported. Moreover, test results for other releases/subreleases do not exist, thus limiting the value of any testing that has been performed. According to relevant guidance, effective system testing includes recording the results of executing each test procedure and test case as well as capturing, analyzing, correcting, and disclosing to decision makers problems found during testing (test defects). It also includes ensuring that test entry and exit criteria are met before beginning and ending, respectively, a given test event. The DIO does not have test results for all developed and tested DRRS releases and subreleases. Specifically, program officials provided us with the Software Test Reports and Software Version Descriptions that, based on program documentation, represent the full set of test results for three subreleases and a partial set of test results for 40 subreleases within releases 1.0, 3.0, and 4.0. However, as noted earlier, DRRS subreleases total at least 63, which means that test reports and results for at least 20 subreleases do not exist. Moreover, the test reports and version descriptions that we received do not consistently include key elements provided for in industry guidance, such as a documented assessment of system capabilities and limitations, entrance/exit criteria status, an assessment as to whether the applicable requirements/thresholds were met, and unresolved defects and applicable resolution plans. This information is important because it assists in determining and disclosing to decision makers current system performance and efforts needed to resolve known problems, and provides program officials with a needed basis for ensuring that a system increment is ready to move forward and be used. 
Without this information, the quality and readiness of a system is not clear. Furthermore, the DIO does not have detailed test defect documentation associated with all executed DRRS test events. According to relevant guidance, defect documentation should, among other things, identify each issue discovered, assign each a priority/criticality level, and provide for each a strategy for resolution or mitigation. In lieu of detailed test defect documentation, program officials referred us to the above-mentioned Software Version Descriptions, and stated that additional information is available in an automated tool, known as the ISI BugTracker, which it uses to capture, among other things, defect data. However, these documents do not include the above-cited defect information, and defect data for each test event do not exist in the ISI BugTracker. Compounding the absence and limitations of test results are weaknesses in the program office’s process for collecting such results during test execution. According to relevant guidance, test results are to be collected and stored according to defined procedures and placed under appropriate levels of control. Furthermore, these test results are to be reviewed against the source data to ensure that they are complete, accurate, and current. For DRRS, the program office is following a partially undocumented, manual process for collecting and storing test results and defects that involves a database and layers of documentation. As explained by program officials and documentation, the DIO initially documents defects and completed test case results manually on paper forms, and once a defect is approved by the test lead, it is input into a database. However, the program office does not have written procedures governing the entire process, and thus key controls, such as assigned levels of authority for database read/write access, are not clearly defined. 
Moreover, once the test results and defects are input into the database, traceability back to the original test data for data integrity checks cannot be established because the program office does not retain these original data sets. Program officials acknowledged these internal control weaknesses and stated that they intend to adopt a new test management tool that will allow them to capture test cases, test results, and test defects in a single database. Furthermore, the DIO’s process for analyzing and resolving test defects has limitations. According to relevant guidance and the draft SORTSREP Test and Evaluation Master Plan (TEMP), defects should be analyzed and prioritized. However, the program office has not established a standard definition for defect priority levels identified during testing. For example, the various release/subrelease test reports (dated through January 2009) prioritize defects on a scale of 1-3, where a priority 2 means critical but with a viable workaround. In contrast, the SORTSREP TEMP (dated January 2009) prioritizes defects on a scale of 1-5, where a priority 2 means an error that adversely affects the accomplishment of an operational or mission-essential function in accordance with official requirements so as to degrade performance and for which no alternative workaround solution exists. By not using standard priority definitions for categorizing defects, the program office cannot ensure that it has an accurate and useful understanding of the scope and magnitude of the problems it is facing at any given time, and it will not know if it is addressing the highest priority issues first. In addition, the DIO has not ensured that critical defects are corrected prior to concluding a given test event. According to relevant guidance and the draft SORTSREP TEMP, all critical and high defects should be resolved prior to the conclusion of a test event, and all test results should be reviewed for validity and completeness. 
However, the DRRS release/subrelease test reports show that the DIO concluded five test events even though each had at least 11 open critical defects (priority 1 defects with no workaround). Moreover, these numbers of open critical defects are potentially higher because they do not include defects for which an identified solution failed during regression testing, nor defects that were dismissed because program officials were unable to recreate them. Until the DIO adequately documents and reports the test results, and ensures that severe problems discovered are corrected prior to concluding a given test event, the probability of incomplete test coverage and insufficient and invalid test results is increased, thus unnecessarily increasing the risk of DRRS not meeting mission needs or otherwise not performing as intended. The DIO does not have an effective test management structure, to include a well-defined overall test management plan that clearly assigns test management roles and responsibilities, a designated test management lead and a supporting working group, and a reliable schedule of planned test events. According to relevant guidance, these aspects of test management are essential to adequately planning, executing, and reporting a program’s series of test events. Although the program has been underway for 8 years, it did not have an overarching DRRS TEMP until very recently (February 2009), and this plan is still in draft and has yet to be approved. Further, this draft TEMP does not clearly define DRRS test management roles and responsibilities, such as those of the test manager, and it does not include a reliable schedule of test events that reflects the program’s recent restructuring of its software releases/subreleases into 10 modules. According to DIO officials, they recently decided not to approve this overarching TEMP. 
Instead, they said that they now intend to have individual TEMPs for each of the recently defined 10 modules, and to have supporting test plans for each module’s respective developmental and operational test events. According to program documentation, three individual TEMPs are under development (i.e., SORTSREP tool and the Mission Readiness and Readiness Review modules). However, drafts of these TEMPs also do not clearly define test entrance and exit criteria, test funding requirements, an integrated test program schedule, and the respective test management roles and responsibilities. For example, while the draft SORTSREP TEMP identifies the roles and responsibilities of some players, such as the test manager, the personnel or organization that is to be responsible is not always identified. In addition, while the various players in the user community are identified (i.e., military services, combatant commands), their associated roles or responsibilities are not. Furthermore, the DIO has yet to designate a test management lead and establish an effective test management working group. According to relevant guidance, test management responsibility and authority should be assigned to an individual, and this individual should be supported by a working integrated product team that includes program office and operational testing representatives. Among other things, the working integrated product team is to develop an overall system test strategy. However, DIO officials told us that the test manager position has been vacant, and this position is now being temporarily filled by the program’s chief engineer, who is a contractor. Furthermore, although DRRS system development began prior to 2004, a charter for a test and evaluation working integrated product team was not issued until February 2009. According to DIO officials, the delay in establishing the team has not had any impact because of corresponding delays in finalizing the program’s overall test strategy. 
However, this statement is not consistent with the Defense Acquisition Guidebook, which states that two of the key products of the working integrated product team are the program’s test strategy and TEMP. Further, JITC officials stated that the lack of a test manager and an active test and evaluation working integrated product team has reduced the effectiveness of DRRS testing activities. As a result, they stated that they have had to compensate by conducting individual meetings with the user community to discuss and receive documentation to support their operational and interoperability test planning efforts. Moreover, the DIO has yet to establish a reliable schedule of planned test events. For example, the schedule in the TEMPs is not consistent with either the integrated master schedule or the developmental test plans. Specifically, the draft SORTSREP TEMP (last updated in January 2009) identifies SORTSREP developmental testing as occurring through January 2009 and ending in early February 2009, while the integrated master schedule (last updated in April 2009) shows SORTSREP development testing as occurring in the July/August 2009 time frame. In addition, while program officials said that development testing for SORTSREP has occurred, the associated development test plans (e.g., system integration and system acceptance test plans) had no established dates for test execution, and are still in draft. As another example, a module referred to as “Mission Readiness” had no established dates for test execution in its TEMP, and while program documentation indicates that this module completed development testing in December 2008, the associated development test plans (e.g., system integration and system acceptance test plans) do not exist. In addition, the DIO has yet to define in its draft TEMPs how the development and testing to date of at least 63 subreleases relate to the development and testing of the recently established 10 system modules. 
According to Joint Staff and JITC officials, they do not know how the releases/subreleases relate to the modules, and they attributed this to the lack of an approved description for each module that includes what functionality each is intended to provide. Furthermore, the high-level schedule in the TEMP does not describe how the test events for the DRRS releases/subreleases that have already been developed and deployed relate to the development test efforts planned for the respective modules. These problems in linking release/subrelease test events to module test events limit the ability of the DIO and JITC to leverage the testing already completed, which in turn will affect the program’s ability to meet cost, schedule, and performance expectations. Collectively, the weaknesses in this program’s test management structure increase the chances that the deployed system will not meet certification and operational requirements, and will not perform as intended. The success of any program depends in part on having a reliable schedule that defines, among other things, when work activities will occur, how long they will take, and how they are related to one another. As such, the schedule not only provides a road map for the systematic execution of a program, but also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. From its inception in 2002 until November 2008, the DIO did not have an integrated master schedule. Moreover, the only milestone that we could identify for the program prior to November 2008 was the date that DRRS was to achieve full operational capability, which was originally estimated to occur in fiscal year 2007, but later slipped to fiscal year 2008 and then fiscal year 2011, and is now fiscal year 2014—a 7-year delay. 
In addition, the DRRS integrated master schedule that was developed in November 2008, and was updated in January 2009 and again in April 2009 to address limitations that we identified and shared with the program office, is still not reliable. Specifically, our research has identified nine practices associated with developing and maintaining a reliable schedule. These practices are (1) capturing all key activities, (2) sequencing all key activities, (3) assigning resources to all key activities, (4) integrating all key activities horizontally and vertically, (5) establishing the duration of all key activities, (6) establishing the critical path for all key activities, (7) identifying float between key activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations to determine the dates for all key activities. However, the program’s latest integrated master schedule does not address three of the practices and only partially addresses the remaining six. For example, the schedule does not establish a critical path for all key activities, does not include a schedule risk analysis, and is not being updated using logic and durations to determine the dates for all key activities. Further, it does not fully capture, sequence, and establish the duration of all key work activities; fully assign resources to all key work activities; fully integrate all of these activities horizontally and vertically; and fully identify the amount of float—the time that a predecessor activity can slip before the delay affects successor activities—between these activities. These practices are fundamental to producing a sufficiently reliable schedule baseline that can be used to measure progress and forecast slippages. (See table 3 for the results of our analyses relative to each of the nine practices.) 
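Two of these practices, establishing a critical path and identifying float, are mechanical once activity durations and dependencies have been captured. The following sketch runs the standard forward and backward passes of the critical path method over a small hypothetical activity network (not the actual DRRS schedule):

```python
# Hypothetical activities: name -> (duration, predecessors). Not the DRRS schedule.
# Activities are listed so that every predecessor appears before its successors.
activities = {
    "design": (4, []),
    "build":  (6, ["design"]),
    "test":   (3, ["build"]),
    "docs":   (2, ["design"]),
    "deploy": (1, ["test", "docs"]),
}

def critical_path(activities):
    # Forward pass: earliest start (es) and earliest finish (ef) for each activity.
    es, ef = {}, {}
    for name, (dur, preds) in activities.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    project_end = max(ef.values())
    # Backward pass: latest start (ls) and latest finish (lf), walked in reverse.
    ls, lf = {}, {}
    for name in reversed(list(activities)):
        dur, _ = activities[name]
        successors = [s for s, (_, ps) in activities.items() if name in ps]
        lf[name] = min((ls[s] for s in successors), default=project_end)
        ls[name] = lf[name] - dur
    # Float: how far an activity can slip before delaying a successor.
    # Zero float means the activity is on the critical path.
    floats = {name: ls[name] - es[name] for name in activities}
    return floats, project_end

floats, end = critical_path(activities)
print("project duration:", end)
for name, f in floats.items():
    print(f"{name}: float={f}" + ("  <- critical" if f == 0 else ""))
```

In this example the critical path is design, build, test, deploy (14 time units), and "docs" carries 7 units of float; a schedule risk analysis would then perturb the durations to estimate how likely the end date is to hold.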
The limitations in the program’s latest integrated master schedule, coupled with the program’s 7-year slippage to date, make it likely that DRRS will incur further delays. Compounding these limitations is the considerable concurrency in the key activities and events in the schedule associated with the 10 recently identified system modules (see fig. 2). For example, in 2010 alone, the program office plans to complete development testing on 2 modules and operational testing on 3 modules, while also reaching initial operational capability on 3 modules and full operational capability on 2 modules. By way of comparison, the program office had almost no concurrency across a considerably more modest set of activities and events over the last 5 years, but nevertheless has fallen 7 years behind schedule. As previously reported, such significant overlap and concurrency among major program activities can create contention for limited resources and thus introduce considerable cost, schedule, and performance risks. In addition, the schedule remains unstable as evidenced by the degree of change it has experienced in just the past few months. For example, the January 2009 schedule had a full operational capability milestone of October 2011. By contrast, the April 2009 schedule has a December 2013 milestone (see fig. 3 below). Moreover, some milestones are now to occur much earlier than they were a few months ago. For example, the January 2009 schedule shows initial operational capability for “readiness reviews” to be June 2010. However, the April 2009 schedule shows that this milestone was attained in August 2007. Overall, multiple milestones for four modules were extended by at least 1 year, including two milestones that were extended by more than 2 years. 
Such change in the schedule in but a few months suggests a large degree of uncertainty, and illustrates the importance of ensuring that the schedule is developed in accordance with best practices. As we have previously reported, effective human capital management is an essential ingredient to achieving successful program outcomes. Among other things, effective human capital management involves a number of actions to proactively understand and address any shortfalls in meeting a program’s current and future workforce needs. These include an assessment of the core competencies and essential knowledge, skills, and abilities needed to perform key program management functions, an inventory of the program’s existing workforce capabilities, and an analysis of the gap between the assessed needs and the existing capabilities. Moreover, they include explicitly defined strategies and actions for filling identified gaps, such as strategies for hiring new staff, training existing staff, and contracting for support services. The DIO is responsible for performing a number of fundamental DRRS program management functions. For example, it is responsible for acquisition planning, performance management, requirements development and management, test management, contractor tracking and oversight, quality management, and configuration management. To effectively perform such functions, program offices, such as the DIO, need to have not only well-defined policies and procedures and support tools for each of these functions, but also sufficient human capital to implement the processes and use the tools throughout the program’s life cycle. 
Without sufficient human capital, it is unlikely that a program office can effectively perform its basic program management functions, which in turn increases the risk that the program will not deliver promised system capabilities and benefits on time and on budget. The DIO does not currently have adequate staff to fulfill its system acquisition and deployment responsibilities. In particular, the DIO is staffed with a single full-time government employee—the DIO Director. All other key program office functions are staffed by either contractor staff or staff temporarily detailed, on an as-needed basis, from other DOD organizations (referred to as “matrixed” staff). As a result, program management positions that the DIO itself has identified as critical to the program’s success, such as configuration manager and security manager, are being staffed by contractors. Moreover, these contractor staff report to program management positions also staffed by contractors. Other key positions, such as those for performing acquisition management, requirements development and management, and performance management, have not even been established within the DIO. Furthermore, key positions, such as test manager, are vacant. These human capital limitations were acknowledged by the DRRS Executive Committee in November 2008. According to DIO and contractor officials, they recognize that additional program management staffing is needed. They also stated that while DRRS has been endorsed by USD (P&R) leadership and received funding support, past requests for additional staff have not been approved by USD (P&R) due to other competing demands for staffing. Further, DIO officials stated that the requests for staff were not based on a strategic gap analysis of its workforce needs and existing capabilities. Specifically, the program has not assessed its human capital needs and the gap between these needs and its onboard workforce capabilities. 
Until the program office adopts a strategic, proactive approach to managing its human capital needs, it is unlikely that it will have an adequate basis for obtaining the people it needs to effectively and efficiently manage DRRS. In addition to the contacts named above, key contributors to this report were Michael Ferren (Assistant Director), Neelaxi Lakhmani (Assistant Director), April Baugh, Mathew Butler, Richard J. Hagerman, Nicole Harms, James Houtz, John Lee, Stephen Pruitt, Terry Richardson, Karen Richey, Karl Seifert, and Kristy Williams.
The Department of Defense (DOD) reports data about the operational readiness of its forces. In 1999, Congress directed DOD to create a comprehensive readiness system with timely, objective, and accurate data. In response, DOD started to develop the Defense Readiness Reporting System (DRRS). After 7 years, DOD has incrementally fielded some capabilities, and, through fiscal year 2008, reported obligating about $96.5 million. GAO was asked to review the program, including the extent to which DOD has (1) effectively managed and overseen DRRS acquisition and deployment and (2) implemented features of DRRS consistent with legislative requirements and DOD guidance. GAO compared DRRS acquisition disciplines, such as requirements development, test management, and DRRS oversight activities, to DOD and related guidance, and reviewed the system's current and intended capabilities relative to legislative requirements and DOD guidance. We did not evaluate DOD's overall ability to assess force readiness or the extent to which readiness data reflects capabilities, vulnerabilities, or performance issues. DOD has not effectively managed and overseen the DRRS acquisition and deployment, in large part because of the absence of rigorous and disciplined acquisition management controls and an effective governance and accountability structure for the program. In particular, system requirements have not been effectively developed and managed. For example, user participation and input in the requirements development process was, until recently, limited, and requirements have been experiencing considerable change, are not yet stable, and have not been effectively controlled. In addition, system testing has not been adequately performed and managed. 
For example, test events for already acquired system increments, as well as currently deployed and operating increments, were not based on well-defined plans or structures, and test events have not been executed in accordance with plans or in a verifiable manner. Moreover, DRRS has not been guided by a reliable schedule of work to be performed and key activities to occur. These program management weaknesses can, in part, be attributed to long-standing limitations in program office staffing and program oversight and accountability. Despite being a DOD-wide program, until April 2009, DRRS was not accountable to a DOD-wide oversight body, and it was not subject to DOD's established mechanisms and processes for overseeing business systems. Collectively, these acquisition management weaknesses have contributed to a program that has fallen well short of expectations, and is unlikely to meet future expectations. DOD has implemented DRRS features that allow users to report certain mission capabilities that were not reported under the legacy system, but these features are not fully consistent with legislative requirements and DOD guidance; and DOD has not yet implemented other features. The geographic combatant commands are currently reporting their capabilities to execute most of their operations and major war plans in DRRS, and DOD is reporting this additional information to Congress. However, because DRRS does not yet fully interface with legacy systems to allow single reporting of readiness data, the military services have not consistently used DRRS's enhanced capability reporting features. For example, as of May 2009, the Army and Navy had developed interfaces for reporting in DRRS, while the Marine Corps required units to only report in their legacy system. Recently, the Marine Corps also began developing an interface and has done limited reporting in DRRS. 
In addition, DRRS has not fully addressed the challenges with metrics that led Congress to require a new readiness reporting system. DRRS metrics are less objective and precise, and no more timely than the legacy system metrics. Users have also noted that DRRS lacks some of the current and historical data and connectivity with DOD's planning systems necessary to manage and deploy forces. Until these limitations are fully addressed, DRRS will not have the full complement of features necessary to meet legislative and DOD requirements, and users will need to rely on legacy reporting systems to support mission-critical decisions.
Medicare’s fee-for-service health care program consists of two parts—A and B. Part A covers inpatient hospital, skilled nursing facility, hospice, and certain home health services. Part B covers physician services, diagnostic tests, and related services and supplies. Medicare providers, on behalf of their beneficiaries, can appeal denied claims for services. Currently, there are four levels of administrative appeal (see fig. 1). Appeals for denied Part A and Part B Medicare claims currently follow similar, but not identical, paths. At the first level of appeal, the process is the same for both Part A and Part B denials. The Medicare claims administration contractor reexamines the claim along with any additional documentation provided by the appellant. At this level, in general, only written materials are reviewed; however, Part B appellants may request telephone hearings. If the appellant of a Part B claim is dissatisfied with a decision at the first level, he may proceed to the second level of review, conducted by the Medicare contractor. At this stage, the file is once again reviewed, including any additional documentation submitted by the appellant, and a hearing may be conducted. However, there is no comparable second level of review by Medicare contractors of Part A appeals. Appellants of both Part A and Part B denied claims who remain dissatisfied with the decisions rendered by Medicare contractors may appeal to the third level—SSA’s OHA—where appeals are adjudicated by ALJs. At this level, appellants have the option of attending a hearing conducted by telephone, by videoconference, or in person. OHA’s ALJs adjudicated the appeal of about 122,000 Medicare claims in fiscal year 2003. Should appellants also be dissatisfied with the ALJ’s decision, they can appeal to the MAC. The MAC’s adjudication is the fourth and final level of the administrative appeals process. It is based on a review of OHA’s decision; the MAC does not conduct hearings. 
Appellants who have had their appeals denied at all levels of the administrative appeals process have the option of appealing to a federal district court. In addition to preparing for the transition of SSA’s appeals workload, HHS continues to plan numerous administrative and structural changes required by the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA). Most of these changes have not yet been implemented, including the finalization of new regulations. Among other things, BIPA mandated shorter time frames; expedited procedures for processing Medicare appeals at all levels; and the establishment of new contractors, known as qualified independent contractors (QIC). Contracts for QICs have not yet been awarded, but once QICs become operational, they will provide a new second level of adjudication for Part A appeals and replace the existing second level of the appeals process for Part B claims. As noted earlier, figure 1 shows the appeals bodies that are currently involved in Medicare appeals. It also shows those that will be responsible for resolving Medicare appeals once BIPA has been fully implemented and OHA’s workload has been transferred to HHS. The transfer of the appeals workload from SSA to HHS is not a new proposal. As early as 1988, while SSA was still a part of HHS, discussion regarding the transfer of this function was already under way and, throughout the years, the development of potential transfer plans and strategies has continued. Discussions were active as late as 2003, culminating in SSA’s decision not to seek funding for Medicare appeals in its fiscal year 2004 budget request. Instead, HHS requested and received funding to cover the cost in its fiscal year 2004 budget. Under a reimbursable agreement with CMS, SSA will continue to hear Medicare appeals until September 30, 2005. 
In response to MMA’s mandate to transfer the workload, SSA and HHS created an interagency team that drafted the required transfer plan. The team has continued to meet to deliberate various aspects of the plan and discuss its implementation. Representatives from both agencies have stressed their commitment to ensuring a successful transfer of the Medicare appeals process from SSA to HHS. The plan indicates that HHS will begin to exercise adjudicative authority for Part A and Part B ALJ appeals that are received on or after July 1, 2005. The plan notes that this schedule is being adopted to allow SSA to concentrate on reducing its pending workload between July 1, 2005, and September 30, 2005, and to permit HHS to prepare for and begin conducting ALJ hearings. According to MMA, the plan is required to provide information regarding 13 key elements. For purposes of this report, we have grouped these elements into six broader categories—timetable, scope of work, adjudication guidance, operational matters, staffing, and oversight. Table 1 lists these six categories and related elements and identifies the act’s requirements for each element. We found that HHS’s and SSA’s plan is too vague to serve as a blueprint for the transfer’s implementation. We evaluated the plan’s 13 elements, mandated by MMA, and grouped them into six categories to evaluate whether the plan was sufficient to ensure a smooth and timely transition. We found that in virtually every category, the information contained in the plan, as well as documentation provided to us in the course of our work, lacked sufficient detail to ensure that HHS will achieve a smooth and timely transfer. Further, the lack of detail and the fact that some aspects of the plan have not yet been finalized raise serious questions as to whether HHS and SSA have considered the breadth of challenges inherent in the transfer. Our review suggests that the plan’s deficiencies, if not corrected, may compromise service to appellants. (App. 
I contains a summary evaluation of our analysis of the plan.) Transferring SSA’s annual workload of appeals—about 122,000 claims in fiscal year 2003—to HHS requires the development of many interrelated components. For example, deciding where ALJs should be geographically located affects hiring and training plans and the need for office space. Because the transfer date is approaching, many of these activities must be completed simultaneously so that HHS can ensure that service to appellants will not be disrupted. With the exception of the development of a case tracking system, the plan contains few milestones for completing tasks. Some of the few dates that are mentioned merely reflect the MMA-imposed deadlines between July 1, 2005, and October 1, 2005, without noting interim milestones. For example, there are no milestone dates associated with the vital tasks of producing training materials for newly hired ALJs or locating office space for ALJs to conduct hearings. Other elements of the plan are addressed without ever mentioning dates, such as ensuring the independence of ALJs and establishing performance standards for them. Moreover, the plan does not assign responsibility to any group, office, or individual to perform the necessary tasks to execute key elements of the plan. In our view, the level of complexity associated with the transfer would warrant the development of a detailed schematic outlining all of the steps that need to be taken, as well as the corresponding dates for completing these steps, to ensure that the plan could be successfully executed. In response to our inquiries, the transfer team reported that it did not prepare a project plan, nor could it supply information about ambiguous or absent milestones. Without specific milestones, HHS does not have a management tool for determining whether the general dates contained in the plan can be met as scheduled. 
The transfer plan also lacks a contingency component, to be used in the event that something prevents the transfer from occurring as scheduled. Given the importance of having a system in place for adjudicating appeals, we view this as a considerable oversight. Failure to successfully implement even one element of the plan, such as the development of a geographic distribution plan to ensure appellants appropriate access to ALJs throughout the country, could derail the transfer. Although this is a critical element of the plan, there is no contingency provision. HHS officials maintained that they are confident the transfer will be executed in a timely manner, eliminating the need for a contingency plan. However, they indicated that if necessary, they could renew their reimbursable agreement with SSA to adjudicate Medicare appeals for another year. In contrast, SSA officials emphasized to us that responsibility for all Medicare appeals will pass, under MMA, to HHS on October 1, 2005. According to them, it is not a given that SSA will have the capability, or even the legal authority as of that date, to adjudicate Medicare appeals under any arrangement with HHS. In our view, this is the type of issue a contingency plan could address. In agency comments, both SSA and HHS reported that they have identified a mechanism for HHS to continue to use SSA ALJs to adjudicate Medicare appeals after the date of the transfer, if necessary. However, neither agency provided details concerning this mechanism in their comments. As a result, we are unable to evaluate it. Understanding the size of the appeals workload is a critical first step in planning for the transfer because other decisions, such as the number of ALJs needed to complete the adjudications, are predicated on it. We found that the transfer plan does not present a thorough analysis of the expected workload and the costs to transfer the function and adjudicate appeals. 
Further, the plan is based on unreliable staff and cost data, which undermine the validity of the plan’s projections. MMA mandated that certain external factors be incorporated into the plan’s analyses, such as changes in the number of appeals and the effect of statutory changes. However, the plan did not contain a detailed discussion of the implications of these factors on workload and costs. HHS’s plan to initially hire 50 ALJs is based on information from OHA that it uses an average of 46 ALJs to adjudicate Medicare appeals each month. However, SSA does not have a dedicated corps of ALJs who are exclusively devoted to hearing Medicare appeals, and based its estimate on the average amount of time ALJs spend doing Medicare work. OHA has no formal timekeeping system for its ALJs, and instead, the chief of each local hearing office estimates the amount of time ALJs spend each month adjudicating Medicare appeals. Individual ALJs do not provide their own time estimates, and the information supplied by each local office is not otherwise verified. The transfer team did not independently determine the accuracy of this information, despite the plan’s heavy reliance on it. Despite the fact that MMA requires the plan to address the number of ALJs and support staff required to hear Medicare appeals now and in the future, the plan limits itself to the present. It does not specifically address how the implementation of recent statutory changes to Medicare may affect the appeals workload and increase the need for personnel. For example, the plan does not address the potential impact of additional appeals resulting from MMA’s new prescription drug benefit. Further, the largest impact may result from the implementation of BIPA’s changes, which will not become effective until the QICs are fully established—now slated for October 2005. BIPA’s changes to the appeals process were to apply to appeals of claims denied on or after October 1, 2002. 
However, CMS issued a ruling on October 7, 2002, that held that the majority of BIPA’s provisions apply only to appeals adjudicated by QICs. Because QICs are not yet operational, the appeals process is currently operating in accordance with regulations established prior to BIPA’s passage. The establishment of the QICs and new regulations implementing BIPA’s provisions are now expected to occur simultaneously with the plan to transfer the OHA workload. As a result, it will be HHS’s ALJs who will be expected to comply with BIPA’s shorter time frames for processing appeals. While their OHA colleagues, who faced no deadlines, took an average of 327 days to complete a Medicare appeal in fiscal year 2003, HHS ALJs will be expected to render decisions much more quickly—within 90 days. The plan is silent as to how HHS’s new corps of ALJs will meet BIPA’s time frames by completing the same workload in less than one-third the time taken by OHA. In addition, the plan states that efficiencies will be gained from hiring ALJs and staff who are specialized in Medicare, increasing reliance on teleconferences and videoconferences to minimize travel, and improving the management of appeals cases. While efficiencies may be gained in the long term, we found that the plan did not provide a sound quantitative basis to support HHS’s claim that efficiencies would mitigate demand for more resources in the first year of operation. Further, the plan does not contain a contingency provision to address the possibility that greater efficiencies may not be achieved. In our view, this is significant as, in the short term, HHS may experience a period of diminished efficiency while new staff—both ALJs and support personnel—take time to attend training, develop expertise with Medicare issues, and gain familiarity with their new organization and infrastructure. 
Element 3: Cost Projections and Financing
The plan notes that $129 million was requested for fiscal year 2005 for Medicare appeals reforms, which includes start-up funds for HHS’s ALJ unit; funds to reimburse SSA for continuing to process Medicare appeals; and funds to implement other BIPA reforms, as amended by MMA. In fiscal year 2004, $50 million was intended for processing appeals submitted to ALJs. HHS officials told us that they anticipate requiring the same amount for fiscal year 2005. The $50 million for processing appeals is based upon SSA’s agreement to adjudicate approximately 50,000 cases, at a cost of $1,000 each, in fiscal year 2004. We learned that HHS expects to use $8 million in fiscal year 2005 to meet start-up costs for the transfer of ALJ functions. Although the plan notes that start-up funds will allow HHS to begin hiring attorneys and other staff, it makes no mention of office space, equipment, and other infrastructure development costs. Most of the remaining balance is expected to be used for establishing QICs. We also noted that the plan does not provide cost projections for years subsequent to 2005, as required by MMA. Office of Management and Budget officials, who are responsible for approving HHS’s requests, and HHS officials could not provide specific budgetary details related to the plan. Moreover, HHS’s estimate of the costs of adjudicating Medicare appeals in fiscal year 2005 is based on its assumption that those costs will mirror what it is paying SSA to resolve appeals this fiscal year under its reimbursable agreement. However, OHA reported that the actual costs of adjudicating these appeals exceeded the amount it was being paid. After adjusting for inflation and overhead, OHA officials estimated that their actual cost in fiscal year 2003—the most current data available—was closer to $1,300 per case. MMA allows for increased financial support to ensure that the HHS ALJ unit meets its workload demands. 
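The per-case figures cited above can be checked with simple arithmetic. The sketch below restates only the report's numbers (a $50 million agreement covering roughly 50,000 cases, against OHA's estimated actual cost of about $1,300 per case); the implied shortfall is our illustration of what those numbers suggest, not a projection from the plan.

```python
# Arithmetic check of the cost figures cited in the report (illustrative only).
budget = 50_000_000      # funds for processing appeals submitted to ALJs
cases = 50_000           # cases SSA agreed to adjudicate under the agreement
budgeted_per_case = budget / cases        # the plan's per-case assumption
actual_per_case = 1_300  # OHA's estimate of actual fiscal year 2003 cost per case
implied_shortfall = cases * actual_per_case - budget

print(budgeted_per_case)   # 1000.0
print(implied_shortfall)   # 15000000
```

At $1,300 per case, the same 50,000-case workload would cost about $65 million, roughly $15 million more than budgeted, which underscores why the absence of a contingency provision matters.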
However, should additional funds be needed, the plan does not include a contingency provision that defines criteria and other relevant measures to justify future requests for increased financial support. The timely issuance of regulations governing the appeals process will have a significant effect on the implementation of the transfer plan. Without regulations implementing the provisions of BIPA, and more recently MMA, the appeals process will lack guidance critical for its operation. Nonetheless, the plan does not address time frames for establishing these regulations nor does it discuss what actions will be taken should the regulations not be finalized by the time of the transfer. It appears, however, that no regulations will be needed regarding the use of MAC decisions as binding precedents on lower levels of the appeals process, including ALJs, at least in the near future. The plan has addressed this matter by retaining current policy, which allows ALJs and the other appeals bodies to consider these decisions as guidance, but does not require them to be viewed as binding precedents. However, the plan suggests that this decision may only be for the short term. To implement MMA’s provisions to transfer SSA’s workload to HHS, regulations will need to be drafted and finalized by October 1, 2005—the date that the transfer is required to be complete. As required by MMA, the plan acknowledges the need for specific regulations and mentions that regulations will be developed in several areas, such as providing appellants the opportunity to file appeals electronically and a reliance on videoconferences in lieu of in-person hearings. However, the plan is silent on the anticipated time frames for issuing these regulations and does not include interim dates to ensure they are finalized on time. In the absence of regulations, it is not clear how appellants will be assured of having sufficient access to ALJs. 
For example, without regulations it is uncertain what forum will be used to provide information to beneficiaries and providers, how access to this information will be provided, and what will be used as the basis for this information. The plan also does not address whether there will be a need to issue additional regulations on other aspects of the transfer, such as procedures for hiring ALJs, initiating a training program, developing ALJ performance standards, and identifying opportunities for HHS and SSA to share resources. Given the ambiguity in the plan, it is unclear how the required transfer of the appeals function to HHS could proceed on a timely basis. Moreover, although the plan recognizes that regulations implementing most of BIPA’s provisions have not been finalized, it does not address the impact of this situation. This is particularly troubling because, according to CMS, the implementation of QICs will be delayed if final regulations are not issued by November 2004. As a result, HHS may be compelled to develop and operate two separate processing systems—one that follows current rules, and another that complies with BIPA’s mandated deadlines and other requirements.
Element 5: Feasibility of Precedential Authority
In response to an MMA requirement to address precedential authority, the plan makes clear that MAC decisions will not be binding on lower levels of the appeals process, including ALJs. The plan acknowledges that precedential authority may contribute to more consistent decisions by ALJs. However, it concludes that the risk of an inaccurate or incomplete interpretation of an agency ruling could result in greater problems when the same issue is raised more clearly or in different circumstances. The plan therefore concludes that the risks inherent in giving the MAC precedential authority outweigh the benefits. 
The plan also suggests that high-level decisions could serve as guidance to the lower levels in the process, without having the full force of precedent. Although the plan indicates that HHS will reevaluate its stand on the merits of granting binding precedential authority to MAC decisions, it does not specify what might contribute to a change in its current position on the issue. Absent or insufficient details and vague descriptions regarding critical operational aspects of the transfer prevented us from fully evaluating these components and, in our view, put the successful implementation of the transfer at risk. The lack of a geographic distribution plan for HHS ALJs alone threatens to undermine efforts to accomplish the transfer in a timely manner. Beyond this, the lack of specific plans to ensure access to ALJs nationwide and to share resources with SSA to enhance appellant access may well compromise service to appellants. Finally, although the plan outlines important details concerning the establishment of a new case tracking system, its implementation is linked to the establishment of the QICs in July 2005, making a current evaluation impractical.
Element 6: Geographic Distribution
While the plan addresses the topic of the future geographic distribution of ALJs, it does not include the steps to be taken to ensure that appellants across the country will have timely access to such judges, as MMA requires. Rather than detailing a specific geographic distribution strategy, the transfer plan indicates that a central hearing support office will be located in the Baltimore, Maryland and Washington, D.C., metropolitan area and that a field structure will be established. Because many issues relating to the successful implementation of the transfer, such as hiring staff, hinge on the strategy for distributing ALJs throughout the country, its absence from the plan is a serious shortcoming. 
The plan notes that HHS will develop a process for determining the size and location of the field structure and will reach a final decision about the geographic distribution of ALJs by the end of calendar year 2004. However, the plan does not include key information that would enable us to analyze this critical component of the plan, such as the anticipated number of field office locations or the size and resources required for each office. The plan also does not supply information about the number of judges to be housed in each location or details concerning whether certain case processing activities—such as case receipt, research, and preparation for hearings—will be centralized or regionally based.
Element 7: Access to ALJs
MMA required the plan to address the feasibility of electronically filing appeals to the ALJ level. CMS is developing a beneficiary Web site, which, in its pilot at one contractor, allows beneficiaries Internet access to claims information. The plan anticipates that HHS will use this Web site to allow electronic appeals submissions. Although the plan does not discuss when this feature will be available, a CMS official estimated it would not be ready for testing for at least 2 years. HHS is also exploring the possible development of another Internet-based filing system that does not depend on CMS’s beneficiary Web site. MMA also required that the plan address the feasibility of using video- and teleconferencing to provide access to ALJs. Although the plan identifies a variety of sources for providing ALJs and appellants with videoconference access—including SSA, private contractors, and other government agencies—no analysis has been conducted to determine where videoconference sites are needed, where such sites are actually available, and the costs of such services. Moreover, SSA does not expect appellants to travel more than 75 miles to attend hearings, but the plan does not address HHS’s expectations in this regard. 
Appellants in remote areas of the country may be unlikely to find access to videoconference facilities within such a radius. In regard to teleconferences, the plan notes that a small number of appeals are currently conducted in this manner, but more commonly, teleconferences are used to obtain the testimony of expert witnesses. The plan refers to HHS’s willingness to expand its use of teleconferences, where appropriate, but does not define the conditions that would constitute “appropriate” use. Moreover, no analysis has been done to determine what proportion of appellants would actually be interested in having their appeals heard using videoconferences or teleconferences. Several ALJs told us that beneficiaries are often uncomfortable using videoconference facilities and prefer to have their cases heard face-to-face. While appellants have the right to request in-person hearings, the plan does not include an assessment of HHS’s capacity to conduct such hearings. There is no contingency provision to facilitate in-person hearings, should this be appellants’ preference. Further, as a result of changes to the appeals process due to BIPA, hearings by ALJs will provide an appellant’s sole opportunity to be heard in person, making access to them all the more important. Although OHA has been able to accommodate appellants through its network of 10 regional offices and an additional 143 field offices with hearing rooms throughout the United States and Puerto Rico, HHS currently has no available capacity to hear Medicare claims appeals. The plan does not address MMA’s mandate that it include steps for SSA and HHS to share office space, support staff, and other resources. Moreover, it does not include a contingency element should HHS be unable to use SSA resources to complete the Medicare workload. Instead, the plan focuses exclusively on sharing videoconference facilities, but the arrangements for sharing this resource are ambiguous. 
For example, while the plan notes that SSA is willing to share its videoconference sites, it also makes clear that SSA will have priority over the use of the equipment and does not include a protocol for ensuring that HHS will have sufficient and timely access. One SSA official told us the agency anticipates that it will have excess videoconference capacity once it expands its videoconference system. Currently, SSA has 148 videoconference units available but plans to increase this number to 351 units at 302 different sites by 2006. However, the agency has not yet performed an analysis to establish where and when excess capacity is anticipated. Because SSA ALJs schedule their hearings well in advance, HHS ALJs may have difficulty scheduling videoconferences in their localities to meet their 90-day BIPA-mandated deadline. Moreover, even with access to 302 facilities, depending on the location of available equipment, HHS ALJs may have to travel to videoconferences, which could be as time-consuming as traveling to in-person hearings.

Element 9: Case Tracking

The plan addresses the mandate’s directive to develop a unified case tracking system for all appeals levels, and outlines a new tool designed to fulfill the mandate’s requirements—the Medicare Appeals System (MAS). We found that the design and approach to implementing MAS appear reasonable. However, the plan was drafted with the expectation that MAS would be first used by QICs in the summer of 2004. The delay in implementing QICs, which are now not expected to become fully operational until October 2005, has reduced the time available for live testing of the system to determine if it will perform as expected. Currently, HHS is unable to conduct such testing. This delay may leave insufficient time to fully test MAS and make necessary adjustments to the system, but the plan leaves no margin for such an occurrence.
However, should MAS be unavailable at the time of the transfer, CMS has an alternate case tracking system that could be temporarily deployed until the new system becomes operational. The plan lacks a detailed staffing strategy to ensure that HHS can attract both ALJs and support staff by the time of the transfer. MMA required the plan to include steps to hire ALJs, taking into account their expertise in Medicare, and to address training in Medicare laws and regulations. As required by MMA, the plan addresses steps that should be taken to hire ALJs and support staff. It outlines HHS’s intention to hire ALJs from various sources, including OPM’s register of qualified ALJs, the list of retired ALJs who have expressed interest in returning to work and are available for temporary reappointment, and ALJs currently employed and adjudicating administrative appeals at other agencies. However, it does not discuss how HHS will be able to ensure that it can attract the 50 ALJs it plans to hire. Moreover, we expect that it may be difficult for HHS to identify and hire 50 ALJs with Medicare knowledge. For example, OPM’s register, the largest source of new ALJs with 1,300 potential candidates, does not include information indicating whether candidates have Medicare expertise. Similarly, HHS cannot tell which of the 110 retired ALJs on the register of those interested in returning to work have Medicare expertise. And, although ALJs already employed at other agencies may be interested in seeking employment at HHS, few are likely to have knowledge of Medicare rules; even among ALJs currently employed by SSA, the majority focus primarily on disability appeals, and few are likely to have significant Medicare expertise. HHS’s plan to hire ALJs and other professional and administrative staff in a manner that ensures an appropriate geographic distribution is a major staffing consideration.
However, the plan does not address how HHS will incorporate this feature into its hiring plans. Given the lack of such a geographic distribution plan, there is no way for ALJ candidates to know where new positions will be located—which may have a great bearing on their interest. As a result, even the OHA ALJs with Medicare expertise may not be interested in transferring to HHS, if this would require them to relocate. The plan lacks other details concerning HHS’s hiring plans. For example, it is not explicit about whether HHS will hire the 50 ALJs and 200 support staff all at once, or if it intends to conduct several rounds of hiring and training. The plan does not outline who is to be involved in the hiring process and, as of July 2004, HHS had not decided whether a chief judge might be hired first to participate in the hiring of the ALJs and support staff. Finally, the plan does not acknowledge the possibility that HHS may be unable to hire all needed staff by the time of the transfer. By not recognizing this possibility, the plan misses the opportunity to develop critical contingency arrangements. As required by the mandate, the plan describes HHS’s plans to develop a training strategy but, nonetheless, leaves key questions unanswered. Although the plan establishes four broad categories for short-term training, it does not include substantive information on the training’s content. It also lacks other critical information, such as a detailed description of its plans to provide initial training for HHS’s ALJs. While OHA’s ALJ training of new hires lasts 5 weeks, the plan does not describe the duration of HHS’s planned training or the depth of material to be covered. It also does not specify who will be responsible for developing the training curriculum and course materials or presenting the training to new ALJs. 
The plan mentions that HHS is also developing a long-term training strategy, but there are no details for providing ongoing training and refresher classes to ALJs in future years. Even OHA ALJs with Medicare knowledge may need additional training, as some indicated to us that their understanding of the program’s rules is not current. In addition to our concerns regarding the content of this plan element, the lack of a detailed schedule for developing and presenting the new training program raises concerns about HHS’s ability to have an adequately prepared staff to adhere to its plans to begin processing appeals by July 1, 2005. The only date included in HHS’s training schedule indicates that both hiring and training will begin in the second quarter of calendar year 2005—at most, 3 months before the plan anticipates HHS ALJs will begin hearing appeals. This poses a challenging time frame for HHS, especially if its training will mirror OHA’s 5-week program. Given the plan’s timeline, there is little opportunity to pursue alternate training arrangements, should delays occur. Although the plan recognizes the importance of ALJ decisional independence—an element critical to the integrity of the appeals process—it does not specify, organizationally, where ALJs will be housed within HHS, nor does it discuss the safeguards that will be put in place to ensure ALJs are insulated from undue influence from HHS. The plan outlines the circumstances under which performance standards can be applied to ALJs without threatening their independence. However, other than meeting time frames prescribed by law, the plan proposes no standards, nor does it describe the process that might be used to develop such standards.
Element 12: Independence of ALJs

Despite the fact that the independence of ALJs is critical to ensuring due process to appellants, the plan is silent on what steps will be taken to shield ALJs from real or perceived external pressures, including pressure from elsewhere in HHS, which is tasked with overseeing the Medicare program. ALJs throughout the federal government may have to issue rulings against the agencies that employ them. However, since SSA became an independent agency in 1994, OHA ALJs hearing Medicare appeals, as SSA employees, have not been in this position. The plan notes that SSA has a long history of maintaining independence of ALJs. MMA required that the plan provide information on steps to be taken to ensure the independence of ALJs hearing Medicare appeals once this function has been transferred to HHS. However, the plan merely repeats MMA’s requirement—that the HHS ALJ unit will report solely to the Secretary of HHS and that it will be separate from CMS. The plan provides no information about the proposed, new organizational structure, nor does it specify who, in terms of title and duties, will direct and manage the HHS ALJ unit. Furthermore, the plan does not define the relationship of ALJs to other HHS offices, such as CMS and the MAC—with which the ALJ unit will have to communicate and coordinate—or where, organizationally, the ALJ unit will be housed. The plan also does not include standards that either HHS, or the new ALJ unit, could use to evaluate whether the independence of the ALJ unit is being achieved. Similarly, the plan makes no reference to the steps that will be taken to ensure the objectivity of ALJ training. Finally, the plan does not recognize the possibility that the independence of the ALJ unit could be questioned nor does it specify a contingency plan to ensure—and if necessary, restore—the continued independence of ALJs.
Element 13: Performance Standards

The plan addresses the appropriateness of establishing performance standards for ALJs, as required by MMA. Although the plan acknowledges that it is important that ALJs adhere to the new time frames for processing appeals as established by BIPA, it is unclear whether any other performance standards for ALJs will be established. The plan notes that the law allows the imposition of “administrative practices and programming policies that ALJs must follow,” including timeliness of decisions, so long as the agency does not use the guidelines to influence the ALJs’ decisions. In addition, the plan holds that it is not unreasonable to expect a minimum level of efficiency and that ALJs can be disciplined for “good cause,” which may be based on performance or unacceptably low productivity. However, the plan does not discuss whether such guidelines will be imposed, by what means the agency would evaluate a minimum level of efficiency, who would evaluate the judges, and what actions might be taken based on unsatisfactory findings. Similarly, the plan does not include specific steps the agency would take to ensure that any guidelines and performance standards that are imposed would not interfere with ALJ independence. Finally, the plan does not address how ALJs would be evaluated should any new standards be challenged. SSA and HHS have stressed their commitment to ensuring a successful transfer of the ALJ level of the Medicare appeals process from SSA to HHS. Addressing the 13 elements specified in MMA and developing and implementing contingency provisions are key to ensuring that the transition is smooth and that services to appellants are not disrupted.
Although both agencies have stressed that they are continuing to further develop details of the plan, based on the information they have developed thus far, we believe that the plan does not comprehensively address the 13 elements and, thus, seriously jeopardizes a successful and timely transition. For example, the absence of specific milestones, the use of unreliable data, and the lack of an acknowledgement that HHS may ultimately need to develop two separate processing systems to adhere to current practices and those required by BIPA are serious shortcomings. Moreover, the absence of details related to providing appellants access to ALJs, hiring and training staff with expertise in Medicare, and preserving ALJ independence further undermines the plan’s credibility. The plan’s lack of specific details jeopardizes HHS’s ability to begin adjudicating appeals as scheduled. Unless SSA and HHS act quickly to effectively address the 13 elements required by MMA and finalize the transition plan for transferring responsibility for adjudicating Medicare appeals from SSA to HHS, the appeals process could be compromised. To help ensure a smooth and timely transition of the Medicare appeals workload from SSA to HHS, we recommend that the Secretary of HHS and the Commissioner of SSA take steps to complete a substantive and detailed transfer plan. Specifically, we recommend that the Secretary and Commissioner take the following six actions:

Prepare a detailed project plan to include interim and final milestones, individuals or groups responsible for completing key elements essential to the transfer, and contingency plans.

Validate data and perform analyses to support decisions regarding key elements, such as workload, staffing needs, and costs.
Outline a strategy that addresses the possible need for two separate processing systems at HHS—one for appeals that follows the current processing practices and one that complies with BIPA’s time frames and other requirements—in the event that the BIPA provisions establishing the QICs are not implemented as scheduled.

Identify where staff and hearing facilities—including videoconference equipment—are needed as well as opportunities to share staff and office space.

Develop an approach to ensure that ALJs and support staff with Medicare expertise can be hired, and that all staff are adequately trained to process and adjudicate Medicare appeals.

Define the relationship of HHS’s ALJ unit to the other organizations within the department, and identify safeguards that will be established to ensure decisional independence.

We provided a draft of this report to both SSA and HHS for their review. In its written comments, HHS agreed with all but one of our recommendations. HHS said that contingency plans for several plan elements—regulations, feasibility of precedential authority, independence of ALJs, and performance standards—were unnecessary. Because of the critical nature of these provisions and the interdependence of the plan’s components, we continue to believe that the establishment of such plans for each congressionally mandated element would best ensure a smooth and timely transition. Further, HHS emphasized that it attempted to ensure that it provided us with the most current information available regarding decisions associated with the transition. However, we do not believe that HHS has kept us fully apprised of all of its efforts. For example, in its comments, HHS described the establishment of the Office of Medicare Hearings and Appeals Transition and the activities of this new office related to the transfer. Although HHS indicated that this office was established in July 2004, before our work was complete, this information was not shared with us.
In addition, although HHS noted several other efforts to enhance the transition process—such as its analysis of internal data to make caseload projections for fiscal years 2005 and 2006—this information also was not provided to us during the course of our work. Although this, and other efforts HHS cited to facilitate the transfer of Medicare appeals might have promise, we had no opportunity to evaluate them. We are also concerned with HHS’s characterization of our findings and its own progress in implementing the transfer. For example, HHS interprets figure 2 in our report as indicating that we believe that the plan meets substantially all MMA requirements. However, figure 2 clearly shows that 5 of the 13 plan elements do not completely address these requirements. Moreover, figure 2 shows that the plan lacks detailed information and contingency plans for the vast majority of the elements. Such significant deficiencies suggest that a smooth and timely transfer may be in jeopardy. HHS also stated that the public comments it received concerning the plan were positive. Our information does not support this assertion. Our evaluation of these comments showed that they mirrored the concerns addressed in our report and raised serious questions about the ability of SSA and HHS to effect the transfer in a manner that would preserve the independence of ALJs and ensure the quality of service to appellants. In its written comments, SSA agreed with our recommendations by either expressing its concurrence or by citing steps it has taken to aid with their implementation. SSA also noted that it shared our concern that adequate planning needs to take place and agreed that detailed contingency planning is important. Although SSA’s comments focused on its continuing contribution to enhance HHS’s understanding of the current Medicare appeals process, it also emphasized that some elements of the plan are the sole responsibility of HHS. 
While we agree that HHS must ultimately assume full and complete responsibility for the appeals process, until the transition is complete, we believe that both agencies are accountable for ensuring that appeals are adjudicated promptly and competently, and for coordinating their efforts so that the transfer occurs on a smooth and timely basis. Finally, both SSA and HHS expressed concern with the title of our report. HHS said that the title might raise unnecessary fears among the advocate and beneficiary communities. Further, HHS stated that it is on track for an efficient and effective transfer of the ALJ function at the earliest possible time allowed by the MMA. Although HHS indicated that much progress has been made in key areas, such as development of regulations and the assurance of ALJ independence, it provided no new information in support of these efforts. In addition, many other significant questions raised in our report, such as the geographic distribution of ALJs, were not addressed in its comments. Therefore, we continue to have significant concerns about the agencies’ abilities to effectuate the transfer on a timely basis. Both agencies also reported that they had identified a mechanism for HHS to continue to use SSA ALJs to adjudicate Medicare appeals after the statutory date of the transfer, if necessary. However, neither SSA nor HHS described this mechanism, and we therefore were unable to evaluate it. Consequently, we continue to believe that our evaluation of the evidence supports the report title. SSA’s and HHS’s comments are reprinted in appendixes II and III, respectively. We are sending copies of this report to the Secretary of HHS, the Commissioner of SSA, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staffs have any questions about this report, please call me at (312) 220-7600.
An additional GAO contact and other staff members who prepared this report are listed in appendix IV. Based on our review of the plan and additional materials provided by the transfer team, we found that the plan to transfer the Medicare appeals function from the Social Security Administration to the Department of Health and Human Services is insufficient to ensure a smooth and timely transition. Although the plan generally addresses each of the 13 elements mandated by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), as indicated in figure 2, it omits important details on how each element will be implemented. Furthermore, the plan overlooks the need for contingency provisions, which could prove to be essential, should critical tasks not be completed in a timely manner. Margaret Weber, Craig Winslow, Shirin Hormozi, and Barbara Mulliken made key contributions to this report.
The Medicare appeals process has been the subject of widespread concern in recent years because of the time it takes to resolve appeals of denied claims. Two federal agencies play a role in deciding appeals—the Department of Health and Human Services (HHS) and the Social Security Administration (SSA). Currently, neither agency manages and oversees the entire multilevel process. In the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), Congress mandated that SSA transfer its responsibility for adjudicating Medicare appeals to HHS between July 1, 2005, and October 1, 2005. In addition, it directed the two agencies to develop a transfer plan addressing 13 specific elements related to the transfer. GAO's objective was to determine whether the plan is sufficient to ensure a smooth and timely transition. Transferring the Medicare appeals workload from SSA to HHS requires careful preparation and the precise implementation of many interrelated items. The transfer is mandated to take place no later than October 1, 2005. SSA and HHS have stressed their commitment to ensuring a successful transfer of the administrative law judge (ALJ) level of the Medicare appeals process, and both agencies have emphasized that they are continuing to further develop details of the plan. Although the plan generally addresses each of the 13 elements mandated by MMA, it omits important details on how each element will be implemented. Furthermore, the plan overlooks the need for contingency provisions, which could prove to be essential, should critical tasks not be completed in a timely manner. GAO believes that this essential information is needed to facilitate a smooth and timely transfer. Its absence makes it unclear how the transfer plan will be implemented and threatens to compromise service to appellants.
Over the last few decades, the number of participants and the complexity of the market for home mortgage loans in the United States have increased. In the past, a borrower seeking credit for a home purchase would typically obtain financing from a local financial institution, such as a bank, a savings association, or a credit union. This institution would normally hold the loan as an interest-earning asset in its portfolio. All activities associated with servicing the loan, including accepting payments, initiating collection actions for delinquent payments, and conducting foreclosure if necessary, would have been performed by the originating institution. The market for mortgages, however, has changed. Now, institutions that originate home loans generally do not hold such loans as assets on their balance sheets but instead sell them to others. Among the largest purchasers of home mortgage loans are Fannie Mae and Freddie Mac, but prior to the surge in mortgage foreclosures that began in late 2006 and continues today, private financial institutions also were active buyers from 2003 to 2006. Under a process known as securitization, the GSEs and private firms then typically package these loans into pools and issue securities known as mortgage-backed securities (MBS) that pay interest and principal to their investors, which include other financial institutions, pension funds, and other institutional investors. As shown in figure 1, as of June 30, 2010, banks and other depository institutions that originate and hold mortgages accounted for about 28 percent of all U.S. mortgage debt outstanding. Over 50 percent of the mortgage debt was owned or in MBS issued by one of the housing GSEs or covered by a Ginnie Mae guarantee. About 13 percent was in MBS issued by non-GSEs, known as private-label securities, with the remaining 5 percent held by other entities, including life insurance companies.
With the increased use of securitization for mortgages, multiple entities now perform specific roles regarding the loans, including the mortgage servicer, a trustee for the securitized pool, and the investors of the MBS that were issued based on the pooled loans. After a mortgage originator sells its loans to another investor or to an institution that will securitize them, another financial institution or other entity is usually appointed as the servicer to manage payment collections and other activities associated with these loans. Mortgage servicers, which can be large mortgage finance companies or commercial banks, earn a fee for acting as the servicing agent on behalf of the owner of a loan. In some cases, the servicer is the same institution that originated the loan and, in other cases, it may be a different institution. The duties of servicers for loans securitized into MBS are specified in a contract with investors called a pooling and servicing agreement (PSA) and are generally performed in accordance with certain industry-accepted servicing practices—such as those specified in the servicing guidelines issued by the GSEs. Servicing duties can involve sending borrowers monthly account statements, answering customer service inquiries, collecting monthly mortgage payments, maintaining escrow accounts for property taxes and hazard insurance, and forwarding proper payments to the mortgage owners. In exchange for providing these services, the servicer collects a servicing fee, usually at least 0.25 percent of the loans’ unpaid principal balance annually. In the event that a borrower becomes delinquent on loan payments, servicers also initiate and conduct foreclosures in order to obtain the proceeds from the sale of the property on behalf of the owners of the loans, but servicers typically do not receive a servicing fee on delinquent loans.
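The servicing fee arrangement described above can be illustrated with a simple calculation; this is only a sketch, using the 0.25 percent typical minimum noted here and a hypothetical loan balance, not figures from any actual PSA:

```python
def annual_servicing_fee(unpaid_principal_balance, fee_rate=0.0025):
    """Sketch of a year's servicing fee: a percentage (here the typical
    0.25 percent minimum) of the loan's unpaid principal balance."""
    return unpaid_principal_balance * fee_rate

# Hypothetical $200,000 loan: the servicer would earn $500 for the year,
# collected incrementally as monthly payments are processed.
fee = annual_servicing_fee(200_000)
print(fee)  # 500.0
```

Because the fee is a percentage of the unpaid balance, it shrinks as the loan amortizes, and, as noted above, it typically stops accruing to the servicer once the loan becomes delinquent.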
When loans are sold, they are generally packaged together in pools and held in trusts pursuant to the terms and conditions set out in the underlying PSA. These pools of loans are the assets backing the securities that are issued and sold to investors in the secondary market. Another entity will act as trustee for the securitization trust. Trustees act as asset custodians on behalf of the trust, keeping records of the purchase and receipt of the MBS and holding the liens of the mortgages that secure the investment. Trustees are also the account custodians for the trust—pass-through entities that receive mortgage payments from servicers and disburse them among investors according to the terms of the PSA. Although trustees are the legal owners of record of the mortgage loans on behalf of the trust, they have neither an ownership stake nor a beneficial interest in the underlying loans of the securitization. However, any legal action a servicer takes on behalf of the trust, such as foreclosure, generally is brought in the name of the trustee. The beneficial owners of these loans are investors in MBS, typically large institutions such as pension funds, mutual funds, and insurance companies. Figure 2 shows how the mortgage payments of borrowers whose loans have been securitized flow to mortgage servicers and are passed to the trust for the securitized pool. The trustee then disburses the payments made to the trust to each of the investors in the security. The mortgage market has four major segments that are defined, in part, by the credit quality of the borrowers and the types of mortgage institutions that serve them.

Prime—Serves borrowers with strong credit histories and provides the most attractive interest rates and mortgage terms.
This category includes borrowers who conform to the prime loan standards of either Fannie Mae or Freddie Mac and are borrowing an amount above the GSE federally mandated upper limit, known as “jumbo loans.”

Nonprime—Encompasses two categories of loans:

Alt-A—Generally serves borrowers whose credit histories are close to prime, but loans have one or more high-risk features such as limited documentation of income or assets or the option of making monthly payments that are lower than required for a fully amortizing loan.

Subprime—Generally serves borrowers with blemished credit and features low down payments and higher interest rates and fees than the prime market.

Government—Serves borrowers who may have difficulty qualifying for prime mortgages but features interest rates competitive with prime loans in return for payment of insurance premiums or guarantee fees. The Federal Housing Administration and Department of Veterans Affairs operate the two main federal programs that insure or guarantee mortgages.

Across all of these market segments, two types of loans are common: fixed-rate mortgages, which have interest rates that do not change over the life of the loan; and adjustable-rate mortgages (ARM), which have interest rates that can change periodically based on changes in a specified index. The nonprime market segment recently featured a number of nontraditional products. For example, the interest rate on hybrid ARM loans is fixed during an initial period and then “resets” to an adjustable rate for the remaining term of the loan. Another type of loan, the payment-option ARM, allowed borrowers to choose from multiple payment options each month, which may include minimum payments lower than what would be needed to cover any of the principal or all of the accrued interest. This feature is known as “negative amortization” because the outstanding loan balance may increase over time as any interest not paid is added to the loan’s unpaid principal balance.
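The negative-amortization mechanism described above can be sketched with a one-month calculation; the balance, interest rate, and minimum payment used here are hypothetical illustrations, not figures from this report:

```python
def next_balance(balance, annual_rate, payment):
    """One month on a payment-option ARM: interest accrues monthly, and
    any accrued interest the payment does not cover is added to the
    unpaid principal balance (negative amortization)."""
    monthly_interest = balance * annual_rate / 12
    return balance + monthly_interest - payment

# Hypothetical: a $200,000 balance at 6 percent with an $800 minimum
# payment. Monthly interest is $1,000, so the unpaid balance grows by
# $200 rather than shrinking.
balance = next_balance(200_000, 0.06, 800)
print(balance)  # 200200.0
```

Whenever the chosen payment is below the month's accrued interest, the balance rises; a payment equal to the accrued interest leaves the balance flat, and only a larger payment amortizes the loan.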
If a borrower defaults on a mortgage loan secured by the home, the mortgage owner is entitled to pursue foreclosure to obtain title to the property in order to sell it to repay the loan. The mortgage owner or servicer generally initiates foreclosure once the loan becomes 90 days or more delinquent. Once the borrower is in default, the servicer must decide whether to pursue a home retention workout or other foreclosure alternative or to initiate foreclosure. State foreclosure laws establish certain procedures that mortgage servicers must follow in conducting foreclosures and establish minimum time periods for various aspects of the foreclosure process. These laws and their associated timelines may vary widely by state. As shown in figure 3, states generally follow one of two methods for their foreclosure process: judicial, with a judge presiding over the process in a court proceeding, or statutory, with the process proceeding outside the courtroom in accordance with state law. Because of the additional legal work, foreclosure generally takes longer and is more costly to complete in the states that primarily follow a judicial foreclosure process. Several federal agencies share responsibility for regulating the banking industry and securities markets in relation to the origination and servicing of mortgage loans. Chartering agencies oversee federally and state- chartered banks and their mortgage lending subsidiaries. At the federal level, OCC oversees federally chartered banks. OTS oversees savings associations (including mortgage operating subsidiaries). The Federal Reserve oversees insured state-chartered member banks, while FDIC oversees insured state-chartered banks that are not members of the Federal Reserve System. Both the Federal Reserve and FDIC share oversight with the state regulatory authority that chartered the bank. 
The Federal Reserve also has general authority over lenders that may be owned by federally regulated holding companies but are not federally insured depository institutions. Many federally regulated bank holding companies that have insured depository subsidiaries, such as national or state-chartered banks, also may have nonbank subsidiaries, such as mortgage finance companies. Under the Bank Holding Company Act of 1956, as amended, the Federal Reserve has jurisdiction over such bank holding companies and their nonbank subsidiaries that are not regulated by another functional regulator. Other regulators are also involved in U.S. mortgage markets. For example, Fannie Mae's and Freddie Mac's activities are overseen by the Federal Housing Finance Agency. Staff from the Securities and Exchange Commission also review the filings made by private issuers of MBS. Federal banking regulators have responsibility for ensuring the safety and soundness of the institutions they oversee and for promoting stability in the financial markets and enforcing compliance with applicable consumer protection laws. To achieve these goals, regulators establish capital requirements for banks, conduct on-site examinations and off-site monitoring to assess their financial condition, and monitor their compliance with applicable banking laws, regulations, and agency guidance. Among the laws that apply to residential mortgage lending and servicing are the Fair Housing and Equal Credit Opportunity Acts, which address credit granting and nondiscrimination in lending; the Truth in Lending Act (TILA), which addresses disclosure requirements for consumer credit transactions; and the Real Estate Settlement Procedures Act of 1974 (RESPA), which requires transparency in mortgage closing documents. Entities that service mortgage loans but are not depository institutions are called nonbank servicers.
In some cases these nonbank servicers are subsidiaries of banks or other financial institutions, while others are not affiliated with financial institutions at all. Nonbank servicers have historically been subject to little or no direct oversight by federal regulators. We have previously reported that state banking regulators oversee independent lenders and mortgage servicers by generally requiring business licenses that mandate meeting net worth, funding, and liquidity thresholds. The Federal Trade Commission is responsible for enforcing certain federal consumer protection laws for brokers and lenders that are not depository institutions, including state-chartered independent mortgage lenders. However, the Federal Trade Commission is not a supervisory agency; instead, it enforces various federal consumer protection laws through enforcement actions when others file complaints with it. Using data from large and subprime servicers and government-sponsored mortgage entities representing nearly 80 percent of mortgages, we estimated that abandoned foreclosures are rare—the total from January 2008 to March 2010 represents less than 1 percent of vacant homes. When servicers' efforts to work out repayment plans or loan modifications with borrowers who are delinquent on their loans are exhausted, staff from the six servicers we interviewed said they analyze certain loans to determine whether foreclosure will be financially beneficial. Based on our analysis of loan data provided by these six servicers covering the period of January 2008 through March 2010, servicers most often made this decision before initiating foreclosure, but in many cases did not discover that foreclosure would not be financially beneficial until after initiating the process.
While we estimated that instances in which servicers initiate but then abandon a foreclosure without selling or taking ownership of a property had not occurred frequently across the United States, certain communities experienced larger numbers of such abandoned foreclosures. Specifically, we found that abandoned foreclosures tended to involve low-value properties in economically distressed communities and loans that were nonprime or securitized. When borrowers default on their loans, home mortgage loan servicers take a variety of actions in an effort to keep them in their homes, by, for example, working out repayment plans and loan modifications. The stakeholders that we interviewed—including servicers, regulators, and government and community officials—agreed that pursuing efforts to keep borrowers in their homes was preferable to foreclosure. According to servicers' representatives, servicers engage in various efforts to reach borrowers during the delinquency period through letters, phone calls, and personal visits. For example, representatives of one servicer noted that on a typical foreclosure, company representatives make over 120 phone calls and send 10 to 12 inquiries to borrowers in an effort to bring payments up to date or modify the loan. As borrower outreach continues, servicers also send "breach" letters after borrowers have missed a certain number of payments, warning borrowers of the possibility of foreclosure. However, if these initial efforts to bring the borrower back to a paying status are not successful, staff from the six servicers we contacted—representing about 57 percent of U.S. first-lien mortgages—told us they typically determine whether to initiate foreclosure as a routine part of their collections and loss mitigation process after a loan has been delinquent for at least 90 days. Representatives of servicers told us that they might decide to initiate foreclosure even though they were still pursuing loan workout options with a borrower.
One noted that the initiation of foreclosure, in certain instances, might serve as an incentive for the borrower to begin making mortgage payments again. According to the staff of the six servicers we interviewed, they usually conduct an analysis of certain loans in their servicing portfolio before initiating foreclosure to determine whether foreclosure will be financially beneficial. These analyses—often called equity analyses—compare the projected value the property might realize in a subsequent sale against the sum of all projected costs associated with completing the foreclosure and holding the property until it can be sold. Servicers use the results of these equity analyses to decide whether to foreclose on a loan or conduct a charge-off in lieu of foreclosure. In general, if the equity analysis indicates that the projected proceeds from the eventual sale of the property exceed the projected costs of reaching that sale by a certain amount, the servicer will proceed with the foreclosure. However, when the costs of foreclosure exceed the expected proceeds from selling the property, servicers typically decide that foreclosure is not financially beneficial. In these cases servicers will usually cease further foreclosure-related actions, operationally charging off the loan from their servicing rolls and advising the mortgage owner—a GSE or a private securitization trust—that the loan should be recognized as a loss by the loan's owner. In determining which loans to charge off in lieu of foreclosure, some servicers maintain thresholds for property values or potential income from pursuing foreclosure. For example, some of the servicers we interviewed told us that they usually, but not always, considered charge-offs in lieu of foreclosure on properties with values roughly below $10,000 to $30,000.
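The decision rule servicers described can be illustrated with a simplified sketch. This is a hypothetical illustration, not any servicer's actual model; the function name, cost categories, threshold, and dollar figures are all illustrative assumptions:

```python
def equity_analysis(projected_sale_price, foreclosure_costs,
                    holding_costs, sale_costs, threshold=0.0):
    """Hypothetical sketch of a servicer's equity analysis.

    Compares the value a property might realize in a subsequent sale
    against the sum of projected costs of completing the foreclosure,
    holding the property, and selling it. All inputs are illustrative.
    """
    total_costs = foreclosure_costs + holding_costs + sale_costs
    net_proceeds = projected_sale_price - total_costs
    # Proceed with foreclosure only when the expected recovery exceeds
    # total costs by the servicer's decision threshold.
    return "foreclose" if net_proceeds > threshold else "charge_off"

# A higher-value property clears the costs of reaching a sale; a
# low-value property does not, consistent with the rough $10,000 to
# $30,000 cutoffs servicers described.
print(equity_analysis(120_000, 15_000, 8_000, 9_000))  # foreclose
print(equity_analysis(12_000, 15_000, 8_000, 9_000))   # charge_off
```

In practice the inputs would themselves be projections (broker price opinions, legal fees, taxes, and maintenance over the expected holding period), but the comparison reduces to this form.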
Freddie Mac servicing guidance requires a review for charge-off in lieu of foreclosure when the unpaid principal balance of a loan is below $5,000 on conventional mortgages or less than $2,000 on government-insured or -guaranteed loans, such as Federal Housing Administration or Department of Veterans Affairs mortgages. Based on our reviews of bank regulatory guidance and discussions with federal and state officials, no laws or regulations exist that require servicers to complete a foreclosure once the process has been initiated. Therefore, servicers can abandon the foreclosure process at any point. Furthermore, according to staff from the servicers we interviewed, initiating foreclosure can cost as little as $1,000, and these costs may be recovered from the proceeds of any subsequent sale of the property. Based on our analysis of servicer data, servicers most often charged off loans in lieu of foreclosure without initiating foreclosure proceedings. However, in many cases the decisions to charge off loans in lieu of foreclosure were made after foreclosure initiation, and a significant portion of these represented abandoned foreclosures. We obtained data from six servicers, including four of the largest servicers and two servicers that specialized in nonprime loans. These six servicers collectively serviced about 30 million loans, representing 57 percent of outstanding first-lien home mortgage loans as of the end of 2009. According to our analysis of the servicer-reported data, these six servicers decided to conduct charge-offs in lieu of foreclosure for approximately 46,000 loans between January 2008 and March 2010, as shown in table 1. For over 27,600 loans, or about 60 percent, the servicers made the decision to charge off in lieu of foreclosure without initiating foreclosure proceedings. Of these loans, over 19,400, or 70 percent of the properties, were occupied by the borrower or a tenant.
As will be discussed later in this report, when properties remain occupied they are less likely to contribute to the problems in their neighborhoods generally associated with foreclosed and vacant properties. However, in other cases, servicers initiated foreclosure but later decided to conduct a charge-off in lieu of foreclosure. Charge-offs in lieu of foreclosure that occurred after a foreclosure was initiated were more likely to result in a vacant property than charge-offs that occurred without a foreclosure initiation. As shown in table 1 earlier, these six servicers initiated foreclosure on over 18,300 loans between January 2008 and March 2010 that they later decided to charge off in lieu of foreclosure. For over 8,700, or 48 percent of these loans, this decision was associated with a vacancy and, therefore, an abandoned foreclosure—that is, a property that is vacant and for which foreclosure was initiated but not completed. We found a statistically significant association between foreclosure initiation and vacancy for the charge-offs in lieu of foreclosure in our sample. That is, we found that initiating and then suspending foreclosure was associated with a higher probability that a property will be vacant. A potential reason that vacancies occur more frequently when servicers decide to pursue a charge-off in lieu of foreclosure after initiating foreclosure than before is confusion among borrowers about the impact of the foreclosure initiation. Specifically, local and state officials, community groups, and academics told us that borrowers may be confused about their rights to remain in their homes during foreclosure and vacate the home before the process is completed. Alternatively, servicers could be more likely to pursue a charge-off in lieu of foreclosure if a property becomes vacant before foreclosure initiation, since the value of the property may deteriorate rapidly.
Nevertheless, as the data show, even when servicers opt to conduct a charge-off in lieu of foreclosure before initiating foreclosure, some borrowers may still vacate the home. Anecdotally, we heard from a variety of stakeholders that this decision could be due to financial hardship or pressure exerted by the lender in collecting delinquent mortgage payments, among other reasons. Data indicating the overall number of abandoned foreclosures in the United States did not exist, nor was such information being collected by the federal government agencies we contacted or by organizations in the states or local communities that we reviewed. Local governments, bank regulators, and private organizations collect information on foreclosures, vacancies, and housing market conditions, but for various reasons the phenomenon of abandoned foreclosures goes largely unrecorded. Local officials we spoke with in Baltimore, Chicago, Cleveland, Detroit, and Lowell, Massachusetts, identified similar difficulties in tracking abandoned foreclosures. For example:

Accurately identifying the lender and borrower on a given property is often difficult due to outdated or incorrect mortgage information.

Ascertaining which properties are abandoned foreclosures is often difficult because formal data on the foreclosure status of properties often do not exist.

Determining whether properties are actually vacant is often difficult if a house has been used seasonally or as a rental.

Nonetheless, researchers in some cities we visited are attempting to compile data. In Cleveland, academic researchers have used court documents in an attempt to ascertain the reason a sample of foreclosure cases have stalled. In a number of cities, such as Chula Vista, California, the city governments have enacted ordinances that require lenders to register homes that become vacant. In Buffalo, a nonprofit organization has collected information on the status of foreclosure cases in Erie County, where Buffalo is located.
Although subject to uncertainty, we estimated that the number of abandoned foreclosures that occurred in the United States between January 2008 and March 2010 was between approximately 14,500 and 34,600. As will be discussed, although the potential number of abandoned foreclosures creates significant problems for certain communities, they represent less than 1 percent of vacant properties and an even smaller percentage of the total housing stock. Table 2 shows abandoned foreclosures as a percentage of various housing market metrics. To determine the prevalence of abandoned foreclosures in the entire U.S. market, we estimated the number of properties that (1) were charged off in lieu of foreclosure after a foreclosure was initiated and (2) are vacant. In developing our estimate, we used the data from the six mortgage servicers and data from Fannie Mae and Freddie Mac—which together represent roughly 80 percent of outstanding U.S. mortgages—and augmented this information with vacancy data from USPS. Using this information, we estimated the total number of abandoned foreclosures nationwide under varying assumptions about the remaining 20 percent of the mortgages outstanding. According to the data reported to us, abandoned foreclosures represent a small portion of overall vacancies in the United States but are highly concentrated in a small number of communities. Based on our analysis of servicer data from January 2008 to March 2010, we found abandoned foreclosures in 2,452 of the approximately 43,000 postal zip codes throughout the country, but only 167 of those zip codes had 10 or more of these properties. From January 2008 through March 2010, several zip codes in Chicago, Cleveland, Detroit, Indianapolis, and other large cities had 35 or more abandoned foreclosures. We found several zip codes in Detroit that had over 100 abandoned foreclosures.
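The scaling step in this approach—taking a count observed in data covering roughly 80 percent of mortgages and bounding the national total under varying assumptions about the uncovered remainder—can be sketched as follows. This is a simplified illustration with hypothetical figures, not the actual model or inputs used to produce the 14,500 to 34,600 range:

```python
def extrapolate(observed, coverage, uncovered_rate_multiplier):
    """Scale a count observed in a partial sample to a national estimate.

    observed: abandoned foreclosures found in the covered segment.
    coverage: share of all mortgages the data represent (e.g., 0.80).
    uncovered_rate_multiplier: assumed incidence in the uncovered
        segment relative to the covered segment (0 = none occur there,
        1 = same rate as the covered segment).
    All figures used below are hypothetical.
    """
    # Implied count in the uncovered segment at the assumed rate.
    implied_uncovered = (observed * (1 - coverage) / coverage
                         * uncovered_rate_multiplier)
    return round(observed + implied_uncovered)

# Bounding a hypothetical observed count of 8,000 at 80 percent coverage:
print(extrapolate(8_000, 0.80, 0.0))  # lower bound: assume none uncovered
print(extrapolate(8_000, 0.80, 1.0))  # assume the same rate uncovered
```

Varying the multiplier across a plausible range produces an interval estimate rather than a single figure, which is why the nationwide total is reported as a range.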
In addition, several smaller areas contain zip codes with high concentrations of the properties, such as those including Toledo, Akron, and Youngstown, Ohio; Flint, Michigan; Fort Myers, Florida; and Gary and Fort Wayne, Indiana. Analyzing abandoned foreclosures at the U.S. Census-designated Metropolitan Statistical Area (MSA) level also suggests that such cases are likely to be concentrated in a limited number of communities. According to our analysis, 80 percent of the total abandoned foreclosures that we identified in our servicer data were in 50 of the roughly 400 MSAs; 20 MSAs account for 61 percent of the properties, and 30 MSAs account for 72 percent. Table 3 shows the MSAs with the most abandoned foreclosures. Because the data we used to produce these estimates may not be generalizable, the location of the remaining abandoned foreclosures could differ from that suggested in table 3. For example, the Flint, Michigan; Orlando-Kissimmee, Florida; South Bend-Mishawaka, Indiana; and Canton-Massillon, Ohio, MSAs are notable examples just outside the top 20. Although they do not have large numbers of abandoned foreclosures, some small MSAs throughout the Midwest are likely to be similarly challenged by such properties given their size. As shown above in table 3, these 20 MSAs also had roughly 5,090 properties in our sample that were charged off in lieu of foreclosure by the servicer without initiating foreclosure but that were vacant. Because these also are properties on which the servicer will no longer be conducting any maintenance or attempting to sell to a new owner, the properties can create problems for their communities similar to those resulting from abandoned foreclosures. Certain community, property, and loan characteristics may help to explain some of the concentrations of abandoned foreclosures.
In particular, based on our sample, abandoned foreclosures occurred most frequently in economically struggling areas and distressed urban areas of particular cities. We also found these properties in areas that experienced significant recent booms and declines in housing. In general, abandoned foreclosures are also more likely to involve low-value properties and nonprime and securitized loans. Economically struggling cities appear to experience the greatest number of charge-offs in lieu of foreclosure and, therefore, abandoned foreclosures. As shown in figure 4, most of the abandoned foreclosures have occurred in Midwestern industrial MSAs. In particular, our analysis of servicer data indicates that over 50 percent of all the abandoned foreclosures we identified were in Michigan, Indiana, and Ohio. Seven of the 20 MSAs with the most abandoned foreclosures are located in Ohio. Recent research also supports the finding that this type of phenomenon is occurring largely in industrial Midwestern states. Although the deterioration of economic conditions in 2008 and 2009 affected the entire nation, these Midwestern areas have been especially hard hit with population declines, high unemployment, and decreases in housing values. For example, Detroit lost about 28 percent of its population from 1980 to 2006, and the unemployment rate in Michigan was 13.0 percent versus 9.6 percent nationally as of September 2010. According to a recent report, although Michigan did not seem to experience a dramatic appreciation in housing prices before the surge in mortgage foreclosures that began in late 2006, it did witness a significant decline in housing prices after 2006, largely because the automobile manufacturing industry was severely hit by the current crisis. Like many areas in the United States, several of the MSAs in table 3 experienced significant increases in unemployment rates.
For example, the unemployment rate in the Detroit-Warren-Livonia MSA increased from 4.2 percent in December 2000 to 16.1 percent in December 2009. Similarly, in the Flint, Michigan, MSA, the unemployment rate increased by more than 10 percentage points between 2000 and 2009. High unemployment may have exacerbated the negative consequences of nonprime lending activity. For example, community development officials in Detroit explained that many people who did not have mortgages on their homes were enticed to obtain a home equity loan to make repairs, then lost their homes to foreclosure because they lost their jobs or the payments were not sustainable. However, many of the economic problems facing areas such as Cleveland, Detroit, and other Midwest cities where we identified large numbers of abandoned foreclosures predate the economic turmoil that started around 2008. For example, in 2007, the poverty rate in Flint, Michigan, was 16.8 percent, the poverty rate in Memphis, Tennessee, was 18.8 percent, and the poverty rates in both Toledo and Youngstown, Ohio, were 14.8 percent. Consequences of these challenges include weak real estate markets and other characteristics that are associated with abandoned foreclosures. Abandoned foreclosures are also likely concentrated in distressed urban areas. Our analysis shows that distressed urban areas within MSAs had significant numbers of abandoned foreclosures. In cities with high property values like Chicago, we found that abandoned foreclosures were largely driven by activity in a few zip codes. Our analysis also shows that, on average, the zip codes with the most abandoned foreclosures had larger declines in home prices (37 percent) compared to the national average of 32 percent following peak levels in 2005. Some distressed zip codes in Detroit, Michigan, saw home prices drop by over 60 percent from the peak levels reached between 2004 and 2006.
Stakeholders also told us that abandoned foreclosures were most often associated with urban areas with largely minority populations, high foreclosure rates, blight, crime, and vandalism. For example, one academic speculated that there may be pockets of distressed housing in the inner parts of cities whose housing markets as a whole may not be so bad; these areas likely have low-value houses that may end up as abandoned foreclosures. In addition, one servicer representative said that abandoned foreclosures could be found in the urban core of any large city. Concentrations of abandoned foreclosures have also occurred in areas that experienced significant house price increases followed by declines. States such as California, Florida, Nevada, and Arizona, which experienced the largest increases in property values prior to 2006, also have experienced the largest decreases in property values in the last few years. For example, according to a recent report, property values in these states declined 47 percent from peak to trough. As a result, these states have many underwater borrowers—that is, borrowers who owe more on their mortgages than their properties are worth (negative equity). Significant overdevelopment and overspeculation prior to the economic crisis also may have caused investors to abandon their properties after housing prices declined. For example, representatives of a community group in Atlanta told us that, starting in 2000, investors increasingly constructed new housing on speculation in a neighborhood close to downtown Atlanta. Representatives said that some of this new construction was never occupied and, after house prices began to decline in early 2007, much of it was vandalized. Without a market for these properties, servicers may have subsequently abandoned foreclosures on many of them because they would not earn enough at foreclosure sale to cover the losses associated with foreclosure and disposition.
Among the 20 MSAs in table 3, Jacksonville, Cape Coral-Fort Myers, Tampa-St. Petersburg-Clearwater, Miami-Fort Lauderdale-Pompano Beach, and, to a lesser extent, Atlanta, appear to fit into the category of housing boom-related abandoned foreclosures. For example, according to Global Insight estimates, average home prices in the Miami-Fort Lauderdale-Pompano Beach MSA increased 144 percent from the end of 2000 to the second quarter of 2007 before declining by 40 percent from 2007 to the third quarter of 2010. Regardless of the city or neighborhood, most abandoned foreclosures occur on low-value properties. Data from servicers, Fannie Mae, and Freddie Mac indicate that foreclosures are most often not completed on properties with low values. Evidence from the econometric model that we applied to GSE loan-level data also suggests that lower property values increased the likelihood that a loan would be charged off in lieu of foreclosure rather than being subject to alternative foreclosure actions such as a deed-in-lieu of foreclosure or short sale. For example, the median value of the properties Freddie Mac decided to charge off in lieu of foreclosure was $10,000, compared to $130,000 for deeds in lieu of foreclosure, $158,000 for modifications, and $160,000 for short sales. Similarly, the median property value for loans that the six servicers decided to charge off in lieu of foreclosure in Michigan and Ohio was $25,000. In addition, servicer representatives told us properties with low values—such as those valued under $30,000—were the most likely candidates for decisions to not pursue foreclosure. Some properties may even have negative values because of the liabilities attached to them. For example, a property in Cleveland valued at $5,000 may have an $8,000 demolition lien levied against it; therefore, it may actually cost more to pay off the demolition lien than the property is worth. Abandoned foreclosures also occurred most frequently on nonprime loans.
Our analysis shows that about 67 percent of all abandoned foreclosures that we identified were associated with nonprime loans. Adjustable rates were also a prominent feature of these loans. Anecdotally, stakeholders also told us that abandoned foreclosures most likely occurred on properties where borrowers had nonprime loans and unstable financing. For instance, an official for a community development corporation in greater Cleveland told us he had seen about 12 instances of abandoned foreclosures in the past year, and many of the borrowers in these cases had two mortgages or subprime loans originated in 2003 or later. The vast majority of abandoned foreclosures were loans that involved third-party investors, including those that were securitized into private-label MBS. GSE-purchased loans account for a very small portion of our estimated number of abandoned foreclosures. Although the GSE loans made up roughly 63 percent of the data we collected from servicers, they accounted for less than 8 percent of the total abandoned foreclosures during 2008 through the first quarter of 2010. Similarly, we found that only about 0.3 percent of abandoned foreclosures were associated with FHA, VA, and Ginnie Mae insured loans. The potential for abandoned foreclosures to occur on loans associated with Fannie Mae also appears to have been reduced: Fannie Mae representatives told us that, as of April 2010, they had instructed servicers to complete all foreclosures pending Fannie Mae's revision of its charge-off in lieu of foreclosure procedures, which is intended to promote sound economic decisions as well as neighborhood stabilization. About 66 percent of the total abandoned foreclosures were associated with non-GSE third-party investors. We estimate that a significant portion of these loans were securitized into residential MBS, although data issues precluded us from distinguishing between private-label MBS and whole loans held by third parties in some cases.
Abandoned foreclosures, similar to other vacant properties, contribute to various negative impacts for the neighborhoods in which they occur, for local governments, and for homeowners. In addition, because local governments are not aware of servicers' decisions to no longer pursue foreclosure on these properties, they cannot take expedited actions to return the properties to productive use. Properties for which the mortgage servicers have abandoned the foreclosure proceedings are often left without any party conducting routine care and maintenance, which often results in properties with poor appearance and sometimes unsafe conditions. As a result, abandoned foreclosures can create unsightly and dangerous properties that contribute to neighborhood decline. Academics, housing and community group representatives, local government officials, and others in the 12 locations we collected information from generally told us that, like other vacant and abandoned properties, abandoned foreclosures often deteriorated quickly. They described the types of damage that can result, including structural damage, mold, broken windows, and accumulated trash, among other things. Representatives of a national community reinvestment organization described the impact of vacant homes nationwide, from swimming pools filled with dirty, discolored water in Florida to homes in the Midwest that have sustained damage from falling trees that no one removes. A Cleveland official said that, in a 2-year period, about 20 vacant homes in one ward had caught fire and that people used vacant properties to dump trash and asphalt. While touring abandoned foreclosures in some of the neighborhoods in the communities we visited, we observed several vacant and abandoned properties that showed various signs of property deterioration, including overgrown grass, accumulated trash or other debris, and broken windows.
Because abandoned foreclosures, by definition, are vacant properties, they create similar problems for communities as other vacant properties do. Figure 5 presents pictures of abandoned foreclosures and other vacant properties in several of the communities we visited. Abandoned foreclosures also create problems in communities because homes in foreclosure proceedings that become vacant in certain neighborhoods are often quickly stripped of valuable materials, further depressing their value. Housing and community group representatives, and local government officials, told us that looters strip vacant houses of copper piping, wiring, appliances, cabinets, aluminum siding, and other valuables, usually within a few weeks of the time at which the property became vacant, but sometimes within 24 hours. An official from a foreclosure response organization in one Midwestern city told us that a thriving industry of home salvage thieves exists in the city, and an official from a nonprofit housing organization in another Midwestern city told us that junkyards in the area accept things they should not, such as aluminum siding and refrigerators—and this provides an incentive for criminals to strip houses of any materials of potential salvage value. Representatives from a national property maintenance company that operates across the country told us a house can be secured, including having its windows and doors boarded up and entrances locked, only to be broken into and stripped of any valuable parts. Similarly, a local official told us that many houses in Chicago are secured with steel grates, but vandals will bypass these and cut a hole in the roof or brick to gain access—and, once inside, they will rip the house apart by sawing into the walls and cutting out the wiring and piping. A local official in another city reported that several gas explosions have occurred at vacant properties there recently due to vandals stealing pipes while the gas was still flowing to the home.
Staff from a national property maintenance company told us that mortgage servicers contract with them to inspect the properties of homeowners whose loans become delinquent and that, in certain locations, they often have to re-secure properties at every monthly inspection because such properties are constantly being broken into and damaged. In addition, a code enforcement official told us that vandalism had become such an issue for the city that a sign left on a property's door indicating that it had a code violation would serve as a flag to thieves to strip the house. Representatives from two national organizations told us that, as a result of vandalism, exposure, and neglect, vacant properties can become worthless. Similar to other vacant and uncared-for properties, abandoned foreclosures also can create public safety concerns. Staff from an entity that advises local governments on community development explained that abandoned foreclosures that remain vacant for extended periods pose significant public health, safety, and welfare issues at the local level. Although unable to identify which properties were abandoned foreclosures, local government officials in Detroit said that safety issues associated with vacant properties were the primary reason they had identified 3,000 vacant properties that were to be demolished in 2010. Of these, they said that 2,100 had been deemed dangerous and that 400 were considered so hazardous that they were treated as emergency situations, noting that a firefighter had recently been killed when he entered a property and a floor caved in. Likewise, in Fort Myers, Florida, officials told us that 1,200 to 1,300 of the city's 1,600 vacant and abandoned properties were considered unsafe. A Cleveland official told us that, when housing inspectors discovered a vacant property with a code violation, the city was compelled to act to address the potential danger, or it may be liable for any subsequent injuries.
Officials from this same office further noted that the public money that is used to fund the land bank—which may take in unsafe and abandoned properties—might otherwise have been used for other civic purposes, such as teacher salaries. Like other vacant properties, abandoned foreclosures also contribute to neighborhood decline by providing venues for a wide variety of crimes. Local government and other officials told us that vacant and abandoned properties were subject to break-ins, drug activity, prostitution, arson, and squatting, among other things. A study of the City of Chicago noted that some vacant building fires were the result of arson by owners seeking to make insurance claims and that others were started by squatters making fires to keep warm. Other empirical studies have found relationships between vacant or foreclosed properties and crime. For example, a national organization representing municipal governments reports that crime is moderately correlated with vacant and abandoned properties, deteriorating housing, and high disinvestment in the neighborhood. Another study of central city Chicago found that a 2.87 percentage point increase in the foreclosure rate would yield a 6.68 percent increase in the rate of violent crimes such as assault, robbery, rape, and murder. The author of this study explains that the weaker positive relationship between foreclosure and property crimes, such as theft and vandalism, may be due to an underreporting of such crimes in lower-income areas. Another impact of abandoned foreclosures is that, like other vacant and uncared-for properties, they negatively affect the value of surrounding properties. Although property values have fallen sharply in many regions around the country as part of the recent economic recession, many of those we interviewed said that vacant properties and abandoned foreclosures compounded this problem.
One local official explained that once a few properties in a neighborhood became vacant, the negative effects tended to spiral and lead to further foreclosures and vacancies, particularly in low-income neighborhoods. In addition, empirical studies have found that vacant and abandoned properties, together with foreclosures, can cause neighboring property values to decline. For example, using data from 2006 in Columbus, Ohio, a recent study found that each vacant property within 250 feet of a nearby home could depress its sales price by about 3.5 percent, whereas the impact from each foreclosure was less severe but extended more widely into the neighborhood. In addition, an author for a federal research organization reviewed several research papers on foreclosure’s price-depressing impact on sales of nearby properties and reported that, according to the literature, this impact can range from as little as 0.9 percent to as much as 8.7 percent. Because local government officials are not aware that foreclosures are no longer being pursued, these properties remain vacant and contribute to neighborhood decline for longer periods of time. Instead of learning from servicers that they are charging off loans in lieu of foreclosure and will not assume responsibility for maintenance, local government staff responsible for enforcing housing codes told us they typically find out about vacant and abandoned properties through citizen complaints, vacant property registration ordinances, or on their own initiative. They noted that, by the time they become aware of a property for which a servicer is no longer taking responsibility, the property may have been vacant and deteriorating for months or years, which exacerbates the overall neighborhood decline. Several stakeholders noted that, if local governments were made aware of properties for which servicers were charging off the loans in lieu of foreclosure, they might be able to take more timely action.
For example, they could take expedited actions to acquire the vacant property—such as through the use of a land bank—and return it to productive use. Abandoned foreclosures also increase costs for local governments because they must expend resources to inspect properties and mitigate their unsafe conditions. Within local communities, code enforcement departments are largely responsible for ensuring that homeowners maintain their properties in accordance with local ordinances regarding acceptable appearance and safety. In cases in which such ordinances are not being complied with, code enforcement departments can typically fine violating property owners or take actions themselves, such as making repairs or boarding up doors or windows and billing the property owner for the costs expended. However, code enforcement and other officials told us that it is often difficult to locate the owners of abandoned foreclosures because they have left their homes; they also told us that it is difficult to locate current mortgage lien holders—who generally have an interest in maintaining the properties. Officials said that one reason identifying lien holders is difficult is that lien holders often fail to record changes in ownership with local jurisdictions. To address this challenge, the code enforcement manager of one of the cities we visited told us that he had made one of his field staff a full-time “foreclosure specialist” whose job it was to research owners and lien holders of foreclosed properties with identified code violations. The new foreclosure specialist told us that he uses several different avenues to find property owners and lien holders, including county court records, local realtors, property managers, property maintenance companies, and the Mortgage Electronic Registration Systems (MERS®). In addition, another code enforcement manager told us that he had developed a team of investigators trained in skip tracing to increase the division’s ability to identify and locate violators.
Local governments are often burdened by having to pay for the maintenance or demolition of abandoned foreclosures. In the interest of public safety, code enforcement departments will often take action when they cannot identify or contact another responsible party. Researchers tallied total costs of over $13 million for code enforcement activities to address and maintain all vacant and abandoned properties for eight Ohio cities in 2006. In addition, the City of Cleveland, Ohio, has budgeted over $8 million of federal grant money for demolition and has already expended nearly $5 million. Recent literature, as well as our interviews with local officials, further revealed the burden some local governments are experiencing due to an increase in the number of vacant and abandoned properties: A 2005 report estimated the direct municipal costs of an abandoned foreclosure in the City of Chicago to be $19,227—and, if it is a severe case with a fire, the cost can be as high as $34,199. The same study reported that the cost of boarding up a single-family home one time was $900 but noted that, because homes were typically boarded up multiple times, the true cost was $1,445. In a 2008 study, the City of Baltimore reported that police and fire services showed an annual increase of $1,472 for each vacant and unsafe property on a block. Code enforcement officials for a city in Florida reported that they spent over $120,000 to mow lawns of vacant properties in 2008; this was up from less than $30,000 in 2006 and prior years. Code enforcement officials for another city in Florida told us they have $850,000 in outstanding code invoices for boarding up or mowing lawns for abandoned properties.
Code enforcement officials for a county in Florida reported that, prior to 2007, the number of code enforcement cases against properties in foreclosure was not significant enough to warrant tracking; however, in 2008, after the department began to identify and track these properties because of the noticeable increase in citizen complaints, statistics revealed that 25 percent of all their cases involved properties in foreclosure—and that, as of May 2010, they had 443 active cases against properties in foreclosure. A Cleveland official reported an approximately $80,000 increase in personnel costs for code enforcement over the prior year. She said these costs were related to hiring additional staff to support existing staff with research, documentation, and court testimony. When local governments maintain or demolish properties, they typically may place liens against the properties for the associated costs. In some jurisdictions, these liens may have the same first-priority status as tax liens and may, therefore, be relatively easily recovered, but in other jurisdictions these liens may have lower priority. In one jurisdiction, we were told that code enforcement liens were wiped out when the foreclosure was completed. A case study of Chicago estimated that between 2003 and 2004 the city recovered only about 40 cents on each dollar it spent for demolition. Abandoned foreclosures also burden local governments with reduced property tax revenues. Local jurisdictions directly lose tax revenue from vacant and abandoned properties in two ways: (1) property taxes owed by the property owner sometimes go unpaid and are not recouped, and (2) the tax value of a property is lost when a structure is demolished. In addition, abandoned foreclosures contribute to falling housing values, which erode the property tax base. For example, researchers calculated that in 2006, the City of Cleveland lost over $6.5 million due to tax delinquency on vacant and abandoned structures, and over $409,000 because structures were demolished.
Moreover, one local official told us that every 1 percent decline in home values costs the City of Cleveland $1 million in tax revenue. Abandoned foreclosures also contribute to an increased demand for city services. As discussed, abandoned foreclosures result in an increased demand for code enforcement-related services—including demolition, boarding of windows, removing trash, mowing the lawn, and a range of other activities intended to keep the unit from becoming an eyesore. Abandoned foreclosures also result in a variety of other municipal costs, including increased policing and firefighting, building inspections, legal fees, and increased demand for city social service programs. Abandoned foreclosures also increase the difficulty of transferring the property to another owner, which can increase the potential for the property to contribute to problems within a community. If a borrower remains in the home or in contact with the servicer, title to the property can be transferred to a new owner through short sales or deed-in-lieu of foreclosure actions. If homeowners vacate their properties and cannot be reached, these alternative means of transferring title cannot occur. In these cases, however, the servicer can complete the foreclosure process, in which title is transferred to a new owner—either a third-party buyer or the lien holder, with the property then held in the lien holder’s or the servicer’s real estate-owned inventory. However, when the servicer abandons the foreclosure, this transfer of title does not occur.
Without this transfer, abandoned foreclosures may remain vacant for extended periods of time, with recent media and academic reports labeling such properties as being in “legal limbo” or having a “toxic title.” One academic we interviewed said abandoned foreclosures result in property titles that lack transparency and cannot be easily transferred; another academic told us that uncertainty about a property’s ownership and status may make it hard for neighborhood groups or cities to determine what actions can be taken to dispose of or sell such a property. According to a recent report by a national rating agency, most properties associated with charged-off loans will ultimately be claimed by municipalities for back taxes, which, according to stakeholders, may not be an efficient process. Abandoned foreclosures can also create confusion among borrowers over the status of their properties and their responsibilities for such properties. According to representatives of counseling agencies, community groups, and some of the homeowners we interviewed, borrowers are often surprised to learn that the servicer did not complete the foreclosure and take title to the house—and that they still own the property and are responsible for such things as maintenance, taxes, and code violations. A nonprofit law firm representative said that borrowers who thought that they had lost their homes through foreclosure were sometimes brought to housing court for code violations. For example, a court record from the City of Buffalo indicates that one individual appeared in court to address code violations 3 years after receiving a judgment of foreclosure. According to the record, after the judgment of foreclosure, there was no sale of the property. While in court, this individual claimed that she did not believe that she still owned the property.
Although they create various negative impacts on neighborhoods and communities, abandoned foreclosures have not significantly affected state and federal foreclosure-related programs because most of these programs try to prevent foreclosure and some apply only to properties still occupied by homeowners. In response to the surge in mortgage foreclosures that began in late 2006 and continues today, several states created task forces to address the crisis. According to a 2008 report by a national trade association, the main objective of almost every task force created as of March 2008 was to get practical help directly to “at-risk” homeowners, for example, by creating consumer hotlines and developing outreach and educational programs designed to encourage homeowners to get counseling. In addition, we spoke with a legislative analyst for a national organization who told us that over the past 3 years state legislatures have enacted many laws focusing on such topics as payment assistance and loan programs, regulating foreclosure scam artists, ensuring homeowners and tenants receive proper foreclosure notice, shortening or lengthening the foreclosure process, and implementing mediation or counseling programs. The federal government has also implemented several foreclosure-related programs, most of which focus on foreclosure prevention and require that borrowers live in their homes. For example, the federal Home Affordable Modification Program (HAMP) is a program designed to help borrowers avoid foreclosure and stay in their homes by providing incentives for servicers to perform loan modifications; however, HAMP requires as a precondition that borrowers currently live in their homes. Under HUD’s Neighborhood Stabilization Program (NSP), the term “abandoned” was originally defined as a property that had been foreclosed upon and was vacant for at least 90 days.
HUD expanded the definition to include properties where (a) mortgage, tribal leasehold, or tax payments are at least 90 days delinquent; (b) a code enforcement inspection has determined that the property is not habitable and the owner has taken no corrective actions within 90 days of notification of the deficiencies; or (c) the property is subject to a court-ordered receivership or nuisance abatement related to abandonment pursuant to state, local, or tribal law or otherwise meets a state definition of an abandoned home or residential property. Therefore, there is no longer a programmatic barrier preventing NSP grantees from acquiring abandoned foreclosures. On behalf of GAO, a national nonprofit organization e-mailed structured questions to 25 NSP grantees, including NSP 1 and NSP 2 grantees, and their subrecipients.

Various servicer practices may be contributing to the number of abandoned foreclosures. These practices include initiating foreclosure without obtaining updated property valuations and obtaining valuations that did not always accurately reflect property or neighborhood conditions or other costs, such as delinquent taxes or code violation fines. By not always obtaining updated property valuations at foreclosure initiation, servicers appeared to increase the potential for abandoned foreclosures to occur. As described earlier, officials from the six servicers we interviewed—which together service about 60 percent of the nation’s home mortgages—told us that after a certain period of loan delinquency, usually around 90 days, they make a determination about whether to initiate foreclosure. Representatives of servicers told us they take into account various information about a property when deciding whether to initiate foreclosure, and some servicers conduct an equity analysis on certain loans to determine if the expected proceeds from a sale will cover foreclosure costs.
However, the valuations used in these analyses might be outdated at the time of foreclosure initiation, and staff from four of the six servicers told us that they did not always obtain updated information on the value of the property at the time they conducted this analysis and initiated foreclosure. The representatives from one servicer told us that the company performs an equity analysis on loans in its own portfolio before foreclosure initiation. However, for loans serviced for Fannie Mae, Freddie Mac, or third-party investors, this servicer follows the applicable servicing agreement or guidance, which may not require such analyses or updated property valuations. Instead, the company initiates foreclosure automatically when one of these loans reaches a certain delinquency status. Only two of the six servicers we interviewed reported updating property valuations on all loans before initiating foreclosure. Even when servicers obtain updated property valuations, this information does not always reflect actual property or neighborhood conditions, which can also increase the likelihood of servicers commencing foreclosure but then abandoning it. Representatives of the six servicers we interviewed said that property inspections begin in the early stages of delinquency and continue on a regular basis, but that information collected during inspections—information relevant to the resale value of a property, such as vacancy status and property damage—is not used in developing property valuations. Most of the servicers we interviewed reported using automated valuation models (AVM) to estimate property values, which do not necessarily take property-specific conditions into consideration. Furthermore, servicers we interviewed said they do not incorporate information on property and neighborhood conditions obtained from property inspections into their valuations.
Simply using a BPO (broker price opinion) or AVM without consideration of up-to-date property or neighborhood conditions may result in abandoned foreclosures because the valuation may not reflect the actual resale value of the property or the expected proceeds from a foreclosure sale. Another servicer practice that appeared to increase the potential for an abandoned foreclosure was that servicers generally were not considering local conditions that can affect property values prior to initiating foreclosure. Our interviews with the six servicers indicated that they did not always adjust property valuations to take into consideration potential steep declines in value due to factors specific to neighborhoods or city blocks. Staff from most of the servicers we interviewed reported that in some areas a property that was occupied and well-maintained when foreclosure was initiated could become vacant, be vandalized, and decline in value. Similarly, local government officials said that homes with resale value could be stripped of raw building materials during the foreclosure process, leaving them practically worthless. As previously discussed, representatives of community groups and local governments told us that properties are sometimes vandalized within 24 hours of becoming vacant. In Detroit, for example, according to officials, property values can be seriously affected by vacancy due to vandalism and the rapid decay of vacant properties. Data from one property maintenance company contracted to inspect and secure homes undergoing foreclosure indicated that 29 percent of the properties it oversaw nationwide had some property damage in the 6 months from January to June 2010. In Detroit, about 54 percent of its properties had incurred damage. In addition, not considering other costs associated with a property, such as local taxes and the potential for code violation fines, before initiating foreclosure can increase the likelihood that a foreclosure will be abandoned.
For example, local taxes owed or code violations and fines can add significant costs to the foreclosure process. Servicers told us that they may abandon foreclosures because of the amount of tax owed on the property. Tax liens are commonly placed on delinquent properties when borrowers are unable to pay property taxes. Unattended or damaged properties can often accumulate local municipal code violation fines that also decrease the net proceeds the servicer will gain from completing a foreclosure. These fines vary widely, but in some cities fines may accrue while a property is in delinquency and foreclosure, and over time the assessed fines can exceed a property’s value. The unpaid taxes and code violation fees that may accumulate during foreclosure can encourage servicers to abandon the foreclosure because they reduce the net proceeds that the servicer would realize in completing it. In some cases, the circumstances that lead to servicers initiating but then abandoning a foreclosure appeared to be those that could not have been anticipated at the time the decision to initiate foreclosure was made. For example, property inspections and valuations usually include only information about the external conditions of properties, potentially leaving internal damage or conditions such as lead paint or contaminated drywall undetected. Addressing these internal problems could be costly. Unexpected fires or other natural disasters can also leave properties with very low values. If such damage is discovered or occurs after foreclosure was initiated, servicers may decide that completing the foreclosure is not warranted. When servicers do not have updated or complete information about property and neighborhood values and conditions before initiating foreclosure, the likelihood that they will commence and then abandon foreclosures increases.
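The equity analysis described above can be sketched in simplified form. This is an illustrative sketch only; the function names, dollar figures, and the damage-discount parameter are hypothetical assumptions for illustration, not any servicer's actual model.

```python
# Simplified sketch of a pre-foreclosure "equity analysis": compare the
# expected resale value of a property against the costs of completing
# foreclosure (foreclosure costs, delinquent taxes, code violation fines).
# All names and figures are hypothetical.

def expected_net_proceeds(estimated_value, foreclosure_costs,
                          delinquent_taxes, code_violation_fines,
                          expected_damage_discount=0.0):
    """Estimate what a servicer would net from completing a foreclosure.

    expected_damage_discount models the risk, discussed above, that a
    vacant property is stripped or vandalized during the process
    (e.g., 0.30 = a 30 percent loss of resale value).
    """
    adjusted_value = estimated_value * (1 - expected_damage_discount)
    return (adjusted_value - foreclosure_costs
            - delinquent_taxes - code_violation_fines)

def should_complete_foreclosure(*args, **kwargs):
    # A foreclosure expected to net less than zero is a candidate for
    # charge-off in lieu of foreclosure (an "abandoned" foreclosure).
    return expected_net_proceeds(*args, **kwargs) > 0

# A stale valuation with no condition adjustment suggests proceeding...
print(should_complete_foreclosure(60_000, 15_000, 5_000, 2_000))   # True
# ...but the same property, assuming a 70 percent value loss from
# stripping and decay once vacant, would net a loss.
print(should_complete_foreclosure(60_000, 15_000, 5_000, 2_000,
                                  expected_damage_discount=0.70))  # False
```

The sketch illustrates why outdated valuations matter: the same loan can look profitable to foreclose on paper yet produce a loss once accumulated taxes, fines, and vandalism-driven value declines are reflected.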
Representatives of servicers said that they did not always obtain updated valuations before initiating foreclosure because they did not think it was necessary or because they were not required to do so. Instead, they generally obtained more current information only after foreclosure initiation, such as when preparing for the foreclosure sale. In cases where this valuation indicates that the value of the property is insufficient to justify completing the foreclosure process, servicers generally stop the foreclosure and charge off the loan in lieu of foreclosure. However, by that time the property may already be vacant and negatively affecting the neighborhood. As previously discussed, our servicer data indicate that charge-offs in lieu of foreclosure that occurred after foreclosure was initiated were associated with a higher rate of vacancy than charge-offs that occurred prior to foreclosure initiation. Academics, local government officials, community groups, servicers, and others expressed mixed views on whether mortgage servicers have financial incentives to initiate foreclosure even in cases in which they are unlikely to complete the process. For example, accounting requirements for mortgage loans do not appear to provide incentives for servicers to initiate foreclosures but then not complete them. First, most mortgage loans that servicers are managing are being serviced on behalf of other owners. As a result, any accounting requirements applying to such loans do not affect the servicer’s financial statements because these loans are not the servicer’s assets. However, servicers that service loans for other owners do carry the expected value of the servicing income they earn on such loans on their own financial statements as an asset. The reported value of this servicing rights asset would be reduced if a serviced loan is charged off and no more servicing income is expected from it.
However, this reduction would occur regardless of whether foreclosure has been initiated. If the servicer of a mortgage loan is also the holder (owner) of the loan, accounting requirements also do not appear to provide an incentive to initiate foreclosure. For the six servicers from whom we obtained data, 7 percent of the loans were owned by the servicing institution, meaning accounting decisions made by the servicer directly affect the institution financially. For these loans, bank regulatory rules require servicers to mark the value of delinquent loans down to their collateral value (or charge off the loan) after the loan is 180 days past due, regardless of whether foreclosure has been initiated. As a result, servicers cannot avoid recognizing the loss by, for example, abandoning the foreclosure, because the loan’s loss of value is already reflected in their accounting statements. Furthermore, financial institutions holding loans in their own portfolios must develop reserve accounts for expected losses on their books. Thus, they have to anticipate any declines in property values for loans in their portfolio and start setting aside funds to cover any losses at specific points in the delinquency cycle. Whether the property is carried to foreclosure sale or charged off, the loss has already been reflected in their loan value accounts. For private-label securitized loans that are sold to private investors and serviced in pools, servicers also do not appear to have incentives to delay or abandon foreclosure due to investors’ potential motivation to postpone accounting for losses on those securities. According to OCC officials, a single charge-off for a loan held in a pool would not necessarily lead to a devaluation or write-down of the value of the overall pool of loans. In addition, they said that whether the value of a security is written down depends on several factors, including overall losses to the pool, liquidity, and interest rate changes.
Thus, investors have some discretion under accounting guidance in deciding when to write down securitized assets. Further, public accounting standards require investors holding mortgage-backed securities to set aside loss reserves and write down the value of impaired assets. Therefore, abandoning or postponing foreclosure completion would be unlikely to provide an advantage with respect to the security. Some academics and local government officials we interviewed were concerned that servicers may have an incentive to initiate foreclosures, even though they might later abandon the process, in order to continue profiting from servicing mortgages. However, servicers’ and experts’ descriptions of servicing practices called such incentives into question. Servicers can derive part of their revenue from imposing fees on borrowers who are past due with payments and do not need to forward this revenue to investors. Therefore, some stakeholders suggested servicers might initiate foreclosure in an effort to accrue late fees and other charges associated with servicing the loan during the foreclosure process. In addition, some stakeholders suggested that servicers might continue earning income from other financial interests they might hold on the property, such as a second-lien mortgage. However, five of the six servicers we interviewed reported that they stopped charging fees once a loan enters foreclosure because assessed fees are unlikely to be fully collected on loans in foreclosure. In addition, servicers might not continue yielding profits on second-lien mortgages because, according to a 2005 study, second liens were much less prevalent on subprime first-lien mortgages, which were often found in areas with very low housing values, such as Detroit and Cleveland, than in high-price areas, such as California. Finally, servicers and other experts told us that servicers do not have to initiate foreclosure in order to stop advancing payments on loans.
(See Sean Dobson and Laurie Goodman, Mortgage Modifications: Where Do We Go From Here, Amherst Securities Group LP, July 2010; and Charles A. Calhoun, The Hidden Risks of Piggyback Lending, Annandale, Va.: June 2005.) In addition, government and private mortgage insurance and guarantees require that foreclosure be completed before claims are paid. For example, FHA mortgage insurance and VA guarantees, which cover a portion of potential losses from loan defaults, require a claimable event, such as a foreclosure sale, short sale, or deed-in-lieu of foreclosure, before servicers can collect on a claim. Representatives of mortgage insurers also said that they could not pay an insurance claim on an abandoned foreclosure because the bank did not hold the title. Similarly, the GSEs may provide servicers incentives to complete foreclosures in order to receive reimbursements. Fannie Mae requires servicers to submit final requests for reimbursement of advances after the foreclosure sale and after any claims have been filed with other insurers or guarantors. Mortgage servicers’ foreclosure activities were not always a major focus of bank regulatory oversight, although federal banking regulators have recently increased their attention to this area, including the extent to which servicers were abandoning foreclosures. Various organizations have regulatory responsibility for most of the institutions that conduct mortgage servicing, but some of these institutions do not have a primary federal or state regulator. According to industry data, OCC—which oversees national banks—is the primary regulator for banks that service almost two-thirds of loans serviced by the top 50 servicers. The Federal Reserve oversees entities affiliated with bank holding companies or other state member banks that represented about 7 percent of these loans. Entities that are not chartered as or owned by a bank or bank holding company accounted for about 4 percent of the top 50 servicers’ volume.
Some states require mortgage servicers (including state-chartered banks) to register with the state banking department, according to state banking supervisors we interviewed. These supervisors also told us that most banks chartered in their states did not service mortgage loans. According to our analysis, only a few of the top 50 servicers were state-chartered banks that were not members of the Federal Reserve System. According to our interviews with federal banking regulators, mortgage servicers’ practices, including whether they were abandoning foreclosures, have not been a major focus of their supervisory guidance in the past. The primary focus in these regulators’ guidance is on activities undertaken by the institutions they oversee that create significant risk of financial loss for the institutions. Because a mortgage servicer is generally managing loans that are actually owned or held by other entities, the servicer is not exposed to losses if the loans become delinquent or if no foreclosure is completed. As a result, the extent to which servicers’ management of the foreclosure process is addressed in regulatory guidance and consumer protection laws has been limited and uneven. For example, guidance in the mortgage banking examination handbook that OCC examiners follow when conducting examinations of banks’ servicing activities notes that examiners should review the banks’ handling of investor-owned loans in foreclosure, including whether servicers have a sound rationale for not completing foreclosures on time or not meeting investor guidelines. In contrast, the guidance included in the manual Federal Reserve examiners use to oversee bank holding companies contained only a few pages related to mortgage servicing activities, including directing examiners to review the income earned from the servicing fee for such operations, but did not otherwise address foreclosure practices in detail.
In addition, until recently, the extent to which these regulators included mortgage servicing activities in their examinations of institutions was also limited. According to OCC and Federal Reserve staff, they conduct risk-based examinations that focus on the areas of greatest risk to their institutions’ financial positions as well as some other areas of potential concern, such as consumer complaints. Because the risks from mortgage servicing generally did not indicate a need for more detailed reviews of these operations, federal banking regulators had not regularly examined servicers’ foreclosure practices on a loan-level basis, including whether foreclosures were completed. For example, OCC officials told us their examinations of servicing activities were generally limited to reviews of the income that banks earn from servicing loans for others and did not generally include reviewing foreclosure practices. Staff from the federal banking regulators noted that no federal or state laws or regulations require that banks complete the foreclosure process; therefore, banks are not prohibited from abandoning foreclosures. In addition, many of the federal laws related to mortgage banking, such as the Truth in Lending Act (TILA), are focused on protecting borrowers at mortgage origination, and Federal Reserve officials reported that none of the federal consumer protection laws specifically addressed the foreclosure process. As a result, the Federal Reserve staff who conduct consumer compliance exams also have not focused on how servicers interact with borrowers during the default and foreclosure process. Further, OCC officials said that, even if examiners observed banks they supervised abandoning the foreclosure process, the practice would not negatively affect a bank’s overall rating for safety and soundness. These officials said that a bank’s need to protect its financial interest might override concerns about walking away from a home in foreclosure. 
However, in recognition of the ongoing mortgage crisis in the United States, staff from OCC and the Federal Reserve told us that their examiners have been focusing on reviewing servicers’ loan modification programs, including those of servicers participating in the federal mortgage modification program, HAMP. As potential problems with foreclosure-related processes and documentation at major servicers emerged, these two regulators have also increased their examination of servicer foreclosure practices. OCC staff responsible for examinations at one of the large national banks that conducts significant mortgage servicing activities told us that they had obtained data on loans that were charged off without foreclosure being pursued and found that the practice was very rare and typically involved loans on low-value properties. OCC examiners acknowledged that abandoned foreclosures—because of their association with neighborhood crime and blight—could pose reputation and litigation risks to the bank. For example, we found that some servicers and lenders have been sued by various municipalities over their servicing or lending activities. The Federal Reserve has also recently increased its attention to mortgage servicing among the institutions over which it has oversight responsibility. In the past, the Federal Reserve generally did not include nonbank subsidiaries of bank holding companies that conduct mortgage servicing in its examination activity because the activities of these entities were not considered material risks to the bank holding company. However, in 2007, the Federal Reserve announced a targeted review of consumer compliance supervision at selected nonbank subsidiaries that service loans. 
Additionally, in October 2009, the Federal Reserve began a loan modification initiative, including on-site reviews, to assess whether certain servicers under its supervisory authority—including state member banks and nonbank subsidiaries of bank holding companies—were executing loan modification programs in compliance with relevant federal consumer protection laws and regulations, individual institution policies, and government program requirements. In addition, as part of its ongoing consumer compliance examination program, the Federal Reserve incorporated loan modification reviews into regularly scheduled examinations of state member banks, as appropriate. Federal Reserve officials noted that as of October 2010 these reviews and examinations were still in progress; however, initial work identified two institutions that were engaging in abandoned foreclosure practices. Federal Reserve officials reported that, in general, no federal regulation or law explicitly requires that servicers notify borrowers when they decide to stop pursuing a foreclosure after the foreclosure process has been initiated. Nevertheless, Federal Reserve staff instructed the servicers to do so as a prudent banking practice. According to Federal Reserve officials, the institutions agreed to do so. Because abandoned foreclosures do not necessarily violate any federal banking laws, supervisors did not take any actions against the institutions. Other federal and state regulators that review servicers’ activities also reported having little insight into servicers’ foreclosure practices and decisions to abandon foreclosures, particularly for non-GSE loans, which account for the greatest number of abandoned foreclosures. Officials from the Securities and Exchange Commission (SEC), which receives reports on publicly traded residential mortgage-backed securities, told us that they did not examine servicers’ policies or activities for these securitized assets. 
Furthermore, SEC’s accounting review of publicly traded companies engaged in mortgage servicing included aggregate trends in foreclosure activity but not incomplete foreclosures on individual loans. While the Federal Housing Finance Agency’s (FHFA) Federal Property Manager’s Report includes data on charge-offs in lieu of foreclosure, FHFA also does not routinely examine whether Fannie Mae and Freddie Mac are abandoning foreclosures. Like the banking regulators, FHFA officials said they had focused most of their oversight on the institutions’ loan modification and pre-foreclosure efforts. In addition, the Federal Trade Commission (FTC) may also pursue enforcement actions against nonbank institutions that violate the FTC Act or consumer protection laws. However, FTC staff told us they did not think that either the unfair and deceptive acts and practices provision of the FTC Act or the Fair Debt Collection Practices Act would apply, as a general matter, to an institution that walked away from a home in foreclosure. State banking regulators that we interviewed said that they conduct little oversight of servicers’ foreclosure practices given the limited number of state-chartered banks that conduct mortgage servicing activities. However, several examiners and industry association officials we interviewed acknowledged the need to obtain further information about the foreclosure process and improve their examination process for nonbank mortgage servicers. Other entities that review servicers’ activities also do not review servicers’ foreclosure practices or decisions to abandon foreclosures. Representatives of private rating agencies that evaluate mortgage servicers told us that although they review servicers’ handling of loans in default and the overall average length of time servicers take to complete foreclosure, they do not track specific loans to see whether foreclosure was completed because an abandoned foreclosure would not by itself trigger a downgrade of a security’s rating. 
In addition, representatives of institutions that serve as trustees for large numbers of loans pooled in mortgage-backed securities (MBS) told us that they sought to ensure that servicers forwarded payments to investors and noted that trustees did not provide management oversight of servicers’ decisions on how to handle loans. We identified various actions that some communities are taking to reduce the likelihood that abandoned foreclosures will occur or to reduce the burden such properties create for local governments and communities. Communities dealing with abandoned foreclosures may benefit from implementing similar actions, but they may need to weigh the appropriateness of the various actions for their local circumstances, as these actions can require additional funding, have unintended consequences, and may not be suitable for all communities. In addition, these actions generally were designed to address vacant properties overall; therefore, they may not fully address the unique impacts of abandoned foreclosures. Local government officials, community group representatives, and academics told us that borrowers often leave their homes before the foreclosure sale even though they are entitled to stay in their homes at least until the sale. Although borrowers may leave for a variety of reasons, we consistently heard that many borrowers leave because they believe that servicers’ initial notices of delinquency and foreclosure initiation mean that they must immediately leave the property. For example, a representative of a counseling group in Chicago told us that many people, especially the elderly and non-native English speakers, do not understand the notices that they receive from servicers and think that they are being told to leave their homes. Some jurisdictions are taking steps to increase borrowers’ awareness of their rights during foreclosure through counseling. A variety of counseling and mediation resources are already available to borrowers. 
For example, HUD sponsors housing counseling agencies throughout the country to provide free foreclosure prevention assistance and provides referrals to foreclosure avoidance counselors. In addition, according to a national research group, at least 25 foreclosure mediation programs were in operation in 14 states across the country as of mid-2009 to encourage borrowers and servicers to work together to keep people in their homes and avoid foreclosure. Officials from local governments and community groups, servicers, and an academic noted that increasing the use, visibility, and resources of counseling efforts could provide avenues to educate borrowers about their rights to remain in their homes during the foreclosure process and prevent vacancies. To increase the visibility and use of counseling resources, the state of Ohio implemented a telephone hotline to help refer borrowers to counselors and a Web site to provide information about foreclosure. In addition, local officials have credited a recent law in Michigan with helping to educate borrowers about their rights during the foreclosure process. The Michigan law allows borrowers a 90-day delay in the initiation of foreclosure proceedings if they request a meeting with a housing counselor and a servicer representative to try to arrange a loan modification. Representatives of community groups, local governments, and servicers were generally supportive of efforts to educate borrowers about their rights during foreclosure, and a recent study has demonstrated the effectiveness of such counseling in keeping people in their homes. In our interviews, representatives of a servicer and a local government and a researcher noted that counseling could be more effective at educating borrowers about their rights than servicers’ efforts because borrowers might be more willing to talk to a counselor than to a bank representative. 
Representatives of a law firm also noted that local staff might reach more borrowers and achieve better results than bank representatives because local individuals have a better understanding of local conditions and homeowners could work with the same individual rather than with bank representatives who change with each contact. Community group and servicer representatives also noted that counseling is most effective at keeping people in their homes if it is offered soon after a borrower first becomes delinquent because borrowers are more likely to leave their homes later in the foreclosure process. In addition, a November 2009 study found that homeowners who received counseling were about 1.6 times more likely to get out of foreclosure and avoid a foreclosure sale—possibly allowing them to remain in their homes—than homeowners who did not receive counseling. Local community representatives noted that increased counseling may not completely prevent abandoned foreclosures for several reasons. First, counselors cannot reach every borrower needing assistance. For example, officials from a community group and counseling agencies said that some borrowers might not be aware that counseling is available or might be too embarrassed about their situation to seek assistance. Second, the quality of counseling may limit its effectiveness. Researchers noted that the quality of counseling can be uneven and that organizations that are not HUD-approved, or that run foreclosure rescue scams, could mislead borrowers about their rights. Third, counseling capacity is limited by funding; representatives of research and advocacy groups we interviewed noted that increased funding for counseling efforts would allow counseling agencies to expand and help more homeowners. Another action that some local governments are taking to address the problems of vacant properties, including abandoned foreclosures, is to require servicers to register vacant properties. 
As previously discussed, one of the major challenges confronting code enforcement officials is identifying those responsible for maintaining vacant properties. Vacant property registration systems can attempt to address this problem by requiring servicers to provide the city with specific contact information for each vacant property they service. According to a national firm that contracts with servicers to maintain properties, nearly 288 jurisdictions had enacted vacant property registration ordinances as of February 2010. Although the structures of these ordinances vary, researchers generally classify them into two types. The first type of system tracks all vacant and abandoned properties and their owners. For example, among the cities we studied, Baltimore, Maryland, has implemented this type of registration system. The second type of system attempts to hold the lender and servicer responsible for maintenance of vacant properties during the foreclosure process. According to the Fannie Mae and Freddie Mac uniform mortgage documents, although these mortgage contracts typically give servicers the right to secure abandoned properties and make repairs to protect property values, they do not necessarily obligate them to do so. The cities of Chula Vista, California; Cape Coral and Fort Myers, Florida; and Chicago, Illinois, for example, have implemented this second type of ordinance. New York state also enacted a similar law statewide. According to some local officials and researchers, the contact information in vacant property registration systems makes it easier for local code enforcement officials to identify the parties responsible for abandoned foreclosures, and holding mortgage owners accountable for vacant properties can reduce the negative impact of these properties on the community. 
For example, local officials we interviewed in some cities with vacant property registries said that most owners complied with their city’s registry requirements and noted that the registries had been effective at providing contacts for officials to call to resolve code violations on vacant properties. Several stakeholders, including local officials, researchers, and representatives of a community group, also recommended the type of vacant property ordinance that holds servicers accountable for maintaining vacant properties during foreclosure. They noted that these types of ordinances could give servicers needed incentives to keep up vacant properties to avoid incurring additional costs and could help them in determining whether to initiate foreclosure. Local officials and industry representatives told us that, while vacant property registration systems can help local governments identify some owners, they might not capture all owners, and some servicers found certain requirements overly onerous and beyond their legal authority to perform. Local officials in a couple of cities and one servicer representative told us that these systems might not capture all owners because those who did not want the responsibility of maintaining certain properties would choose not to register. Further, systems that do not require that properties be registered until after the foreclosure sale would not help officials identify those responsible for maintaining abandoned foreclosures. In addition, servicers’ representatives told us that complying with these ordinances can be burdensome. For example, servicers consider ordinances that require them to secure doors and windows with steel, install security systems, and perform capital improvements on vacant properties to be onerous, according to an industry association. Servicers also reported having difficulty tracking and complying with multiple systems and said that they would prefer a uniform system with consistent requirements. 
Further, servicers and other industry representatives we spoke with viewed servicers’ authority to perform work on properties they did not yet own as limited. Holding a mortgage on a property does not give the servicer the right of possession or control over the property. Therefore, servicers argue that they cannot be held liable for conducting work on properties because they are not the titleholders until after a foreclosure sale. For example, representatives of one servicer told us that the company would take steps to prevent a property from deteriorating but was cautious about going onto a property it did not own. In addition, community groups, researchers, and other industry analysts have expressed concerns that such laws could have the unintended consequence of encouraging servicers to walk away from properties before initiating foreclosure to avoid potential maintenance and related costs, which could have the same negative effects on neighborhoods and communities as abandoned foreclosures do now. State or local actions to streamline the foreclosure process for vacant properties could also reduce the number of abandoned foreclosures by decreasing servicers’ foreclosure costs and preserving the value of vacant properties. As we have seen, the length of the foreclosure process affects servicers’ foreclosure costs as well as the condition and value of a property. Some areas are implementing streamlining efforts. For example, a law was recently enacted in Colorado allowing servicers to choose a shortened statutory foreclosure process for vacant properties that provides for a foreclosure sale to be scheduled in half the time of the typical process, according to a state press release on the new law. In addition, some courts in Florida have created expedited foreclosure dockets for uncontested cases in order to move a higher number of cases forward in the process. 
Shortening the time it takes to complete foreclosure could result in less-decrepit properties that servicers could resell more easily and at a higher price than they might have otherwise—thereby encouraging servicers to abandon fewer foreclosures. However, some stakeholders raised concerns about these streamlining actions. First, servicers and other industry analysts noted that determining whether properties were actually vacant could be difficult. Second, shortening foreclosure times is contrary to the trend among state and local governments across the country to enact laws, such as foreclosure moratoriums, that extend foreclosure timelines. Therefore, some raised concerns about ensuring that homeowners had appropriate opportunities to work out a solution within a shortened time frame. Third, another potential unintended consequence is that in judicial states, shortening the time frame for foreclosing on vacant properties by moving these cases to the head of the queue could lengthen the time frames for other cases, increasing servicers’ carrying costs on those properties. Other jurisdictions have attempted to require servicers to complete foreclosures once they have initiated them. For example, staff in one court we visited told us the judge requires a foreclosure sale to be scheduled within 30 days after the court enters a foreclosure judgment. If servicers do not comply, they can be held in contempt of court, fined, and perhaps jailed. Many local officials and researchers we interviewed suggested that foreclosure cases should be dismissed, that servicers should face fines, or that servicers should lose their right to foreclose or take other actions on a property if they do not act on foreclosure proceedings or schedule a sale within a certain amount of time. 
These actions could reduce abandoned foreclosures because servicers would more thoroughly consider the benefits and costs of foreclosure before initiating the process, and once initiated, foreclosures would be completed in a timely manner. Others also said that these actions would quickly move properties out of the foreclosure process and into the custody of a servicer that local officials could then hold responsible for the property’s upkeep. However, others noted that such a requirement could result in missed opportunities to work out solutions with the borrower and that it could be difficult to enforce. For example, representatives of servicers and others told us that borrowers often sought such alternatives at the last minute before a foreclosure sale and that requiring servicers to complete all foreclosures would limit their ability to explore alternatives late in the foreclosure process. An academic and regulatory officials expressed concerns that servicers would incur additional expenses that they would not be able to recover if they had to complete sales and take ownership of properties when doing so was not in their best interest. In addition, regulatory staff cautioned that such a requirement could cause servicers to walk away from properties before initiating foreclosure. This type of action also would be difficult to implement in states with a statutory foreclosure process because public records in these states do not track foreclosures to the same degree. Local actions to establish reliable outlets through which servicers could easily and cheaply dispose of low-value properties could reduce the number of abandoned foreclosures by providing incentives for servicers to complete the process. As previously discussed, servicers told us that many abandoned foreclosures involved properties that would likely have been either too costly for servicers to take ownership of or unlikely to have generated sufficient sale proceeds. 
Taking foreclosed properties into their own real estate owned (REO) inventories can be costly for servicers, as they must continue to pay taxes and insurance, maintain a deteriorating property, and hire a broker to market the property for sale. According to a recent report, if servicers and their investors know that they will not be further burdened by costs for a property, they may be more willing to take title and transfer it to a government or nonprofit entity that can begin moving the property back into productive use. The use of land banks is one alternative that some jurisdictions are attempting to use to address problems arising from large numbers of foreclosures and vacant properties. Land banks are typically governmental or quasi-public entities that can acquire vacant, abandoned, and tax-delinquent properties and convert them to productive uses, hold them in reserve for long-term strategic public purposes such as creating affordable housing, parks, or green spaces, or demolish them. Land banks can reduce the incidence of abandoned foreclosures by providing servicers a way to dispose of low-value properties that they might otherwise abandon. Sales or donations to land banks could help servicers reduce their foreclosed property inventories. For example, Fannie Mae and the Cuyahoga County Land Reutilization Corporation have an agreement whereby, on a periodic basis, Fannie Mae sells pools of very low-value properties to the land bank for $1, plus a contribution toward the cost of demolition. This agreement allows Fannie Mae to reliably dispose of pools of properties in a recurring transaction at predefined terms. Land bank officials from Cuyahoga County noted that they are in the process of negotiating similar agreements with several large servicers. 
Once it has acquired the properties, a land bank can help stabilize neighborhoods, such as by reducing excess and blighted properties through demolition or by transferring salvageable properties to local nonprofits for redevelopment. According to recent research, the Genesee County Land Bank in Flint, Michigan, has been credited with acquiring thousands of abandoned properties, developing hundreds of units of affordable housing, and, through its demolition program, catalyzing an increase of more than $100 million in community property values between 2002 and 2005. Although land banks can help reduce abandoned foreclosures or their negative effects, our interviews revealed potential challenges in implementing these banks. First, many of the local government officials we interviewed noted that land banks did not have enough resources to manage a large volume of properties. Land banks may be dependent on local governments for funding, and without a dedicated funding source it may be difficult for land banks to engage in long-term strategic planning. However, recently created land banks, such as those in Genesee and Cuyahoga counties, have developed innovative funding mechanisms that do not depend on appropriations from local governments. In addition, some mentioned that contributions from servicers—such as the agreement between Fannie Mae and the Cuyahoga County Land Reutilization Corporation—could help defray land banks’ property carrying costs. Second, land banks may be limited in their authority to acquire or dispose of properties. For example, by design land banks tend to passively acquire abandoned properties with tax delinquencies and convert them to new productive uses. However, land banks can also be designed to actively and strategically acquire properties from multiple sources. The Cuyahoga County Land Reutilization Corporation, for example, has the authority to strategically acquire properties from banks, GSEs, federal or state agencies, and tax foreclosures. 
Third, some municipalities face political challenges in establishing land banks, or local officials question whether they are needed. For example, according to an advisor to local governments on establishing land banks and a representative of a community group, the Maryland state legislature authorized the creation of a land bank in Baltimore, but its implementation fell through because of political differences at the city level. Further, some local officials we interviewed in Florida did not think land banks were needed in their areas because they expected the housing market to recover so that vacancies would not be a long-term problem. Similar to land banks, other methods for cities to acquire properties before or following foreclosure could also provide incentives for servicers to complete the foreclosure process for low-value properties rather than abandoning it. Some cities have negotiated specialized sale transactions with Fannie Mae and HUD. For example, HUD recently announced a partnership with the National Community Stabilization Trust (NCST) and leading financial institutions that account for more than 75 percent of foreclosed property inventory to provide selected state and local governments and nonprofit organizations the first opportunity to purchase vacant properties quickly, at a discount, and before they are offered on the open market. In addition, some cities have worked with Fannie Mae to purchase foreclosed and low-value properties. According to Fannie Mae, the City of St. Paul, Minnesota, has purchased 45 properties from the entity and can review Fannie Mae’s available properties and submit offers for pools of properties before they are marketed. 
And, according to a representative of a national community development organization, with the broadened definitions of abandoned and foreclosed properties under the Neighborhood Stabilization Program (NSP), local governments and other grantees will be able to work with servicers earlier in the foreclosure process to acquire such properties through short sales, for example, which could discourage abandoned foreclosures. For instance, one organization in Oregon is pursuing the purchase of notes prior to foreclosure using some of the state’s Hardest Hit Fund money, which would save the servicer the costs of initiating and completing foreclosure. However, the ability of these types of programs to fully address the issue of abandoned foreclosures may be limited. For example, local officials and researchers said cities’ capacity to receive donations or acquire properties was limited because they did not have enough resources to manage properties. According to recent research, capacity constraints prevent most community development organizations from redeveloping enough vacant homes to reverse the decline of neighborhood home values. In addition, according to industry observers and HUD and local government officials, local governments have not pursued many pre-foreclosure acquisitions, such as short sales and note sales, because these can be time-consuming and technically difficult to complete. The overall estimated number of abandoned foreclosures nationwide is very small. However, the communities in which they are concentrated often experience significant negative impacts, as these properties produce vacant homes that can be vandalized, reduce surrounding neighborhood property values, and burden local governments with the costs associated with maintenance, rehabilitation, or demolition. 
Given the large number of homeowners experiencing problems in paying their mortgages and the negative impacts on communities when properties become vacant, avoiding additional abandoned foreclosures would help reduce the further problems that another vacant and uncared-for property can create for communities already struggling with the impacts of the current mortgage crisis. Various servicer practices appear to be contributing to the potential for additional abandoned foreclosures. First, no requirement currently exists for mortgage servicers to notify borrowers facing foreclosure of their right to continue to occupy their properties during this process or of their responsibilities to pay taxes and maintain their properties until any sale or other title transfer activity occurs, and regulatory officials told us that they were not sure they had the authority to require servicers to do so. This lack of awareness among borrowers about their rights and responsibilities contributes to the problems associated with abandoned foreclosures. With such information, fewer borrowers might abandon their homes, reducing the problems that vacancies create for neighborhoods, their surrounding communities, and the local government of the community in which the property exists. Second, no requirement exists for servicers to notify the affected local government if they abandon a foreclosure. Without such notices, local government officials often are unaware of properties that are at greater risk of damage and that create potential problems for the surrounding neighborhood. With such information, local governments could move more quickly to identify actions that could ensure that such properties are moved to more productive uses. Third, servicers are not always obtaining updated property value information that considers local conditions that can affect property values when initiating foreclosure. 
As a result, servicers are more likely to initiate foreclosures only to abandon them later, after learning that the likely proceeds from the sale of the property would not cover their costs. If servicers had more complete and accurate information on lower-value properties that were more at risk for such declines in value, they might determine that foreclosure is not warranted for some properties before initiating the process. Having servicers improve the information they use before initiating a foreclosure could result in fewer vacant properties that cause problems for communities. To help homeowners, neighborhoods, and communities address the negative effects of abandoned foreclosures, we recommend that the Comptroller of the Currency and the Chairman of the Board of Governors of the Federal Reserve System take the following four actions: require that the mortgage servicers they oversee notify borrowers when they decide to charge off loans in lieu of foreclosure and inform borrowers of their right to occupy their properties until a sale or other title transfer action occurs, their responsibilities to maintain their properties, and their continuing obligation to pay the debt and taxes owed; require that the mortgage servicers they oversee notify local authorities, such as tax authorities, courts, or code enforcement departments, when they decide to charge off a loan in lieu of foreclosure; and require that the mortgage servicers they oversee obtain updated property valuations in advance of initiating foreclosure in areas associated with high concentrations of abandoned foreclosures. As part of taking these actions, the Comptroller of the Currency and the Chairman of the Board of Governors of the Federal Reserve System should determine whether any additional authority is necessary and, if so, work with Congress to ensure they have the authority needed to carry out these actions. 
We requested comments on a draft of this report from the Board of Governors of the Federal Reserve System, Department of Housing and Urban Development, Department of the Treasury, Department of Veterans Affairs, Fannie Mae, Federal Deposit Insurance Corporation, Federal Housing Finance Agency, Freddie Mac, Federal Trade Commission, Office of the Comptroller of the Currency, and Securities and Exchange Commission. We received technical comments from the Federal Reserve, FDIC, FHFA, FTC, OCC, and OTS, which we incorporated where appropriate. The Comptroller of the Currency did not comment on the recommendations addressed to him. We also received written comments from Treasury and the Federal Reserve that are presented in appendices II and III. The Acting Assistant Secretary for Financial Stability at the Department of the Treasury noted that, although the number is small, abandoned foreclosures are a serious problem that underscores the importance of holding servicers accountable. The Director of the Division of Consumer and Community Affairs at the Board of Governors of the Federal Reserve System agreed with our findings but neither agreed nor disagreed with our recommendations. Instead, the Director’s letter described ongoing actions the Federal Reserve is taking to address these issues and noted that the agency is concerned about the effects abandoned foreclosures may have in communities where they are concentrated. In response to our recommendation that the Federal Reserve require the servicers it oversees to notify borrowers that their loans are being charged off in lieu of foreclosure, the Director’s letter stated that the Federal Reserve agrees that such notification represents a responsible and prudent business practice and that it will advise the institutions it supervises to notify affected borrowers in the event of abandoned foreclosures. 
While this would ensure that borrowers are notified in cases where examiners identify instances of abandoned foreclosures, we believe that a more affirmative action by the Federal Reserve to communicate this expectation to all servicers it supervises would be more effective at reducing the impact of abandoned foreclosures on homeowners. Regarding our recommendation that the Federal Reserve require mortgage servicers to notify local authorities when loans are being charged off in lieu of foreclosure, the Consumer and Community Affairs Division Director stated that the Federal Reserve expects servicers to comply with any local laws requiring registration of vacant properties. While this would ensure that local authorities are notified in communities with such laws, we reiterate that the Federal Reserve should take steps to ensure that the servicers it oversees notify local authorities that would likely be in a position to mitigate the impact of an abandoned property, such as tax authorities or code enforcement departments, in all areas, not just those with existing vacant property registration systems. Doing so would help all communities obtain information that could help them better address the potential negative effects of abandoned foreclosures. We also encourage the Federal Reserve, along with other banking regulators with responsibilities to oversee mortgage servicers, to work with Congress to seek any additional authority needed to implement such a requirement. In response to our recommendation that the Federal Reserve require servicers to obtain updated property valuations in advance of initiating foreclosure in certain areas, the Director’s letter notes that they agree on the importance of servicers having the most up-to-date information before taking such actions but that servicers’ ability to obtain optimal information could be limited. 
Even without the ability to conduct interior inspections of properties, servicers could still take additional steps to improve the accuracy of their valuations prior to initiating foreclosure. We acknowledge that updating property valuations can be challenging, which is why our recommendation encourages a risk-based approach to identifying properties for which an updated valuation could help servicers make a more well-informed decision about initiating foreclosure. The Director’s letter also cites existing Federal Reserve guidance outlining expectations for obtaining property valuations, which, according to Federal Reserve staff, applies to actions that institutions should take before and after they have acquired properties through foreclosure. According to this guidance, an individual who has appropriate real estate expertise and market knowledge should determine whether an existing property valuation is valid or whether a new valuation should be obtained because of local or property-specific factors, including the volatility of the local market, lack of maintenance on the property, or the passage of time, among others. Having the Federal Reserve take further steps to ensure that servicers understand and implement this guidance and evaluate properties in advance of initiating foreclosure would likely help to reduce the prevalence of abandoned foreclosures as well. We are sending copies of this report to interested congressional committees, the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Federal Housing Finance Agency, Office of the Comptroller of the Currency, Office of Thrift Supervision, Department of Housing and Urban Development, Department of the Treasury, Department of Veterans Affairs, and Securities and Exchange Commission, and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. This report focuses on the prevalence, causes, and effects of abandoned foreclosures. Specifically, this report addresses (1) the nature and prevalence of abandoned foreclosures, including how they occur; (2) the impact of abandoned foreclosures on communities and state and federal efforts to mitigate the effects of foreclosure; (3) certain practices that may contribute to mortgage servicers initiating but not completing foreclosures and the extent of federal regulatory oversight of mortgage foreclosure practices; and (4) the various actions some communities are taking to reduce abandoned foreclosures and their impacts. To determine the nature and prevalence of abandoned foreclosures—where servicers initiated but decided not to complete foreclosure and the property is vacant—we analyzed mortgage loan data from January 2008 to March 2010 reported to us by selected servicers and two government-sponsored enterprises (GSEs). We obtained aggregated and loan-level data from six servicers—including large servicers and those that specialize in servicing nonprime loans—Fannie Mae, and Freddie Mac on loans that were categorized as charge-offs in lieu of foreclosure (loans that were fully charged off instead of initiating or completing a foreclosure). After eliminating overlapping loans, the institutions contributing data to our sample collectively account for nearly 80 percent of all first-lien mortgages outstanding. The database we have assembled is unique and, therefore, difficult to cross-check against other known sources to verify its reliability. 
Because we were able to cross-check the loan-level information provided by the GSEs with official reports submitted by the Federal Housing Finance Agency (FHFA) to Congress, we believe that these data are sufficiently reliable for our reporting purposes. However, because some of the servicers compiled the requested information differently or reported information that is not a part of their normal data collection and retention processes, our dataset contains varying degrees of inconsistency, missing data, and other issues. In reviewing these data, we found a number of concerns with some elements of the database and some sources of the data. For example, we believe that some servicers (1) submitted data that included second liens, (2) submitted data containing elements that appeared to be irregular, or (3) may not have provided the total charge-offs in lieu of foreclosure associated with their servicing portfolios. While the number of potential second liens was not significant, especially among those that we identified as abandoned foreclosures, it is difficult to know with certainty how the remaining issues affected our results, including the descriptive statistics we report. For this reason, we have characterized our results in a manner that mitigates the reliability concerns and emphasizes the uncertainty regarding the total number of abandoned foreclosures in the United States. Moreover, we conducted a variety of tests on these data. For example, we were able to use GSE data as a reliability check on some elements of the servicer database. We also cross-checked some of the properties in our database against property tax records for a portion of the data for Baltimore and Chicago. We were able to visually inspect some properties in a few cities. Given these and other steps we have taken, we believe the data are sufficiently reliable for the purposes used in this study. We used two methods to code the data as vacant or occupied in our database. 
First, the servicers provided data on whether the property was vacant at the time the loan was charged off in lieu of foreclosure. We found these data to be reliable based on cross-checks with property tax records and visual inspection for a small sample of the database. However, 32 percent of this field was either blank or the servicer indicated that occupancy status was unknown. Moreover, an occupied property may eventually become vacant weeks or even months after a charge-off in lieu of foreclosure. Therefore, we augmented this information by using a second method: determining occupancy status using U.S. Postal Service (USPS) administrative data on address vacancies. These data represent the universe of all vacant addresses in the United States. We obtained lists of vacant properties from USPS in 6-month increments from June 30, 2008, through June 30, 2010. The USPS codes a property as vacant if there has been no mail delivery for 90 days. The data also included properties the USPS codes as “no-stat” for urban areas. A property is considered a “no-stat” if it is under construction, demolished, blighted, or otherwise identified by a carrier as not likely to become active for some time. We matched these USPS data on address vacancies to the actual addresses in our loan database. Therefore, we considered a property vacant if it was either coded as vacant at the time of charge-off in lieu of foreclosure by the servicer or coded as vacant based on the vacancy lists obtained from USPS. Users of the report should note the difficulty of determining vacancy and that our approach may have resulted in an understatement or overstatement of the number of vacant properties in our sample. 
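The two-part vacancy rule described above can be sketched in code. This is a minimal, hypothetical illustration: the field names, the address normalization, and the sample records are assumptions made for demonstration, not GAO's actual data or matching procedure.

```python
import re

def normalize_address(addr):
    """Uppercase, strip punctuation, and collapse whitespace so that minor
    formatting differences do not defeat the address match."""
    addr = re.sub(r"[^\w\s]", "", addr.upper())
    return re.sub(r"\s+", " ", addr).strip()

def is_vacant(loan, usps_vacant_addresses):
    """A property counts as vacant if the servicer coded it vacant at
    charge-off OR its address appears on a USPS vacancy/no-stat list."""
    if loan.get("occupancy_at_chargeoff") == "vacant":
        return True
    return normalize_address(loan["address"]) in usps_vacant_addresses

# Illustrative records: one servicer-coded vacancy, one USPS match despite
# an unknown servicer code, and one occupied property with no USPS match.
usps_list = {normalize_address("123 Main St., Detroit, MI")}
loans = [
    {"address": "9 Oak Ave, Cleveland, OH", "occupancy_at_chargeoff": "vacant"},
    {"address": "123 MAIN ST DETROIT MI", "occupancy_at_chargeoff": "unknown"},
    {"address": "55 Elm Rd, Buffalo, NY", "occupancy_at_chargeoff": "occupied"},
]
vacant_count = sum(is_vacant(loan, usps_list) for loan in loans)  # 2 of 3
```

Normalizing both sides of the match is one simple way to reduce the address-format mismatches the report notes as a limitation, though it cannot recover incomplete addresses.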
In particular, determining vacancy by matching to USPS data has limitations, including (1) long lags before vacancy is determined, (2) mail carrier delays in reporting vacancies, (3) coding of seasonal and recreational properties as vacant, and (4) matching errors due to differences in address formats or incomplete addresses in the loan file. Due to privacy concerns, we were not able to leverage USPS expertise to ensure a higher-quality match based on lists that included all known delivery points. As a result, our analysis will miss any property that was demolished upon the determination of vacancy or any property deemed a “no-stat” in rural areas. Because of the 90-day lag in determining vacancy and the fact that we are dealing with properties from 2008 to 2010 largely in major metropolitan areas, this is not likely to have a significant impact on our estimates of vacant properties. It should be noted that the data collected by the USPS are designed to facilitate the delivery of mail rather than to make definitive determinations about occupancy status. For example, USPS residential vacancy data do not differentiate between homeowner and rental units, nor do they identify seasonal or recreational units. Once vacancy is determined and the number of abandoned foreclosures is estimated, our projections of the prevalence of abandoned foreclosures in the United States are based on an extrapolation designed to highlight the uncertainty in the results. While we estimated the total number of abandoned foreclosures directly for a large portion of the mortgage market, we simulated the total for the remaining mortgage loans not covered in our sample based on assumptions about their characteristics. To form estimates of prevalence, we conducted several analyses. First, we formed base prevalence estimates using information from the servicer and GSE databases alone. Second, we combined the servicer and GSE databases to produce estimates of prevalence based on information contained in both databases. 
Third, we made a determination of the possible error rate in determining vacancy through various runs of our matching analysis against USPS data and examination of the output. Lastly, we conducted a series of simulations to extrapolate our findings to the 20 percent of the mortgage market not covered in our database and to capture the uncertainty inherent in our data. Although the loans reflected in this report represent servicers that service a large percentage of the overall mortgage industry, they likely do not represent a statistically random sample of all charge-offs in lieu of foreclosure. Rather than assume the large sample can be generalized and produce a point estimate with a confidence interval, we simulated the likely number of abandoned foreclosures for the remaining loans under a number of different assumptions about the characteristics of the population. For example, in some runs we assumed a 10 percent matching error rate and that the remaining servicers resemble some combination of the subprime specialty lenders and the large servicers in our sample. In some cases we assumed no error in our matching analysis but formed our estimates after eliminating a servicer whose data raised some reliability concerns. Lastly, we produced estimates combining elements of both of these sets of assumptions. In extrapolating the findings from our sample, we provided a range of estimates that reflects the fact that the characteristics of these loans may differ from the remaining population of mortgages, as well as our concerns over data reliability and potential matching error in determining vacancy. We believe these simulations properly characterized the sources and nature of the uncertainty in the results. We also acknowledged, throughout the report, cases in which data issues may have affected the results. To supplement this data analysis and to determine the impacts of incomplete foreclosures on communities and homeowners, we conducted case studies and a literature review. 
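As a rough illustration of this kind of extrapolation, the sketch below scales a sample-based count to the uncovered share of the market over a small grid of assumptions, producing a range rather than a single point estimate. The counts, rate ratios, and error rates are invented placeholders, not GAO's actual figures or simulation design.

```python
def extrapolate(sample_count, sample_share, rate_ratio, match_error):
    """Scale the count observed in the sampled share of the market to the
    uncovered remainder, assuming uncovered loans produce abandoned
    foreclosures at `rate_ratio` times the sample's rate, then discount
    the combined total for an assumed address-matching error rate."""
    uncovered = sample_count * (1 - sample_share) / sample_share * rate_ratio
    return (sample_count + uncovered) * (1 - match_error)

# Evaluate the extrapolation over a grid of assumptions: the uncovered
# 20 percent of the market behaves like large servicers (lower rate),
# like the sample overall, or like subprime specialists (higher rate),
# crossed with an assumed matching error of 0 or 10 percent.
sample_count, sample_share = 10_000, 0.80  # placeholder inputs
estimates = [
    extrapolate(sample_count, sample_share, ratio, err)
    for ratio in (0.5, 1.0, 2.0)
    for err in (0.0, 0.10)
]
low, high = min(estimates), max(estimates)  # report a range, not a point
```

Reporting the minimum and maximum over the assumption grid mirrors the report's choice to emphasize uncertainty rather than present a single generalizable estimate.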
We selected 12 locations to provide a range of states with judicial and statutory foreclosure processes, from different regions of the country, and with variations in local economic circumstances and responses to abandoned foreclosures. Our case study locations were Atlanta, Georgia; Baltimore, Maryland; Buffalo, New York; Chula Vista, California; Chicago, Illinois; Cleveland, Ohio; Detroit, Michigan; Lowell, Massachusetts; and Cape Coral, Fort Myers, Manatee County, and Hillsborough County, Florida. We conducted in-person site visits or phone calls with city and county officials, community development organizations, academic researchers, foreclosure assistance providers, and state banking supervisors in these locations to gain perspectives on the impact and prevalence of abandoned foreclosures in each location. Although we selected the case study locations to provide broad representation of conditions geographically and by type of foreclosure process, these locations may not necessarily be representative of all localities nationwide. As a result, we could not generalize the results of our analysis to all states and localities. In two of the locations we visited, officials provided us with pictures and examples of abandoned foreclosures and vacant properties. In Detroit, Baltimore, and Florida, we visited selected vacant and abandoned properties and took pictures to document property conditions. After the conclusion of our fieldwork, we analyzed the information obtained during the interviews to identify common themes and responses. To supplement our case study interviews, we reviewed relevant journal articles, reports, law review articles, and other literature on the impacts of vacant and abandoned properties. We consulted with internal methodologists to ensure that any literature we used as support for our findings was methodologically sound. 
To determine what impacts abandoned foreclosures were having on state foreclosure mitigation efforts, we reviewed the findings and recommendations of several state foreclosure task forces and interviewed staff from a national policy research organization that tracks state foreclosure-related legislation. We also contacted the housing finance agencies in the 10 states that were determined as of March 2010 to have been hardest hit by the foreclosure crisis. These states received funding from the Department of the Treasury through its Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (HFA Hardest-Hit Fund), and included Arizona, California, Florida, Michigan, Nevada, North Carolina, Ohio, Oregon, Rhode Island, and South Carolina. To determine what impacts abandoned foreclosures were having on federal foreclosure mitigation efforts, we reviewed current federal foreclosure efforts and obtained information from Neighborhood Stabilization Program (NSP) grantees. The current federal foreclosure efforts we reviewed include the Home Affordable Modification Program (HAMP), Federal Housing Administration HAMP, Veterans Affairs HAMP, Second Lien Modification Program, Home Affordable Refinance Program, Home Affordable Foreclosure Alternatives Program, Housing Finance Agency Innovation Fund for the Hardest-Hit Housing Markets, Hope for Homeowners, Hope Now, Mortgage Forgiveness Debt Relief Act and Debt Cancellation, and the Neighborhood Stabilization Program. In conjunction with a separate GAO review of the first phase of the Neighborhood Stabilization Program (NSP 1), we interviewed officials from 12 of the 309 NSP 1 grantees, selected based on factors including the magnitude of the foreclosure problem in their area, geographic location, and progress made in implementing the program. 
The grantees were Orange County, Lee County, and City of Tampa (Florida); State of Nevada, Clark County, City of Las Vegas, City of North Las Vegas, and City of Henderson (Nevada); State of Indiana, City of Indianapolis, and City of Fort Wayne (Indiana); and City of Dayton (Ohio). Additionally, we worked with a national nonprofit organization to obtain written responses from an additional 25 NSP 1 and NSP 2 grantees and subrecipients from across the country to structured questions on the extent to which abandoned foreclosures have affected their efforts to acquire properties. These grantees may not necessarily be representative of all grantees. As a result, we could not generalize the results of our analysis to all NSP grantees. To identify the reasons financial institutions decide not to complete foreclosures, we interviewed six servicers, including some of the largest and some that specialize in subprime loans. These servicers represented 56 percent of all mortgages outstanding. We also analyzed Fannie Mae and Freddie Mac policies and procedures for servicers in handling foreclosures and compared them with other guidance servicers follow, such as pooling and servicing agreements (PSAs). We did not do a systematic analysis of a sample of PSAs ourselves; rather, we relied on interviews with servicers and academics who research PSAs, relevant literature, and reports to better understand how the terms of PSAs might influence servicers’ decisions to pursue or abandon foreclosure under different circumstances and how losses associated with delinquency and foreclosure are accounted for. Thus, the descriptions contained in this report reflect the opinions of these academics and authors and pertain only to the specific PSAs that they provided to us or discussed in their reports. While some terms may be similar across PSAs, each PSA is a contract between two parties, the trust and the servicer, and its terms apply only to those parties. 
We reviewed federal regulatory guidance that covers the examination process for reviewing institutions’ foreclosure and loss reserve processes. We also reviewed whether abandoned foreclosures may violate consumer protection laws such as the Fair Debt Collection Practices Act and the Federal Trade Commission Act (Unfair or Deceptive Acts or Practices). In addition, we interviewed representatives of the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, Office of Thrift Supervision, Department of Housing and Urban Development, Department of Veterans Affairs, and Securities and Exchange Commission. To determine what actions have been taken or proposals offered to address abandoned foreclosures, we reviewed academic literature and interviewed academics, representatives of nonprofit organizations, local, state, and federal officials, and other industry participants. We also obtained information about the advantages and disadvantages of these actions through our literature review and interviews. We summarized these potential actions and conducted a content analysis of interviewee viewpoints on their advantages and disadvantages. We conducted this performance audit from December 2009 through November 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As part of our assessment of how abandoned foreclosures (properties on which a foreclosure has been initiated but not completed and that are vacant) might affect federal foreclosure-related programs, we reviewed several current programs and their eligibility requirements. 
Most programs listed below were designed to help homeowners avoid foreclosure and require that those who receive assistance be owner-occupants of their homes. The following information appears as interactive content in the body of the report when viewed electronically. The content associated with various states on the map describes housing market conditions that likely explain the elevated levels of abandoned foreclosures in three different groups of states. The content appears in print form below. This categorization is based in part on judgment and trends in the data for the MSAs with the most abandoned foreclosures in these states. Because other researchers may posit alternative categorizations that also fit the data, and because other types of abandoned foreclosures exist, this analysis should not be considered definitive. In addition to the contact named above, Cody Goebel (Assistant Director); Emily Chalmers; William R. Chatlos; Kate Bittinger Eikel; Lawrance Evans, Jr.; Simon Galed; Jeff R. Jensen; Matthew McHale; Courtney LaFountain; Tim Mooney; Marc Molino; Jill Naamane; Rhiannon Patterson; Linda Rego; Jeff Tessin; and Jim Vitarello made key contributions to this report.
Entities responsible for managing home mortgage loans--called servicers--may initiate foreclosure proceedings on certain delinquent loans but then decide to not complete the process. Many of these properties are vacant. These abandoned foreclosure--or "bank walkaway"--properties can exacerbate neighborhood decline and complicate federal stabilization efforts. GAO was asked to assess (1) the nature and prevalence of abandoned foreclosures, (2) their impact on communities, (3) practices that may lead servicers to initiate but not complete foreclosures and regulatory oversight of foreclosure practices, and (4) actions some communities have taken to reduce abandoned foreclosures and their impacts. GAO analyzed servicer loan data from January 2008 through March 2010 and conducted case studies in 12 cities. GAO also interviewed representatives of federal agencies, state and local officials, nonprofit organizations, and six servicers, among others, and reviewed federal banking regulations and exam guidance. Among other things, GAO recommends that the Federal Reserve and Office of the Comptroller of the Currency (OCC) require servicers they oversee to notify borrowers and communities when foreclosures are halted and to obtain updated valuations for selected properties before initiating foreclosure. The Federal Reserve neither agreed nor disagreed with these recommendations. OCC did not comment on the recommendations. Using data from large and subprime servicers and government-sponsored mortgage entities representing nearly 80 percent of mortgages, GAO estimated that abandoned foreclosures are rare--representing less than 1 percent of vacant homes between January 2008 and March 2010. GAO also found that, while abandoned foreclosures have occurred across the country, they tend to be concentrated in economically distressed areas. Twenty areas account for 61 percent of the estimated cases, with certain cities in Michigan, Ohio, and Florida experiencing the most. 
GAO also found that abandoned foreclosures most frequently involved loans to borrowers with lower-quality credit--nonprime loans--and low-value properties in economically distressed areas. Although abandoned foreclosures occur infrequently, the areas in which they are concentrated are significantly affected. Vacant homes associated with abandoned foreclosures can contribute to increased crime and decreased neighborhood property values. Abandoned foreclosures also increase costs for local governments that must maintain or demolish vacant properties. Because servicers are not required to notify borrowers and communities when they decide to abandon a foreclosure, homeowners are sometimes unaware that they still own the home and are responsible for paying the debt and taxes and maintaining the property. Communities are also delayed in taking action to mitigate the effects of a vacant property. Servicers typically abandon a foreclosure when they determine that the cost to complete the foreclosure exceeds the anticipated proceeds from the property's sale. However, GAO found that most of the servicers interviewed were not always obtaining updated property valuations before initiating foreclosure. Fewer abandoned foreclosures would likely occur if servicers were required to obtain updated valuations for lower-value properties or those in areas that were more likely to experience large declines in value. Because they generally focus on the areas with the greatest risk to the institutions they supervise, federal banking regulators had not typically examined servicers' foreclosure practices, such as whether foreclosures are completed; however, given the ongoing mortgage crisis, they have recently placed greater emphasis on these areas. GAO identified various actions that local governments or others are taking to reduce the likelihood or mitigate the impacts of abandoned foreclosures. 
For example, community groups indicated that increased counseling could prevent some borrowers from vacating their homes too early. Some communities are requiring servicers to list properties that become vacant on a centralized registry as a way to identify properties that could require increased attention. In addition, by creating entities called land banks that can acquire properties that servicers otherwise cannot sell, some communities have provided increased incentives for servicers to complete rather than abandon foreclosures. However, these actions can require additional funding, can have unintended consequences (such as potentially encouraging servicers to walk away from properties before initiating foreclosure), and may not be appropriate for all communities.
FOIA establishes a legal right of access to government records and information, on the basis of the principles of openness and accountability in government. Before the act (originally enacted in 1966), an individual seeking access to federal records faced the burden of establishing a right to examine them. FOIA established a “right to know” standard for access, instead of a “need to know,” and shifted the burden of proof from the individual to the government agency seeking to deny access. FOIA provides the public with access to government information either through “affirmative agency disclosure”—publishing information in the Federal Register or on the Internet, or making it available in reading rooms—or in response to public requests for disclosure. Public requests for disclosure of records are the best-known type of FOIA disclosure. Any member of the public may request access to information held by federal agencies, without showing a need or reason for seeking the information. Not all information held by the government is subject to FOIA. The act prescribes nine specific categories of information that are exempt from disclosure: for example, trade secrets and certain privileged commercial or financial information, certain personnel and medical files, and certain law enforcement records or information (attachment II provides the complete list). In denying access to material, agencies may cite these exemptions. The act requires agencies to notify requesters of the reasons for any adverse determination (that is, a determination not to provide records) and grants requesters the right to appeal agency decisions to deny access. 
In addition, agencies are required to meet certain time frames for making key determinations: whether to comply with a request (within 20 business days of receiving the request), whether to grant an appeal of an adverse determination (within 20 business days of receiving the appeal), and whether to provide expedited processing of a request (within 10 calendar days of receiving the request). Congress did not establish a statutory deadline for making releasable records available, but instead required agencies to make them available promptly. Although the specific details of processes for handling FOIA requests vary among agencies, the major steps in handling a request are similar across the government. Agencies receive requests, usually in writing (although they may accept requests by telephone or electronically), which can come from any organization or member of the public. Once received, the request goes through several phases, which include initial processing, searching for and retrieving responsive records, preparing responsive records for release, approving the release of the records, and releasing the records to the requester. Figure 1 is an overview of the process, from the receipt of a request to the release of records. During the initial processing phase, a request is logged into the agency’s FOIA system, and a case file is started. The request is then reviewed to determine its scope, estimate fees, and provide an initial response to the requester (in general, this simply acknowledges receipt of the request). After this point, the FOIA staff begins its search to retrieve responsive records. This step may include searching for records from multiple locations and program offices. After potentially responsive records are located, the documents are reviewed to ensure that they are within the scope of the request. During the next two phases, the agency ensures that appropriate information is to be released under the provisions of the act.
First, the agency reviews the responsive records to make any redactions based on the statutory exemptions. Once the exemption review is complete, the final set of responsive records is turned over to the FOIA office, which calculates appropriate fees, if applicable. Before release, the redacted responsive records are given a final review, possibly by the agency’s general counsel, and a response letter is generated, summarizing the agency’s actions regarding the request. Finally, the responsive records are released to the requester. Some requests are relatively simple to process, such as requests for specific pieces of information that the requester sends directly to the appropriate office. Other requests may require more extensive processing, depending on their complexity; the volume of information involved; the need for the agency FOIA office to work with offices that have relevant subject-matter expertise to find and obtain information; the need for a FOIA officer to review and redact information in the responsive material; and the need to communicate with the requester about the scope of the request or about the fees that will be charged for fulfilling it (or whether fees will be waived). Specific details of agency processes for handling requests vary, depending on the agency’s organizational structure and the complexity of the requests received. While some agencies centralize processing in one main office, other agencies have separate FOIA offices for each agency component and field office. Agencies also vary in how they allow requests to be made. Depending on the agency, requesters can submit requests by telephone, fax, letter, or e-mail or through the Web.
In addition, agencies may process requests in two ways, known as “multitrack” and “single track.” Multitrack processing involves dividing requests into two groups: (1) simple requests requiring relatively minimal review, which are placed in one processing track, and (2) more voluminous and complex requests, which are placed in another track. In contrast, single-track processing does not distinguish between simple and complex requests. With single-track processing, agencies process all requests on a first-in/first-out basis. Agencies can also process FOIA requests on an expedited basis when a requester has shown a compelling need or urgency for the information. As agencies process FOIA requests, they generally place them in one of four possible disposition categories: grants, partial grants, denials, and “not disclosed for other reasons.” These categories are defined as follows: Grants: Agency decisions to disclose all requested records in full. Partial grants: Agency decisions to withhold some records in whole or in part, because such information was determined to fall within one or more exemptions. Denials: Agency decisions not to release any part of the requested records because all information in the records is determined to be exempt under one or more statutory exemptions. Not disclosed for other reasons: Agency decisions not to release requested information for any of a variety of reasons other than statutory exemptions from disclosing records. The categories and definitions of these “other” reasons for nondisclosure are shown in table 1. When a FOIA request is denied in full or in part, or the requested records are not disclosed for other reasons, the requester is entitled to be told the reason for the denial, to appeal the denial, and to challenge it in court. In addition to FOIA, the Privacy Act of 1974 includes provisions granting individuals the right to gain access to and correct information about themselves held by federal agencies. 
Thus the Privacy Act serves as a second major legal basis, in addition to FOIA, for the public to use in obtaining government information. The Privacy Act also places limitations on agencies’ collection, disclosure, and use of personal information. Although the two laws differ in scope, procedures in both FOIA and the Privacy Act permit individuals to seek access to records about themselves—known as “first-party” access. Depending on the individual circumstances, one law may allow broader access or more extensive procedural rights than the other, or access may be denied under one act and allowed under the other. Consequently, the Department of Justice’s Office of Information and Privacy issued guidance stating that it is “good policy for agencies to treat all first-party access requests as FOIA requests (as well as possibly Privacy Act requests), regardless of whether the FOIA is cited in a requester’s letter.” This guidance was intended to help ensure that requesters receive the fullest possible response to their inquiries, regardless of which law they cite. In addition, Justice guidance for the annual FOIA report directs agencies to include Privacy Act requests (that is, first-party requests) in the statistics reported. According to the guidance, “A Privacy Act request is a request for records concerning oneself; such requests are also treated as FOIA requests. (All requests for access to records, regardless of which law is cited by the requester, are included in this report.)” Although FOIA and the Privacy Act can both apply to first-party requests, these may not always be processed in the same way as described earlier for FOIA requests. In some cases, little review or redaction (see fig. 1) is required, for example, for a request for one’s own Social Security benefits records.
In contrast, various degrees of review and redaction could be required for other types of first-party requests: for example, files on security background checks would need review and redaction before being provided to the person who was the subject of the investigation. OMB and the Department of Justice both have roles in the implementation of FOIA. Under various statutes, including the Paperwork Reduction Act, OMB exercises broad authority for coordinating and administering various aspects of governmentwide information policy. FOIA specifically requires OMB to issue guidelines to “provide for a uniform schedule of fees for all agencies.” OMB issued this guidance in April 1987. The Department of Justice oversees agencies’ compliance with FOIA and is the primary source of policy guidance for agencies. Specifically, Justice’s requirements under the act are to make agencies’ annual FOIA reports available through a single electronic access point and notify Congress of their availability; in consultation with OMB, develop guidelines for the required annual agency reports, so that all reports use common terminology and follow a similar format; and submit an annual report on FOIA litigation and the efforts undertaken by Justice to encourage agency compliance. Within the Department of Justice, the Office of Information and Privacy has lead responsibility for providing guidance and support to federal agencies on FOIA issues. This office first issued guidelines for agency preparation and submission of annual reports in the spring of 1997. It also periodically issues additional guidance on annual reports as well as on compliance, provides training, and maintains a counselors service to provide expert, one-on-one assistance to agency FOIA staff. Further, the Office of Information and Privacy makes a variety of FOIA and Privacy Act resources available to agencies and the public via the Justice Web site and online bulletins (available at www.usdoj.gov/oip/index.html).
In 1996, the Congress amended FOIA to provide for public access to information in an electronic format (among other purposes). These amendments, referred to as e-FOIA, also required that agencies submit a report to the Attorney General on or before February 1 of each year that covers the preceding fiscal year and includes information about agencies’ FOIA operations. The following are examples of information that is to be included in these reports: number of requests received, processed, and pending; median number of days taken by the agency to process different types of requests; determinations made by the agency not to disclose information and the reasons for not disclosing the information; disposition of administrative appeals by requesters; information on the costs associated with handling of FOIA requests; and full-time-equivalent staffing information. In addition to providing their annual reports to the Attorney General, agencies are to make them available to the public in electronic form. The Attorney General is required to make all agency reports available on line at a single electronic access point and report to Congress no later than April 1 of each year that these reports are available in electronic form. (This electronic access point is www.usdoj.gov/oip/04_6.html.) In 2001, in response to a congressional request, we prepared the first in a series of reports on the implementation of the 1996 amendments to FOIA, starting from fiscal year 1999. In these reviews, we examined the contents of the annual reports for 25 major agencies (shown in table 2). They include the 24 major agencies covered by the Chief Financial Officers Act, as well as the Central Intelligence Agency and, until 2003, the Federal Emergency Management Agency (FEMA). 
In 2003, the creation of the Department of Homeland Security (DHS), which incorporated FEMA, shifted some FOIA requests among the agencies affected by the new department’s creation, but the same major component entities are reflected in all the years reviewed. Our previous reports included descriptions of the status of reported FOIA implementation, including any trends revealed by comparison with earlier years. We noted general increases in requests received and processed, as well as growing numbers of pending requests carried over from year to year. In addition, our 2001 report disclosed that data quality issues limited the usefulness of agencies’ annual FOIA reports and that agencies had not provided online access to all the information required by the act as amended in 1996. We therefore recommended that the Attorney General direct the Department of Justice to improve the reliability of data in the agencies’ annual reports by providing guidance addressing the data quality issues we identified and by reviewing agencies’ report data for completeness and consistency. We further recommended that the Attorney General direct the department to enhance the public’s access to government records and information by encouraging agencies to make all required materials available electronically. In response, the Department of Justice issued supplemental guidance, addressed reporting requirements in its training programs, and continued reviewing agencies’ annual reports for data quality. Justice also worked with agencies to improve the quality of data in FOIA annual reports. On December 14, 2005, the President issued an Executive Order setting forth a policy of citizen-centered and results-oriented FOIA administration.
Briefly, FOIA requesters are to receive courteous and appropriate services, including ways to learn about the status of their requests and the agency’s response, and agencies are to provide ways for requesters and the public to learn about the FOIA process and publicly available agency records (such as those on Web sites). In addition, agency FOIA operations are to be results oriented: agencies are to process requests efficiently, achieve measurable improvements in FOIA processing, and reform programs that do not produce appropriate results. To carry out this policy, the order required, among other things, that agency heads designate Chief FOIA Officers to oversee their FOIA programs, and that agencies establish Requester Service Centers and Public Liaisons to ensure appropriate communication with requesters. The Chief FOIA Officers were directed to conduct reviews of the agencies’ FOIA operations and develop improvement plans to ensure that FOIA administration was in accordance with applicable law as well as with the policy set forth in the order. By June 2006, agencies were to submit reports that included the results of their reviews and copies of their improvement plans. The order also instructed the Attorney General to issue guidance on implementation of the order’s requirements for agencies to conduct reviews and develop plans. Finally, the order instructed agencies to report on their progress in implementing their plans and meeting milestones as part of their annual reports for fiscal years 2006 and 2007, and required agencies to account for any milestones missed. In April 2006, the Department of Justice posted guidance on implementation of the order’s requirements for FOIA reviews and improvement plans. This guidance suggested a number of areas of FOIA administration that agencies might consider in conducting their reviews and developing improvement plans. 
(Examples of some of these areas are automated tracking capabilities, automated processing, receiving/responding to requests electronically, forms of communication with requesters, and systems for handling referrals to other agencies.) To encourage consistency, the guidance also included a template for agencies to use to structure the plans and to report on their reviews and plans. The improvement plans are posted on the Justice Web site at www.usdoj.gov/oip/agency_improvement.html. In a July 2006 testimony, we provided preliminary results of our analyses of the improvement plans that the 25 agencies in our review had submitted as of the end of June, focusing on how the plans addressed reducing or eliminating backlogs. We testified that a substantial number of plans did not include measurable goals and timetables that would allow agencies to measure and evaluate the success of their plans. Several of the plans were revised in light of our testimony, as well as in response to feedback to agencies from the Department of Justice in its FOIA oversight role. The data reported by 24 major agencies in annual FOIA reports from 2002 to 2005 reveal a number of general trends. (Data from USDA are omitted from our statistical analysis, because we determined that data from a major USDA component were not reliable.) For example, the public continued to submit more requests for information from the federal government through FOIA, but many agencies, despite increasing the numbers of requests processed, did not keep pace with this increased volume. As a result, the number of pending requests carried over from year to year has been steadily increasing. However, our ability to make generalizations about processing time is limited by the type of statistic reported (that is, the median). Taking steps to improve the accuracy and form of annual report data could provide more insight into FOIA processing.
We omitted data from USDA’s annual FOIA report because we determined that not all these data were reliable. Although some USDA components expressed confidence in their data, one component, the Farm Service Agency, did not. According to this agency’s FOIA Officer, portions of the agency’s data in annual reports were not accurate or complete. This is a significant deficiency, because the Farm Service Agency reportedly processes over 80 percent of the department’s total FOIA requests. Currently, FOIA processing for the Farm Service Agency is highly decentralized, taking place in staff offices in Washington, D.C., and Kansas City, 50 state offices, and about 2,350 county offices. The agency FOIA officer told us that she questioned the completeness and accuracy of data supplied by the county offices. This official stated that some of the field office data supplied for the annual report were clearly wrong, leading her to question the systems used to record workload data at field offices and the field office staff’s understanding of FOIA requirements. She attributed this condition to the agency’s decentralized organization and to a lack of management attention, resources, and training. The lack of accurate data hinders the Farm Service Agency’s ability to effectively monitor and manage its FOIA program. The Executive Order’s requirement to develop an improvement plan provides an opportunity for the Farm Service Agency to address its data reliability problems. More specifically, Justice’s guidance on implementing the Executive Order refers to the need for agencies to explore improvements in their monitoring and tracking systems and staff training. USDA has developed an improvement plan that includes activities to improve FOIA processing at the Farm Service Agency that are relevant to the issues raised by the Farm Service Agency’s FOIA Officer, including both automation and training.
The plan sets goals for ensuring that all agency employees who process or retrieve responsive records are trained in the necessary FOIA duties, as well as for determining the type of automated tracking to be implemented. According to the plan, an electronic tracking system is needed to track requests, handle public inquiries regarding request status, and prepare a more accurate annual FOIA report. In addition, the Farm Service Agency plans to determine the benefit of increased centralization of FOIA request processing. However, the plan does not directly address improvements to data reliability. If USDA does not also plan for activities, measures, and milestones to improve data reliability, it increases the risk that the Farm Service Agency will not produce reliable FOIA statistics, which are important for program oversight and meeting the act’s goal of providing visibility into government FOIA operations. The numbers of FOIA requests received and processed continue to rise, but except for one case—SSA—the rate of increase has flattened in recent years. For SSA, we present statistics separately because the agency reported an additional 16 million requests in 2005, dwarfing those for all other agencies combined, which together total about 2.6 million. SSA attributed this rise to an improvement in its method of counting requests and stated that in previous years, these requests were undercounted. Further, all but about 38,000 of SSA’s over 17 million requests are simple requests for personal information by or on behalf of individuals. Figure 2 shows total requests reported governmentwide for fiscal years 2002 through 2005, with SSA’s share shown separately. This figure shows the magnitude of SSA’s contribution to the whole FOIA picture, as well as the scale of the jump from 2004 to 2005. Figure 3 presents statistics omitting SSA on a scale that allows a clearer view of the rate of increase in FOIA requests received and processed in the rest of the government. 
As this figure shows, when SSA’s numbers are excluded, the rate of increase is modest and has been flattening: For the whole period (fiscal years 2002 to 2005), requests received increased by about 29 percent, and requests processed increased by about 27 percent. Most of this rise occurred from fiscal years 2002 to 2003: about 28 percent for requests received, and about 27 percent for requests processed. In contrast, from fiscal year 2004 to 2005, the rise was much less: about 3 percent for requests received, and about 2 percent for requests processed. According to SSA, the increases that the agency reported in fiscal year 2005 can be attributed to an improvement in its method of counting a category of requests it calls “simple requests handled by non-FOIA staff.” From fiscal year 2002 to 2005, SSA’s FOIA reports have consistently shown significant growth in this category, which has accounted for the major portion of all SSA requests reported (see table 3). In each of these years, SSA has attributed the increases in this category largely to better reporting, as well as actual increases in requests. SSA describes requests in this category as typically being requests by individuals for access to their own records, as well as requests in which individuals consent for SSA to supply information about themselves to third parties (such as insurance and mortgage companies) so that they can receive housing assistance, mortgages, disability insurance, and so on. According to SSA’s FOIA report, these requests are handled by personnel in about 1,500 locations in SSA, including field and district offices and teleservice centers. Such requests are almost always granted, according to SSA, and most receive immediate responses. SSA has stated that it does not keep processing statistics (such as median days to process) on these requests, which it reports separately from other FOIA requests (for which processing statistics are kept). 
However, officials say that these are typically processed in a day or less. According to SSA officials, they included information on these requests in their annual reports because Justice guidance instructs agencies to treat Privacy Act requests (requests for records concerning oneself) as FOIA requests and report them in their annual reports. In addition, SSA officials said that their automated systems make it straightforward to capture and report on these simple requests. According to SSA, in fiscal year 2005, the agency began to use automated systems to capture the numbers of requests processed by non-FOIA staff, generating statistics automatically as requests were processed; the result, according to SSA, is a much more accurate count. Besides SSA, agencies reporting large numbers of requests received were the Departments of Defense, Health and Human Services, Homeland Security, Justice, the Treasury, and Veterans Affairs, as shown in table 4. The remaining agencies combined account for only about 5 percent of the total requests received (if SSA’s simple requests handled by non-FOIA staff are excluded). Table 4 presents, in descending order of request totals, the numbers of requests received and percentages of the total (calculated with and without SSA’s statistics on simple requests handled by non-FOIA staff). Most FOIA requests in 2005 were granted in full, with relatively few being partially granted, denied, or not disclosed for other reasons (statistics are shown in table 5). This generalization holds with or without SSA’s inclusion. The percentage of requests granted in full was about 87 percent, which is about the same as in previous years. However, if SSA’s numbers are included, the proportion of grants dominates the other categories—raising this number from 87 percent of the total to 98 percent.
This is to be expected, since SSA reports that it grants the great majority of its simple requests handled by non-FOIA staff, which make up the bulk of SSA’s statistics. Three of the seven agencies that handled the largest numbers of requests (HHS, SSA, and VA; see table 4) also granted the largest percentages of requests in full, as shown in figure 4. Figure 4 shows, by agency, the disposition of requests processed: that is, whether granted in full, partially granted, denied, or “not disclosed for other reasons” (see table 1 for a list of these reasons). As the figure shows, the numbers of fully granted requests varied widely among agencies in fiscal year 2005. Six agencies made full grants of requested records in over 80 percent of the cases they processed (besides the three already mentioned, these include Energy, OPM, and SBA). In contrast, 13 of 24 made full grants of requested records in less than 40 percent of their cases, including 3 agencies (CIA, NSF, and State) that made full grants in less than 20 percent of cases processed. This variance among agencies in the disposition of requests has been evident in prior years as well. In many cases, the variance can be accounted for by the types of requests that different agencies process. For example, as discussed earlier, SSA grants a very high proportion of requests because they are requests for personal information about individuals that are routinely made available to or for the individuals concerned. Similarly, VA routinely makes medical records available to individual veterans, and HHS also handles large numbers of Privacy Act requests. Such requests are generally granted in full. Other agencies, on the other hand, receive numerous requests whose responses must routinely be redacted. For example, NSF reported in its annual report that most of its requests (an estimated 90 percent) are for copies of funded grant proposals. 
The responsive documents are routinely redacted to remove personal information on individual principal investigators (such as salaries, home addresses, and so on), which results in high numbers of “partial grants” compared to “full grants.” For 2005, the reported time required to process requests (by track) varied considerably among agencies. Table 6 presents data on median processing times for fiscal year 2005. For agencies that reported processing times by component rather than for the agency as a whole, the table indicates the range of median times reported by the agency’s components. As the table shows, seven agencies had components that reported processing simple requests in less than 10 days (these components are parts of the CIA, Energy, the Interior, Justice, Labor, Transportation, and the Treasury); for each of these agencies, the lower value of the reported ranges is less than 10. On the other hand, median time to process simple requests is relatively long at some organizations (for example, components of Energy and Justice, as shown by median ranges whose upper end values are greater than 100 days). For complex requests, the picture is similarly mixed. Components of four agencies (EPA, DHS, the Treasury, and VA) reported processing complex requests quickly—with a median of less than 10 days. In contrast, other components of several agencies (DHS, Energy, EPA, HHS, HUD, Justice, State, Transportation, and the Treasury) reported relatively long median times to process complex requests, with median days greater than 100. Six agencies (AID, HHS, NSF, OPM, SBA, and SSA) reported using single-track processing. The median processing times for single- track processing varied from 5 days (at an HHS component) to 173 days (at another HHS component). 
Our ability to make further generalizations about FOIA processing times is limited by the fact that, as required by the act, agencies report median processing times only and not, for example, arithmetic means (the usual meaning of “average” in everyday language). To find an arithmetic mean, one adds all the members of a list of numbers and divides the result by the number of items in the list. To find the median, one arranges all the values in the list from lowest to highest and finds the middle one (or the average of the middle two if there is no one middle number). Thus, although using medians provides representative numbers that are not skewed by a few outliers, medians cannot be combined arithmetically. Deriving a median for two sets of numbers, for example, requires knowing all the numbers in both sets. Only the source data for the medians can be used to derive a new median, not the medians themselves. As a result, with only medians it is not statistically possible to combine results from different agencies to develop broader generalizations, such as a governmentwide statistic based on all agency reports, statistics from sets of comparable agencies, or an agencywide statistic based on separate reports from all components of the agency. In rewriting the FOIA reporting requirements in 1996, legislators declared an interest in making them “more useful to the public and to Congress, and the information in them more accessible.” However, the limitation on aggregating data imposed by the use of medians alone impedes the development of broader pictures of FOIA operations. A more complete picture would be given by the inclusion of other statistics based on the same data that are used to derive medians, such as means and ranges. Providing means along with the median would allow more generalizations to be drawn, and providing ranges would complete the picture by adding information on the outliers in agency statistics.
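The aggregation limitation described above can be illustrated with a brief sketch; the processing-time figures below are invented for illustration and are not drawn from any agency’s annual report:

```python
import statistics

# Hypothetical processing times (in days) for two agency components;
# these values are illustrative only, not from any annual report.
component_a = [2, 3, 5, 8, 40]       # median = 5
component_b = [10, 12, 15, 90, 200]  # median = 15

# Averaging the two medians does not reproduce the median of the
# combined data, so medians from separate reports cannot be aggregated.
naive_median = (statistics.median(component_a) + statistics.median(component_b)) / 2
true_median = statistics.median(component_a + component_b)
print(naive_median, true_median)  # prints 10.0 11.0 -- the two differ

# Means, in contrast, can be aggregated exactly from per-component
# sums and counts, without access to the underlying records.
combined_mean = (sum(component_a) + sum(component_b)) / (
    len(component_a) + len(component_b)
)
print(combined_mean == statistics.mean(component_a + component_b))  # prints True
```

Reporting means (and ranges) alongside the required medians would permit exactly this kind of cross-agency aggregation.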
More complete information would be useful for public accountability and for effectively managing agency FOIA programs, as well as for meeting the act’s goal of providing visibility into government FOIA operations. In addition to processing greater numbers of requests, many agencies (10 of 24) also reported that their numbers of pending cases—requests carried over from one year to the next—have increased since 2002. In 2002, pending requests governmentwide were reported to number about 138,000, whereas in 2005, about 200,000—45 percent more—were reported. In addition, the rate of increase grew in fiscal year 2005, rising 24 percent from fiscal year 2004, compared to 13 percent from 2003 to 2004. Figure 5 shows these results, illustrating the accelerating rate at which pending cases have been increasing. These statistics include pending cases reported by SSA, because SSA’s pending cases do not include simple requests handled by non-FOIA staff (for which SSA does not track pending cases). As the figure shows, these pending cases do not change the governmentwide picture significantly. Trends for individual agencies show mixed progress in reducing the number of pending requests reported from 2002 to 2005—some agencies have decreased their numbers of pending cases, while others’ numbers have increased. Figure 6 shows processing rates at the 24 agencies (that is, the number of requests that an agency processes relative to the number it receives). Eight of the 24 agencies (AID, DHS, the Interior, Education, HHS, HUD, NSF, and OPM) reported processing fewer requests than they received each year for fiscal years 2003, 2004, and 2005; 8 additional agencies processed fewer than they received in two of these three years (Defense, Justice, Transportation, GSA, NASA, NRC, SSA, and VA). In contrast, two agencies (CIA and Energy) had processing rates above 100 percent in all 3 years, meaning that each made continued progress in reducing its number of pending cases.
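The relationship between processing rates and pending cases can be sketched as follows; the yearly figures are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical yearly FOIA workload figures for a single agency
# (illustrative values only, not taken from any annual report).
received = {2003: 1000, 2004: 1100, 2005: 1200}
processed = {2003: 950, 2004: 1150, 2005: 1180}

pending = 200  # requests carried over into fiscal year 2003
for year in sorted(received):
    # A processing rate above 100 percent means the backlog shrank that year.
    rate = processed[year] / received[year]
    pending += received[year] - processed[year]
    print(f"{year}: rate {rate:.0%}, pending at year end {pending}")
```

In this sketch the agency falls behind in 2003 and 2005 (rates below 100 percent) and catches up only in 2004, so its pending cases end the period higher than they began, mirroring the governmentwide pattern described above.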
Fourteen additional agencies were able to make at least a small reduction in their numbers of pending requests in 1 or more years between fiscal years 2003 and 2005. Legislators noted in 1996 that the FOIA reporting requirements were rewritten “to make them more useful to the public and to Congress, and to make the information in them more accessible.” The Congress also gave the Department of Justice the responsibility to provide policy guidance and oversee agencies’ compliance with FOIA. In its oversight and guidance role, Justice’s Office of Information and Privacy (OIP) created summaries of the annual FOIA reports and made these available through its FOIA Post Web page (www.usdoj.gov/oip/foiapost/mainpage.htm). In 2003, Justice described its summary as “a major guidance tool.” It pointed out that although it was not required to do so under the law, the office had initiated the practice of compiling aggregate summaries of all agencies’ annual FOIA report data as soon as these were filed by all agencies. These summaries did not contain aggregated statistical tables, but they did provide prose descriptions that included statistics on major governmentwide results. However, the most recent of these summaries is for fiscal year 2003. The Acting Director of OIP told us she was not certain why such summaries had not been made available since then; she noted that, internally, the agency found the summaries useful and was considering making them available again. She also stated that these summaries gave a good overall picture of governmentwide processing. Aggregating and summarizing the information in the annual reports serves to maximize their usefulness and accessibility, in accordance with congressional intent, as well as potentially providing Justice with insight into FOIA implementation governmentwide and valuable benchmarks for use in overseeing the FOIA program. 
Such information would also be valuable for others interested in gauging governmentwide performance. The absence of such summaries reduces the ability of the public and the Congress to consistently obtain a governmentwide picture of FOIA processing. In providing agency views for this testimony, the Acting Director of OIP told us that the department would resume providing summaries, and that these would generally be available by the summer following the issuance of the annual reports. As required by the Executive Order, all 25 agencies submitted improvement plans based on the results of reviews of their respective FOIA operations, as well as on the areas emphasized by the order. The plans generally addressed these four areas, with 20 of 25 plans addressing all four. In particular, for all but 2 agencies with reported backlog, plans included both measurable goals and timetables for backlog reduction. Further, to increase reliance on dissemination, improve communications on the status of requests, and increase public awareness of FOIA processing, agencies generally set milestones to accomplish activities promoting these aims. In some cases, agencies did not set goals for a given area because they determined that they were already strong in that area. The Executive Order states that improvement plans shall include “specific activities that the agency will implement to eliminate or reduce the agency’s FOIA backlog, including (as applicable) changes that will make the processing of FOIA requests more streamlined and effective.” It further states that plans were to include “concrete milestones, with specific timetables and outcomes to be achieved,” to allow the plan’s success to be measured and evaluated. In addition, the Justice guidance suggested a number of process improvement areas for agencies to consider, such as receiving or responding to requests electronically, automated FOIA processing, automated tracking capabilities, and multitrack processing. 
It also gave agencies considerable leeway in choosing “means of measurement of success” for improving timeliness and thus reducing backlog, advising agencies to carefully determine which measures “best fit their individual circumstances, which can vary greatly from one agency to another.” All agency plans discussed avoiding or reducing backlog, and most (22 of 25) established measurable goals and timetables for this area of focus. One agency, SBA, reported that it had no backlog, so it set no goals. A second agency, NSF, set no specific numerical goals for backlog reduction, but in fiscal year 2005 its backlog was minimal, and its median processing time was 14.26 days. In addition, its plan includes activities to increase efficiency and to monitor and analyze backlogged requests to determine whether systemic changes are warranted in its processes. A third agency, HUD, set a measurable goal for reducing backlog but did not include a date by which it planned to achieve this goal; however, according to agency officials, it achieved the goal by November 2006. NRC chose to focus on improving processing times, setting percentage goals for completion of different types of requests (for example, completing 75 percent of simple requests within 20 days). Labor’s plan sets goals that aim for larger percentages of reduction for the oldest categories of pending requests (75 percent reduction for the oldest, 50 percent reduction for the next oldest, and so on). A number of agencies included goals to close their oldest 5 to 10 requests (Justice, the Treasury, Education, Commerce, Defense, GSA, NASA, SSA, and VA). Other agencies planned to eliminate their backlogs (for example, OPM and DHS) or to eliminate fiscal year 2005 backlog (Transportation), and several agencies chose goals based on a percentage of reduction of existing backlog (for example, CIA, Commerce, Education, Defense, the Interior, Justice, SSA, the Treasury, and USDA). 
Some agencies also described plans to perform analyses that would measure their backlogs so that they could then establish the necessary baselines against which to measure progress. In addition to setting backlog targets, agencies also describe activities that contribute to reducing backlog. For example, the Treasury plan, which states that backlog reduction is the main challenge facing the department and the focus of its plan, includes such activities (with associated milestones) as reengineering its multitrack FOIA process, monitoring monthly reports, and establishing a FOIA council. The agency plans thus provide a variety of activities and measures of improvement that should permit agency heads, the Congress, and the public to assess the agencies’ success in implementing their plans to reduce backlog. The Executive Order calls for “increased reliance on the dissemination of records that can be made available to the public” without the necessity of a FOIA request, such as through posting on Web sites. In its guidance, Justice notes that agencies are required by FOIA to post frequently requested records, policy statements, staff manuals and instructions to staff, and final agency opinions. It encourages agencies not only to review their activities to meet this requirement, but also to make other public information available that might reduce the need to make FOIA requests. It also suggests that agencies consider improving FOIA Web sites to ensure that they are user friendly and up to date. Agency plans generally established goals and timetables for increasing reliance on public dissemination of records, including through Web sites. Of 25 agencies, 24 included plans to revise agency Web sites and add information to them, and 12 of these are making additional efforts to ensure that frequently requested documents are posted on their Web sites. 
For example, Defense is planning to increase the number of its components that have Web sites as well as posting frequently requested documents. Interior is planning to facilitate the posting of frequently requested documents by using scanning and redaction equipment to make electronic versions readily available. Agencies planned other related activities, such as making posted documents easier to find, improving navigation, and adding other helpful information. For example, AID plans to establish an “information/searching decision tree” to assist Web site visitors by directing them to agency public affairs staff who may be able to locate information and avoid the need for visitors to file FOIA requests. HUD plans activities to anticipate topics that may produce numerous FOIA requests (“hot button” issues) and post relevant documents. Education is planning to use its automated tracking technology to determine when it is receiving multiple requests for similar information and then post such information on its Web site. The Treasury plan does not address increasing public dissemination of records. The Treasury’s plan, as mentioned earlier, is focused on backlog reduction. It does not mention the other areas emphasized in the Executive Order, list them among the areas it selected for review, or explain the decision to omit them from the review and plan. Treasury officials told us that they concentrated in their plan on areas where they determined the department had a deficiency: namely, a backlog consisting of numerous requests, some of which were very old (dating as far back as 1991). By comparison, they did not consider they had deficiencies in the other areas. They also stated that neither Justice nor OMB had suggested that they revise the plan to include these areas. 
With regard to dissemination, they told us that they did not consider increasing dissemination to be mandatory, and they noted that their Web sites currently provide frequently requested records and other public documents, as required by the act. However, without a careful review of the department’s current dissemination practices or a plan to take actions to increase dissemination, the Treasury does not have assurance that it has identified and exploited available opportunities to increase dissemination of records in such a way as to reduce the need for the public to make FOIA requests, as stressed by the Executive Order. The Executive Order sets as policy that agencies shall provide FOIA requesters ways to learn about the status of their FOIA requests and states that agency improvement plans shall ensure that FOIA administration is in accordance with this policy. In its implementation guidance, Justice reiterated the order’s emphasis on providing status information to requesters and discussed the need for agencies to examine, among other things, their capabilities for tracking status and the forms of communication used with requesters. Most agencies (22 of 25) established goals and timetables for improving communications with FOIA requesters about the status of their requests. Goals set by these agencies included planned changes to communications, including sending acknowledgement letters, standardizing letters to requesters, including information on elements of a proper FOIA request in response letters, and posting contact information on Web pages. Other activities included establishing toll free numbers for requesters to obtain status information, acquiring software to allow requesters to track the status of their requests, and holding public forums. Three agencies did not include improvement goals because they considered them unnecessary. In two cases (Defense and EPA), agencies considered that status communications were already an area of strength. 
Defense considered that it was strong in both customer responsiveness and communications. Defense’s Web site provides instructions for requesters on how to get information about the status of requests, as well as information on Requester Service Centers and Public Liaisons. Officials also told us that this information is included in acknowledgement letters to requesters, and that the department is working to implement an Interactive Customer Collection tool that would enable requesters to provide feedback. Similarly, EPA officials told us that they considered the agency’s activities to communicate with requesters on the status of their requests to be already effective, noting that many of the improvements planned by other agencies were already in effect at EPA. Officials also stated that EPA holds regular FOIA requester forums (the last in November 2006), and that EPA’s requester community had expressed satisfaction with EPA’s responsiveness. EPA’s response to the Executive Order describes its FOIA hotline for requesters and its enterprise FOIA management system, deployed in 2005, that provides “cradle to grave” tracking of incoming requests and responses. The third agency, the Treasury, did not address improving status communications, as its plan is focused on backlog reduction. As required by the Executive Order, the Treasury did set up Requester Service Centers and Public Liaisons, which are among the mechanisms envisioned to improve status communications. However, because the Treasury omitted status communications from the areas of improvement that it selected for review, it is not clear that this area received attention commensurate with the emphasis it was given in the Executive Order. Without attention to communication with requesters, the Treasury increases the risk that its FOIA operations will not be responsive and citizen centered, as envisioned by the Executive Order. 
The Executive Order states that improvement plans shall include activities to increase public awareness of FOIA processing, including (as appropriate) expanded use of Requester Service Centers and FOIA Public Liaisons, which agencies were required to establish by the order. In its guidance, Justice linked this requirement to the FOIA Reference Guide that agencies are required to maintain as an aid to potential FOIA requesters, because such guides can be an effective means for increasing public awareness. Accordingly, the Justice guidance advised agencies to double-check these guides to ensure that they remain comprehensive and up to date. Most agencies (23 of 25) defined goals and timetables for increasing public awareness of FOIA processing, generally including ensuring that FOIA reference guides were up to date. In addition, all 25 agencies established requester service centers and public liaisons as required by the Executive Order. Besides these activities, certain agencies planned other types of outreach: for example, the Department of State reported taking steps to obtain feedback from the public on how to improve FOIA processes; the Department of the Interior plans to initiate feedback surveys on requesters’ FOIA experience; and the Department of Labor is planning to hold public forums and solicit suggestions from the requester community. Defense did not set specific goals and milestones in this area because, according to the department, its FOIA handbook had already been updated in the fall of 2005. Department officials told us that in meeting their goals and milestones for revising FOIA Web sites, they expect to improve awareness of Defense’s FOIA process, as well as to improve public access and meet other objectives. As mentioned earlier, the Treasury did not address this area in its review or plan. However, the Treasury has established Requester Service Centers and FOIA Public Liaisons, as required. 
The Treasury’s Director of Disclosure Services also told us that the Treasury provides on its Web site a FOIA handbook, a Privacy Act handbook, and a citizen’s guide for requesters. In addition, this official told us that the Treasury had updated its FOIA handbook in 2005 and conducted staff training based on the update. However, at the time of our review, the FOIA handbook on the Web site was a version dated January 2000. When we pointed out that this earlier version was posted, the official indicated that he would arrange for the most recent version to be posted. Because the Treasury did not review its efforts to increase public awareness, it missed an opportunity to discover that the handbook on the Web site was outdated and thus had reduced effectiveness as a tool to explain the agency’s FOIA processing to the public. Without further attention to increasing public awareness, the Treasury lacks assurance that it has taken all appropriate steps to ensure that the public has the means of understanding the agency’s FOIA processing. The annual FOIA reports continue to provide valuable information about citizens’ use of this important tool for obtaining information about government operations and decisions. The value of this information is enhanced when it can be used to reveal trends and support generalizations, but our ability to generalize about processing times—whether from agency to agency or year to year—is limited because only median times are reported. Given that processing times are an important gauge of government responsiveness to citizen inquiries, this limitation impedes the development of broader pictures of FOIA operations, which could be useful in monitoring efforts to improve processing and reduce the increasing backlog of requests, as intended by the Executive Order. Finally, having aggregated statistics and summaries could increase the value of the annual reporting process for assessing the performance of the FOIA program as a whole. 
In the draft report on which my statement today is based, we suggest that the Congress consider amending the act to require agencies to report additional statistics on processing time, which at a minimum should include average times and ranges. We also recommend that Justice provide aggregated statistics and summaries of the annual reports. The Executive Order provided a useful impetus for agencies to review their FOIA operations and ensure that they are appropriately responsive to the public generally and requesters specifically. Our draft report makes recommendations aimed at improving selected agency improvement plans. Nonetheless, all the plans show a commendable focus on making measurable improvements and form a reasonable basis for carrying out the order’s goals. In summary, increasing the requirements for annual reporting would further improve the public visibility of the government’s implementation of FOIA. In addition, implementing the improvement plans and reporting on their progress should serve to keep management attention on FOIA and its role in keeping citizens well informed about the operations of their government. However, to realize the goals of the Executive Order, it will be important for Justice and the agencies to continue to refine the improvement plans and monitor progress in their implementation. Mr. Chairman, this completes my statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. If you should have questions about this testimony, please contact me at (202) 512-6240 or [email protected]. Other major contributors included Barbara Collier, Kelly Shaw, and Elizabeth Zhao. 
For the draft report on which this testimony is based, we gauged agencies’ progress in processing requests by analyzing the workload data (from fiscal year 2002 through 2005) included in the 25 agencies’ annual FOIA reports to assess trends in volume of requests received and processed, median processing times, and the number of pending cases. All agency workload data were self-reported in annual reports submitted to the Attorney General. To assess the reliability of the information contained in agency annual reports, we interviewed officials from selected agencies and assessed quality control processes agencies had in place. We selected 10 agencies to assess data reliability: the Departments of Agriculture (USDA), Defense, Education, the Interior, Labor, and Veterans Affairs, as well as the National Aeronautics and Space Administration, National Science Foundation, Small Business Administration, and Social Security Administration. We chose the Social Security Administration and Veterans Affairs because they processed a majority of the requests. To ensure that we selected agencies of varying size, we chose the remaining 8 agencies by ordering them according to the number of requests they received, from smallest to largest, and choosing every third agency. These 10 agencies account for 97 percent of the received requests that were reported in the 25 agencies’ annual reports. Of the 10 agencies that were assessed for data reliability, we determined that the data for USDA’s Farm Service Agency were not reliable; these data account for over 80 percent of the reported USDA data. We therefore eliminated USDA’s data from our analysis. Because of this elimination, our analysis was of 24 major agencies (herein we refer to this scope as governmentwide). Table 7 shows the 25 agencies and their reliability assessment status. 
To determine to what extent the agency improvement plans contain the elements emphasized by the order, we first analyzed the Executive Order to determine how it described the contents of the improvement plans. We determined that the order emphasized the following areas to be addressed by the plans: (1) reducing the backlog of FOIA requests, (2) increasing reliance on public dissemination of records (affirmative and proactive), including through Web sites, (3) improving communications with FOIA requesters about the status of their requests, and (4) increasing public awareness of FOIA processing, including updating an agency’s FOIA Reference Guide. We also analyzed the improvement plans to determine if they contained specific outcome-oriented goals and timetables for each of the criteria. We then analyzed the 25 agencies’ (including USDA) plans to determine whether they contained goals and timetables for each of these four elements. We evaluated the versions of agency plans available as of December 15, 2006. We also reviewed the Executive Order itself, implementing guidance issued by OMB and the Department of Justice, other FOIA guidance issued by Justice, and our past work in this area. We conducted our review in accordance with generally accepted government auditing standards. We performed our work from May 2006 to February 2007 in Washington, D.C.

The act’s nine exemptions cover the following matters:

Exemption 1. Matters that are (A) specifically authorized under criteria established by an Executive Order to be kept secret in the interest of national defense or foreign policy and (B) in fact properly classified pursuant to such Executive Order.

Exemption 2. Matters related solely to the internal personnel rules and practices of an agency.

Exemption 3. Matters specifically exempted from disclosure by statute (other than section 552b of this title), provided that such statute (A) requires that matters be withheld from the public in such a manner as to leave no discretion on the issue, or (B) establishes particular criteria for withholding or refers to particular types of matters to be withheld.

Exemption 4. Trade secrets and commercial or financial information obtained from a person and privileged or confidential.

Exemption 5. Inter-agency or intra-agency memorandums or letters which would not be available by law to a party other than an agency in litigation with the agency.

Exemption 6. Personnel and medical files and similar files the disclosure of which would constitute a clearly unwarranted invasion of personal privacy.

Exemption 7. Records or information compiled for law enforcement purposes, but only to the extent that the production of such law enforcement records or information could reasonably be expected to interfere with enforcement proceedings; would deprive a person of a right to a fair trial or impartial adjudication; could reasonably be expected to constitute an unwarranted invasion of personal privacy; could reasonably be expected to disclose the identity of a confidential source, including a State, local, or foreign agency or authority or any private institution which furnished information on a confidential basis, and, in the case of a record or information compiled by a criminal law enforcement authority in the course of a criminal investigation or by an agency conducting a lawful national security intelligence investigation, information furnished by a confidential source; would disclose techniques and procedures for law enforcement investigations or prosecutions, or would disclose guidelines for law enforcement investigations or prosecutions if such disclosure could reasonably be expected to risk circumvention of the law; or could reasonably be expected to endanger the life or physical safety of an individual.

Exemption 8. Matters contained in or related to examination, operating, or condition reports prepared by, on behalf of, or for the use of an agency responsible for the regulation or supervision of financial institutions.

Exemption 9. Geological and geophysical information and data, including maps, concerning wells.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Freedom of Information Act (FOIA) establishes that federal agencies must provide the public with access to government information, enabling the public to learn about government operations and decisions. To help ensure proper implementation, the act requires that agencies annually report specific information about their FOIA operations, such as numbers of requests received and processed and median processing times. In addition, a recent Executive Order directs agencies to develop plans to improve their FOIA operations, including decreasing backlogs. GAO was asked to testify on the results of its study on FOIA processing and agencies' improvement plans. The draft report on the study is currently out for comment at the agencies involved (and is thus subject to change). For the study, GAO reviewed status and trends of FOIA processing at 25 major agencies as reflected in annual reports, as well as the extent to which improvement plans contain the elements emphasized by the Executive Order. To do so, GAO analyzed the 25 agencies' annual reports and improvement plans. Based on data in annual reports from 2002 to 2005, the public continued to submit more requests for information from the federal government through FOIA. Despite increasing the numbers of requests processed, many agencies did not keep pace with the volume of requests that they received. As a result, the number of pending requests carried over from year to year has been steadily increasing. Agency reports also show great variations in the median times to process requests (from less than 10 days at some agency components to more than 100 days at others). However, the ability to determine trends in processing times is limited by the form in which these times are reported: that is, in medians only, without averages (that is, arithmetical means) or ranges. 
Although medians have the advantage of providing representative numbers that are not skewed by a few outliers, it is not statistically possible to combine several medians to develop broader generalizations (as can be done with arithmetical means). This limitation on aggregating data impedes the development of broader pictures of FOIA operations, which could be useful in monitoring efforts to improve processing and reduce the increasing backlog of requests, as intended by the Executive Order. The improvement plans submitted by the 25 agencies mostly included goals and timetables addressing the four areas of improvement emphasized by the Executive Order: eliminating or reducing any backlog of FOIA requests; increasing reliance on dissemination of records that can be made available to the public without the need for a FOIA request, such as through posting on Web sites; improving communications with requesters about the status of their requests; and increasing public awareness of FOIA processing. Most of the plans (20 of 25) provided goals and timetables in all four areas; some agencies omitted goals in areas where they considered they were already strong. Although details of a few plans could be improved (for example, one agency did not explicitly address areas of improvement other than backlog), all the plans focus on making measurable improvements and form a reasonable basis for carrying out the goals of the Executive Order.
In 2015, FFP provided EFSP funding for cash transfer and food voucher projects in 30 countries (see fig. 1). To deliver assistance through these projects, USAID’s implementing partners employ a variety of mechanisms. These mechanisms include (1) distributing cash manually or electronically through accounts at banks or other financial institutions and (2) distributing paper or electronic food vouchers that entitle the holder to buy goods—typically, approved items from participating vendors—or services up to the voucher’s designated cash value. The value of the cash transfers and food vouchers is generally based on a formula that attempts to bridge the gap between the beneficiaries’ food needs and their capacity to meet those needs. Additionally, in fiscal years 2010 through 2015, USAID funded the following EFSP regional awards for cash transfer and food voucher projects: (1) the Syria Regional Award, in countries hosting Syrian refugees—Egypt, Iraq, Jordan, Lebanon, and Turkey—awarded each year since 2012; (2) the Central America Drought Award—for Guatemala, El Salvador, and Honduras—awarded in 2014; and (3) the Ebola Regional Response—for Guinea, Sierra Leone, and Liberia—awarded in 2015. In fiscal years 2010 through 2015, USAID awarded EFSP grants totaling about $3.27 billion, including $1.42 billion for cash transfer and food voucher projects. Awards for such projects grew from about $76 million in fiscal year 2010 to nearly $432 million in fiscal year 2015 (see fig. 2). In fiscal years 2010 through 2015, awards for cash transfers increased from about $42 million to approximately $119 million, while awards for food vouchers increased from about $34 million to approximately $312 million. During the same period, USAID awards of Title II funding for emergency food aid—primarily in-kind assistance—decreased from about $1.52 billion to approximately $1.07 billion. 
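The "gap" principle described above can be sketched in a few lines. This is an illustrative construction only, assuming the simplest form of the approach; actual EFSP projects set transfer values using project-specific formulas and market data, and the figures below are invented.

```python
def monthly_transfer_value(food_basket_cost, household_capacity):
    """Illustrative gap formula: the benefit covers the shortfall between
    the cost of meeting the household's food needs and what the household
    can contribute itself; it is never negative."""
    return max(food_basket_cost - household_capacity, 0.0)

# Invented figures: a $120 monthly food basket, $45 of household capacity.
assert monthly_transfer_value(120.0, 45.0) == 75.0   # shortfall covered
assert monthly_transfer_value(80.0, 100.0) == 0.0    # no gap, no transfer
```

Under this construction, households with greater capacity to meet their own food needs receive smaller transfers, which is consistent with the report's description of bridging the gap between needs and capacity.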
Since 2010, USAID’s APS for International Emergency Food Assistance has outlined agency requirements for EFSP cash-based food assistance proposals. The APS, which serves as a primary source of information for prospective applicants for EFSP grants, requires applicants to explain the rationale for the assistance modality they propose—cash transfer, food vouchers, LRP, in-kind food aid, or some combination of these modalities. Since 2011, the APS has required grant applicants to provide justification for a proposed project that includes cash-based assistance or in-kind food aid, in terms of the project’s timeliness, cost-effectiveness, or appropriateness. Timeliness. Since 2011, the APS has required grant applicants to explain whether in-kind food aid or prepositioned stocks can arrive in a sufficiently timely manner through the regular ordering process to address urgent or emergency needs. Cost-effectiveness. Since 2011, the APS has required grant applicants to provide cost-effectiveness information that affected their choice of modality. Further, according to the APS, in certain cases the cost of cash-based food assistance may be lower than that of in-kind assistance, while in other cases the difference may be negligible. Appropriateness. Since 2011, the APS has required grant applicants to explain why cash transfers or food vouchers, or both, may be more appropriate than in-kind food distributions. For example, potential beneficiaries may have physical access to functioning markets but lack sufficient purchasing power. (Fig. 3 shows other examples of reasons that EFSP grant applicants have cited for deeming cash transfer or food voucher projects to be most appropriate.) The 2015 APS states that, depending on market conditions, cash-based food assistance may be deemed more or less appropriate than in-kind food aid to address specific emergency food security needs. 
Since 2015, the APS has required applicants to justify their selection of either cash transfers or food vouchers as the preferred modality. USAID released a draft revision of the APS in May 2016 for public comment; according to USAID officials, the revised version will be finalized in the autumn of 2016. The draft APS requires grant applicants proposing EFSP projects to provide a justification that addresses how market appropriateness, feasibility, project objectives, and cost-efficiency influenced the modality selection. According to the draft APS, when addressing how feasibility influenced the selection of the assistance modality, applicants are to consider the time-sensitive nature of the emergency, including availability of rapid-response options such as prepositioned commodities or prenegotiated cash transfer or voucher response mechanisms.

Monitoring and evaluation perform two separate but interrelated functions. Monitoring is the collection of data to determine whether projects are being implemented as intended and the tracking of progress with preselected indicators throughout the life of a project. Data collected through monitoring can be used by project managers to make incremental changes and adjustments to project implementation. Evaluations consist of ad hoc or periodic studies to assess whether and how projects achieved their expected goals. An evaluation may also identify outcomes that can be attributed to the project and may assess the project’s cost-effectiveness. Evaluations may rely on a range of quantitative and qualitative measures in addition to preselected indicators, comprehensive research designs, and appropriate statistical analysis of data, including data collected through monitoring activities. Results of evaluations can provide program managers with critical evidence for future program design.
USAID’s APS contains monitoring and evaluation–related requirements for cash transfer and food voucher grant applications as well as minimum reporting requirements for data in projects’ final reports.

Requirements for grant applications. Since 2010, the APS has required grant applications to include a monitoring and evaluation plan. In addition, the 2015 APS required that the monitoring and evaluation plan include a logical framework showing the causal linkages between activities, outputs, outcomes, and goals; identify assumptions and potential risks that are critical to the success of a project; and specify key indicators with proposed targets to track the project’s performance. In 2015, the APS also began requiring grant applicants to plan and budget for a baseline survey and a final evaluation survey for projects that propose an implementation period of greater than 12 months. Under the 2015 APS, such projects that include activities to influence beneficiary behaviors must also include at least one high-level food security or nutrition outcome indicator for each project purpose.

Minimum reporting requirements for project final reports. Since 2010, the APS has required implementing partners to submit final reports containing monitoring data that meet minimum programmatic reporting requirements. The reports are to include data on the number of beneficiaries targeted and reached; the cost per beneficiary; retail price information on key staples (before, during, and after project implementation); and the actual number and value of cash transfers or food vouchers distributed and redeemed by beneficiaries. The reports are also to include information about beneficiaries’ use of resources provided through cash transfer projects.
In addition, since 2011, the APS has required that final reports include the time from donor-signed agreement to first distribution; a description of how the program addressed gender needs; the planned number and value of cash transfers or food vouchers distributed to, and redeemed by, beneficiaries; and information on the types and quantities of commodities beneficiaries procured with food vouchers. Further, in 2015, USAID began requiring final reports to include the average cost per beneficiary per month for each modality as well as learning on the appropriateness of selected modalities.

USAID has established a process for monitoring implementation of EFSP cash transfer and food voucher projects by assigning monitoring roles and responsibilities to headquarters and in-country mission staff, developing country monitoring plans, and developing tools to assist its field staff. USAID’s process includes actions such as visiting distribution and project sites, speaking with beneficiaries and retailers, and meeting regularly with partners’ in-country staff.

Implementing partners have established processes to monitor cash and voucher projects during and after distributions of assistance. To ensure that assistance is delivered according to their procedures and to the targeted beneficiaries, implementing partners monitor distributions and interview beneficiaries about the distribution process. In addition, implementing partners conduct postdistribution surveys to gather information about the relevance, efficiency, and effectiveness of the assistance.

USAID has assigned monitoring roles and responsibilities to headquarters and in-country mission staff. According to USAID officials, FFP grant officers, in headquarters, are primarily responsible for reviewing implementing partners’ quarterly and final reports. The grant officers are additionally responsible for meeting with implementing partner headquarters officials to discuss project progress and performance.
Also, FFP’s Monitoring and Evaluation Team is responsible for developing monitoring tools, standards, guidance, and implementing partner reporting requirements, and for providing training for FFP officers and implementing partners who are responsible for monitoring cash and voucher projects. The FFP field officers are primarily responsible for verifying information provided by implementing partners, communicating regularly with implementing partners in country, and providing grant officers with information on project progress and performance. The seven FFP field officers we interviewed told us that they conduct various monitoring activities to verify information that partners provide, such as visiting distribution and project sites, completing site visit reports, speaking with beneficiaries and retailers, and meeting regularly with partners’ in-country staff. In addition, the FFP field officers said they discuss the results of their monitoring efforts and site visit reports with the grant officers on a regular basis.

To aid FFP field officers in conducting site visits, the FFP Monitoring and Evaluation Team, in 2015, developed a monitoring tool that includes sample questions for FFP officers to consider when making site visit observations and to ask respondents during monitoring visits. The tool is organized by type of respondent (i.e., implementing partner staff, beneficiaries, service providers, and retailers or market vendors), and the information collected is used to complete trip reports. According to FFP officials, as of December 2015, the Monitoring and Evaluation Team had shared the tool with every USAID field office.

To ensure regular monitoring of all FFP programs in a country, the FFP Monitoring and Evaluation Team began training FFP field officers in developing country monitoring plans in 2014, according to USAID officials.
Agency officials said the country monitoring plans use a risk-based approach to prioritize the monitoring of country projects across the FFP portfolio (including EFSP and Title II projects), to establish the number of site visits per month and year, to determine which monitoring activities to conduct, and to allocate staff resources. In addition, USAID officials said they plan to institute a requirement that FFP officers in missions with FFP programs must complete country monitoring plans and report periodically on progress in implementing objectives identified in the country monitoring plan. As of June 2016, USAID had developed monitoring plans for 26 of 28 countries.

According to the FFP officers we spoke with, country monitoring plans have helped them prioritize monitoring site visits for both Title II and EFSP projects despite challenges such as limited embassy resources (e.g., motor-pool availability and staffing shortfalls), adverse operating conditions (e.g., road closures or weather), and security concerns. According to USAID officials, to mitigate security-related constraints, FFP has awarded contracts to third-party monitors in countries where USAID has limited access. For example, for Somalia, USAID contracted with a third-party monitor to verify project activities and conduct postdistribution surveys on a monthly basis for 20 percent of the project sites. According to a representative of the monitor, it uses a risk-based approach to monitoring, prioritizing site visits and rotating the sites visited on a monthly basis.

To monitor projects and collect required information, implementing partners have established processes to monitor activities during and after distributions of EFSP cash transfer and food voucher projects.

Distribution monitoring.
According to implementing partner representatives for projects in Kenya, Liberia, and Somalia, their monitoring processes include observing distribution processes to ensure that the cash transfers and food vouchers are delivered according to standard procedures, for the stated purposes, and to targeted beneficiaries. The representatives said that during distribution, monitors collect information on the actual number of beneficiaries served, the actual transfer value provided, the timing of the distribution, and beneficiary participation by gender and age. They also collect information about the distribution site and other aspects of the distribution process, such as queue management, waiting time, beneficiary verifications, access, security, and safety. In addition, implementing partner monitors may speak with beneficiaries about their satisfaction with the distribution process. Implementing partners also monitor financial service providers to ensure that implementation of the cash transfers proceeds according to plan. In addition, implementing partner representatives monitor the voucher process at the vendor or retailer level. Figure 4 shows implementing partner representatives conducting distribution monitoring for a cash transfer project in Liberia.

Postdistribution monitoring. According to implementing partner representatives for programs in Kenya, Liberia, and Somalia, their processes include surveying beneficiaries and non-beneficiaries at their residences after the assistance is distributed. According to the implementing partners, postdistribution surveys are one of the main information sources for assessing the relevance, efficiency, and effectiveness of the assistance provided. During the surveys, implementing partner representatives gather information about beneficiary households, including income, expenditures, food security outcomes, coping strategies, perceived problems with the assistance, and modality preferences, among other topics.
In addition, implementing partner representatives said their processes also include monitoring markets after distributions by sampling traders, shops, and markets for changes in commodity prices. Further, for food voucher projects, implementing partner representatives for the project in Somalia said that they also monitor vendors or retailers to collect information on food voucher use, the availability of commodities, and the accuracy of commodity price displays, and to determine whether the food vouchers are used for their intended purposes.

We observed implementing partner representatives using technology and tools that may enhance the collection and analysis of monitoring data. For example, implementing partner representatives for the project in Somalia, based in Nairobi, Kenya, were using beneficiary management systems to register beneficiaries and were remotely monitoring beneficiary purchases and redemptions, and vendor sales and activities, for an EFSP food voucher project in Somalia. In Kenya and Liberia, we observed representatives of implementing partners for projects in those countries using mobile devices with standardized forms to conduct on-site distribution monitoring, postdistribution monitoring, and market monitoring surveys (see fig. 5). Implementing partner representatives showed us the tablets and standardized forms they use for collecting monitoring information for their projects in Somalia. According to the implementing partners, using this technology enables rapid aggregation and analysis of the data collected. We also observed implementing partner representatives conducting remote phone-based surveys with beneficiaries for projects in Kenya and Somalia. The surveys included a short series of questions on household food consumption and coping strategies. Further, the implementing partners established hotlines in Kenya and Somalia for beneficiaries to provide feedback, including complaints, about project implementation and the assistance provided.
Incomplete reporting of data that USAID requires, as well as weaknesses in USAID’s performance indicators, limits the agency’s ability to evaluate the projects’ performance. Our review of the final reports for 14 cash transfer and food voucher projects found that most of the reports lacked some of the required data that are necessary for performance evaluation. In addition, the indicators that USAID uses to assess projects’ timeliness, cost-effectiveness, and appropriateness—three criteria that it considers in approving grant applications—have weaknesses. For example, USAID’s indicator for timeliness does not capture delays in actual implementation against what was planned. Moreover, USAID’s indicator for cost-effectiveness—cost per beneficiary—does not produce data that can be compared across projects or modalities, because it does not include a standardized unit for measuring costs. Additionally, USAID has not set a benchmark for assessing market appropriateness based on market prices. These limitations affect USAID’s ability not only to evaluate the overall performance of cash transfer and food voucher programs but also to learn from experience and make informed decisions on future projects.

Our review of the final reports submitted for the 14 cash transfer and food voucher projects found that most of the reports lacked some data about the projects’ performance that USAID’s APS required. According to USAID officials, implementing partners may have communicated the missing information through other means, such as during meetings or in e-mails or quarterly reports; however, the 2013 APS, to which the projects were subject, required partners to submit the information in their projects’ final report. The APS required the final report for each project to include the number of beneficiaries targeted and reached, disaggregated by sex and age, and to verify that the program assessed and addressed gender needs and issues.
In addition, the APS required the final report to list the planned and actual number and value of food vouchers or cash transfers that implementing partners distributed and beneficiaries redeemed. Further, the APS required the final report to include information on how the beneficiaries used cash transfers as well as information on the types and quantities of commodities that beneficiaries procured with food vouchers. Finally, the APS required the final report to include data that the agency will use to measure projects’ overall performance in terms of timeliness, cost-effectiveness, and appropriateness: the time from donor-signed agreement to first distribution to beneficiaries; the cost per beneficiary; and retail price information on key staples in the area of the program before, during, and after the distribution.

While most of the 14 final reports that we reviewed included most of the required data on project beneficiaries, only 1 report, for a food voucher project in Sudan, included all 12 data elements required by USAID; the other reports lacked up to 8 of the required elements. Figure 6 shows the required data elements that were included in the 14 final reports that we reviewed.

Data on beneficiaries. All 14 final reports that we reviewed included data on the number of beneficiaries reached, although several reports were missing other required data about the beneficiaries who received the cash transfers or food vouchers. The reports indicated that cash transfer or food voucher projects reached almost 926,000 beneficiaries in Burundi, Chad, the DRC, Haiti, the Philippines, Somalia, Sudan, Yemen, and Zimbabwe. Two reports did not list the number of beneficiaries targeted by the project. In the 12 reports that listed these data, our analysis found that 9 of the 12 projects reached or exceeded their target numbers.
However, 4 reports did not enumerate beneficiaries by age range and 4 reports did not describe how gender needs were assessed and addressed, as USAID requires.

Data on assistance distributed and redeemed. Half of the 14 reports we reviewed did not list the planned value of cash transfers or the number of food vouchers distributed as the APS requires. Only 5 reports—for projects in Chad, DRC, Somalia, Sudan, and Zimbabwe—included all required information about the actual number and value of cash transfers or food vouchers distributed and redeemed. Further, of the 4 final reports for cash transfer projects, only 1 report included the required information about beneficiaries’ use of the transferred cash. Of the 8 final reports for food voucher projects, only 4 listed, as required, the types and quantities of commodities that beneficiaries procured with their vouchers. Only 1 of the 2 projects that comprised both cash transfers and food vouchers included the required information on the beneficiaries’ use of the cash and on the types and quantities of commodities procured with the vouchers.

Data on timeliness, cost-effectiveness, and appropriateness. Most of the 14 reports we reviewed were missing required data that USAID uses to analyze the timeliness, cost-effectiveness, and appropriateness of cash transfer or food voucher projects.

Timeliness. Twelve of the 14 reports did not list the number of days from the award agreement date to the cash transfer or food voucher distribution date. In addition, 5 of the 14 reports did not list the first date of distribution, making it difficult to determine the time from the partner’s signing of the agreement with USAID to the first distribution of the cash transfers or food vouchers. We requested any data that implementing partners had submitted through other documents, such as e-mails or quarterly reports; however, USAID did not provide such data.

Cost-effectiveness.
Six of the 14 final reports did not list a cost per beneficiary as required by USAID. According to a country officer, the final report for one project excluded the cost per beneficiary because the project, which was jointly funded by FFP and the Office of U.S. Foreign Disaster Assistance (OFDA), used an OFDA award mechanism that does not require this calculation.

Appropriateness. Twelve of the 14 final reports did not list retail price information on key staples in the area of the program before, during, and after the cash transfer or food voucher distribution as required by USAID. Some of the 12 reports listed commodity prices during distributions but did not list the prices 2 weeks before the program began or 2 weeks after it ended. According to USAID, country officers do not request additional price data from partners if there is no evidence of projects’ having a negative impact on the local market.

USAID’s indicators for measuring cash transfer and food voucher projects’ timeliness, cost-effectiveness, and appropriateness have weaknesses that limit the agency’s ability to evaluate these aspects of projects’ performance. USAID’s indicator for timeliness produces data that show how quickly the distribution occurs after the award agreement is signed; however, the indicator does not compare actual and planned distribution schedules, which would identify delays in project implementation. Moreover, USAID’s indicator for cost-effectiveness does not produce data that can be compared across projects or modalities, because it does not include a standardized unit for measuring costs. Further, USAID does not have a benchmark for the market appropriateness indicators that measure the impact of cash or voucher assistance on local markets. According to federal standards for internal control, management should use quality information, including relevant data from reliable sources, to achieve an agency’s objectives.
As a result of the weaknesses in these indicators, USAID and its implementing partners have limited ability to fully evaluate cash transfer and food voucher projects’ timeliness, cost-effectiveness, and appropriateness and may miss opportunities to learn from past experiences to improve the program. The indicator that USAID and its implementing partners use to assess cash transfer and food voucher projects’ timeliness provides a measure of how quickly assistance is delivered after an award is signed, but it does not systematically capture any delays in project implementation. Since 2011, USAID’s APS has required that the final report for each cash transfer and food voucher project include, as an indicator of the project’s timeliness, the number of days between grant approval and distribution of assistance. According to USAID officials, the agency uses this measure to determine how quickly assistance can be provided under different modalities. USAID also told us that it considers the duration of time between award and first distribution, based on the proposed project timeline, as a general indicator of implementing partners’ initial progress in starting a project. USAID officials noted that anecdotal evidence suggests that the number of days between grant approval and first distribution of assistance has generally decreased for cash transfer and food voucher projects. The number of days between grant approval and first distribution of assistance does not register delays in project implementation—an important aspect of timeliness, since delays in implementing cash and voucher projects can have a severe impact on beneficiaries who rely on the assistance for their livelihood. USAID does not systematically collect data that would show cash transfer and food voucher project delays, such as the planned dates of distribution for comparison with the actual dates. 
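Comparing planned with actual first-distribution dates is a straightforward calculation once both dates are collected; the sketch below uses hypothetical project names and dates, not actual EFSP data:

```python
from datetime import date

def delay_days(planned: date, actual: date) -> int:
    """Days the first distribution slipped past its planned date
    (negative if distribution began early). Illustrative sketch."""
    return (actual - planned).days

# Hypothetical planned/actual first-distribution dates for two projects.
projects = {
    "project_a": (date(2014, 3, 1), date(2014, 5, 2)),
    "project_b": (date(2014, 6, 15), date(2014, 6, 15)),
}
delays = {name: delay_days(p, a) for name, (p, a) in projects.items()}
# project_a slipped roughly two months; project_b started on time.
```

Collecting the planned date alongside the actual date is the only additional reporting burden such a comparison would impose on partners.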
Since 2011, the APS has not required partners to list in their final reports the planned date of first distribution, which appears in the implementing partner’s award proposal. Delays have been noted for many EFSP projects. For example, a recent USAID Inspector General report compared the planned and actual distribution dates for four cash transfer and food voucher projects in West Africa during the Ebola response and concluded that delays in distribution—resulting from delays in award approvals and challenges in staff recruitment and coordination—averaged 3 months. We requested and obtained the planned first-distribution dates for the 14 cash transfer and food voucher projects we reviewed and then compared the planned dates with the actual dates listed in the projects’ final reports. We found that for 7 of the 14 projects, first distribution of assistance was delayed by an average of 2 months.

The indicator that USAID uses to measure the cost-effectiveness of cash transfer and food voucher projects does not include a standardized unit for measuring cost. As a result, USAID is unable to use the cost data it collects to assess the relative cost-effectiveness of its EFSP cash transfer and food voucher projects or to compare the cost-effectiveness of such projects with that of projects that used other modalities. Since 2011, USAID’s APS has required that the final report for each cash or voucher project include, as an indicator of cost-effectiveness, the cost per beneficiary. This cost is calculated by dividing the aggregate cost of providing assistance by the actual number of beneficiaries of a cash transfer or food voucher project. We found that in the 8 final reports that listed the cash transfer or food voucher project’s cost per beneficiary—USAID’s indicator for cost-effectiveness—the reported costs could not be used to compare the projects’ cost-effectiveness because the cost units were not standardized.
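Why cost per beneficiary alone cannot support comparisons can be seen in a small sketch. The figures are hypothetical, and the normalized metric shown (overhead cost per dollar of assistance delivered, sometimes called a cost-transfer ratio in the cash-transfer literature) is one illustrative candidate for a standardized unit, not USAID's indicator:

```python
def cost_per_beneficiary(total_cost: float, beneficiaries: int) -> float:
    """The cost-per-beneficiary indicator: aggregate cost of providing
    assistance divided by the actual number of beneficiaries."""
    return total_cost / beneficiaries

def cost_per_dollar_delivered(total_cost: float, value_delivered: float) -> float:
    """One possible standardized unit (illustrative assumption): overhead
    per dollar of assistance actually delivered to beneficiaries."""
    return (total_cost - value_delivered) / value_delivered

# Two hypothetical projects with identical cost per beneficiary...
cpb_a = cost_per_beneficiary(1_200_000, 10_000)
cpb_b = cost_per_beneficiary(1_200_000, 10_000)
# ...diverge sharply once cost is normalized by the value delivered:
# project A delivered $1,000,000 in transfers, project B only $600,000.
ratio_a = cost_per_dollar_delivered(1_200_000, 1_000_000)
ratio_b = cost_per_dollar_delivered(1_200_000, 600_000)
```

Both hypothetical projects cost $120 per beneficiary, yet project A spends 20 cents of overhead per dollar delivered while project B spends a full dollar, which is the kind of difference a standardized unit would expose.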
In these 8 reports, the reported costs ranged widely, from $10 to $219 per beneficiary. According to USAID, per-beneficiary costs may range widely for a number of reasons, including variance in the size of the transfers, the frequency of the transfers, the presence of complementary services, the overall scale of the projects, and overhead costs for each project. For example, given economies of scale, a project that prints vouchers for 50,000 beneficiaries may have lower overhead costs than a project that prints vouchers for 10,000. Also, indirect costs, such as for security or transportation, may be higher for projects that require additional security or are located in remote, less accessible districts. In addition, according to an implementing partner responsible for a cash transfer project and a food voucher project in Liberia, its cost-per-beneficiary calculation for the voucher project did not include indirect costs because they were included in its calculation for the cash transfer project. USAID officials acknowledged that without a standardized cost unit, partners have applied different methods to calculate the cost per beneficiary for cash transfer and food voucher projects.

To weigh the cost-effectiveness of various food aid modalities during project design, one of USAID’s implementing partners, the United Nations World Food Programme (WFP), uses a method that compares different modalities’ costs for delivering the same numbers of calories and for delivering the same nutritional values. Using this method—known as the Omega tool analysis—to compare potential costs, WFP determined that in Burkina Faso and Niger, food transfers could deliver the same nutritional value at a lower cost than cash transfers or food vouchers. However, WFP’s analyses show that in both cases, combining modalities could deliver the same nutritional value at a lower cost than using a single modality.
In Senegal, WFP determined that food vouchers or a combination of food transfers and cash transfers or food vouchers could deliver the same nutritional value at a lower cost than food transfers. According to WFP officials, as of July 2016, WFP had used this tool while designing projects in 24 countries and plans to rely on this tool to assess cost-efficiency and cost-effectiveness. WFP officials also noted that WFP’s country offices planned to use this tool to validate cost-efficiency assumptions at the close of programs. According to a USAID official, the agency is considering revising the APS to require grant applicants to use the cost of a food ration for cash-based and in-kind food aid projects when estimating potential cost-effectiveness. The official indicated that the change is intended to standardize methods for justifying proposed projects on the basis of cost-effectiveness. The draft version of the APS that USAID released for public comment in May 2016 provides more explicit parameters for estimating cost-effectiveness that could enable the agency to compare proposed modalities during the application process.

USAID has not required implementing partners to establish benchmarks for measuring cash transfer and food voucher projects’ impact on prices in local markets—an indicator it uses to measure the projects’ appropriateness. Since 2010, the APS has required that implementing partners’ final reports include retail price information on key staples in the area around the project before the project begins, monthly during the project, and after the project ends. However, USAID has not required the partners to establish any associated benchmarks for this indicator, and as a result, USAID and implementing partners may not be able to assess whether market price fluctuations are within acceptable ranges. Implementing partners are using different thresholds to assess market impact.
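Threshold-based flagging of market prices of the kind partners use can be sketched in a few lines; the commodities, prices, and 10 percent threshold below are illustrative assumptions, not any partner's actual monitoring data:

```python
def flag_price_changes(baseline: dict, current: dict, threshold: float) -> list:
    """Return commodities whose price moved by at least `threshold`
    (a fraction, e.g., 0.10 for 10 percent) relative to baseline.
    Illustrative sketch of threshold-based market monitoring."""
    return [
        commodity
        for commodity, base_price in baseline.items()
        if abs(current[commodity] - base_price) / base_price >= threshold
    ]

# Hypothetical staple prices (per unit) at project start and mid-project.
baseline = {"maize": 0.50, "vegetable_oil": 2.00, "rice": 1.00}
current = {"maize": 0.57, "vegetable_oil": 2.05, "rice": 1.00}

# At a 10 percent threshold, only maize (+14 percent) is flagged
# for investigation; at 20 percent, nothing is.
flagged = flag_price_changes(baseline, current, 0.10)
```

A common benchmark would let USAID interpret partners' price data consistently; as written, each partner chooses its own `threshold`.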
For example, WFP’s policy is to reassess the value of the assistance provided if prices for staple foods fluctuate by 10 percent or more. According to WFP officials, WFP may also change the assistance modality as a result of price fluctuations. Representatives of other implementing partners told us that they reassess project implementation when prices fluctuate by 20 percent or more. Implementing partner representatives also stated that, because of a lack of guidance on this topic, they were unsure of how USAID assesses the market information they submit.

As of the end of June 2016, USAID’s draft revisions to the APS included a reference to market monitoring guidance for implementing partners. According to this guidance, implementing partners are to set price thresholds as a basis for determining when price changes must be investigated and explained. The guidance also states that when prices increase or decrease beyond this set parameter, the change should be flagged and the cause should be investigated. The draft revisions to the APS also require partners to include market analysis information and any significant changes in their quarterly and final project reports. The draft revisions to the APS do not explicitly require partners to report the price thresholds in their final reports.

Our review of 14 rigorous studies of the relative impacts of cash transfers, food vouchers, and food transfers found that all three modalities can improve food security outcomes for people facing food emergencies. The modalities’ impacts on food security outcomes varied by study, with no modality consistently outperforming the others. Contextual factors, such as the severity of the food crisis and the capacity of local markets, may have contributed to this variation.
Studies that compared the relative costs of the three modalities generally reported that cash transfers were least expensive, although most of the studies did not account for the full costs associated with the modalities. Among the studies we reviewed, those comparing recipients of cash transfers, food vouchers, and food transfers with control groups of individuals who received no assistance demonstrated that all three modalities can lead to significant improvements in dietary quantity and quality. For example, a study in Ecuador found that, compared with the control group, all recipients of cash transfers, food vouchers, and food transfers experienced significantly improved outcomes in terms of the value and volume of food consumed, caloric intake, and dietary diversity. Similarly, a study in Bangladesh found that all recipients of cash transfers, food transfers, or combinations of the two modalities experienced significantly improved outcomes for value and volume of food and for caloric intake. Eight of the 14 studies we reviewed included control groups, which allowed us to compare food security outcomes for recipients of cash transfers, food vouchers, or food transfers with outcomes for groups of individuals that did not receive any type of assistance. Among 6 studies that used control groups to examine the value and volume of food provided, 5 studies showed statistically significant improvements as a result of receiving all three types of assistance. Similarly, 3 of 6 studies that used control groups to examine caloric intake also found statistically significant improvements for all three modalities. The studies showed that all three assistance modalities generally led to significantly improved outcomes for the most frequently assessed outcomes in the studies—value and volume of food consumed, caloric intake, and dietary quality. However, the results for some other outcomes were mixed. 
In particular, 3 of the 4 studies that considered nutritional status showed some modalities leading to significant improvements, while the fourth study showed no modality leading to significant improvements. Our analysis of the 14 studies comparing the impacts of cash transfers, food vouchers, and food transfers on food security outcomes found that the modalities’ performance varied by study and project. Moreover, none of the three modalities consistently outperformed the others. Contextual differences among the projects may have contributed to the variations in the modalities’ impacts. (For detailed results of our literature review, see app. II.) Our review of the 14 studies found that the impacts of the modalities on food security outcomes varied by study. For example, the study of the project in Yemen reported that recipients of the cash transfers bought a wider range and higher value of food items, showing that cash transfers provided significantly greater improvements in dietary quality than did food transfers. The Yemen study also found that food transfers provided higher levels of caloric intake than did cash transfers, which the study’s authors attributed to the relatively inexpensive staples, such as wheat and oil, included in the food transfers. In contrast, a study of a project in Niger found that food transfers resulted in significantly greater improvements in dietary quality than did cash transfers, indicating that, in this instance, food transfers provided a more varied and higher-quality diet. This finding was attributed to the fact that cash recipients in Niger used their transfers to buy bulk grains that were significantly cheaper than the foods provided to food transfer recipients. The study authors determined that the cash beneficiaries purchased these cheap bulk grains in anticipation of seasonal price increases—essentially stocking up on supplies for the “hungry” season. 
As a result, the food recipients, who relied on the food transfers provided by the project, had a more varied and higher-quality diet. In some instances, a modality significantly outperformed one or both of the other modalities for a particular outcome in some countries but not in others. For example, studies of projects in Niger, Sri Lanka, Uganda, and Yemen showed that cash transfers had a greater impact on the value and volume of food than food transfers had, but studies of projects in Bangladesh, the DRC, Ecuador, and Mexico showed that the three modalities had comparable impacts on this outcome. Moreover, a study of a project in Niger comparing the impacts of the three modalities found that food transfers provided the greatest dietary diversity, while studies of projects in Malawi, Uganda, and Yemen found that cash transfers provided greater dietary diversity than food transfers. The DRC and Ecuador studies did not find that any modality had a consistently and significantly greater impact on dietary diversity. In 11 of the 14 studies, some of the differences in modalities’ impacts on food security outcomes were not statistically significant and therefore may have occurred by chance. For example, the study of the Ecuador project found that cash transfers, food vouchers, and food transfers all led to improvements in beneficiaries’ dietary diversity. While statistically significant compared with the control group, the improvements that resulted from each of the three modalities were not statistically different from each other across the metrics studied. Similarly, a study of a project in Mexico noted that both cash transfers and food transfers increased the value and volume of food consumed and the levels of nutrients, with statistically insignificant differences in the two modalities’ impacts. 
While we could not determine precisely why the three modalities’ impacts varied in the studies we reviewed, contextual differences in the projects the studies examined may help explain such variation. According to researchers we spoke with, the effectiveness of cash transfers, food vouchers, and food transfers is heavily influenced by contextual factors such as the severity of the food crisis, beneficiaries’ specific needs when projects began, changes in market prices, and the projects’ designs. While each of the 14 studies we reviewed controlled for multiple factors when comparing the modalities’ impacts on food security outcomes, we noted numerous contextual differences among the evaluated projects that may have contributed to variation in these impacts. The projects’ purposes and goals ranged from providing disaster relief to responding to seasonal food emergencies, aiding displaced persons, addressing long-standing emergencies, or helping achieve development objectives. In addition, the projects varied in factors such as the value and frequency of the transfers and whether the assistance was conditional or unconditional. For example, a study in Sri Lanka examined a 3-month project that took place in tsunami-affected regions of the country. The study found that cash transfers had a greater impact than food transfers on dietary quantity. However, its findings are difficult to apply to many other situations because of the post-tsunami conditions that prevailed, the relatively short duration of the emergency relief project, and some unevenness in the frequency of food transfers relative to cash transfers. Changes in markets and prices can affect the modalities’ relative impacts on food security outcomes. For example, a study of a project in Ethiopia found that both food transfers and a mixture of food and cash transfers significantly outperformed cash-only transfers in reducing the periods of time when beneficiaries experienced food shortages. 
However, the study’s authors noted that the project took place during a period of high food-price inflation that considerably reduced the value of the cash transfers compared with that of the food transfers. Moreover, differences in projects’ design—specifically, whether the project provided any assistance in addition to the food or cash transfers—may have contributed to variation in the modalities’ impacts. For example, a study of a project in Bangladesh concluded that aspects of project design, and in particular the use of complementary programs designed to achieve project outcomes, were associated with the greatest impacts. This study found that modalities combined with complementary programs that provided guidance and training on nutrition significantly outperformed cash transfers, food vouchers, and food transfers that were not combined with complementary programs. According to the study’s authors, these results help demonstrate that food security outcomes can be improved by including complementary programs designed to achieve those objectives and that these improvements occur when such programs are combined with any of the three modalities. Among the 14 studies we reviewed, 11 considered the relative costs of cash transfers, food vouchers, and food transfers. Seven of these 11 studies reported that cash transfers were least expensive, while 1 study reported that food transfers were least expensive. Of the 3 remaining studies that considered the modalities’ relative costs, 1 study did not identify the least expensive modality and the other 2 studies had mixed results, depending on the method used to estimate costs. The studies considered costs for a range of activities related to delivering the assistance, such as administration, staffing, banking for cash transfers, production for food vouchers, and storage and transportation for food transfers. 
However, only 3 of the 11 studies considered not only the activity costs related to delivering the assistance but also the costs of purchasing food. The three studies that considered the costs of purchasing food reached varying conclusions about the modalities’ overall relative costs. One study, of a project in Niger, found that food transfers were least expensive overall. Another study, of a project in Malawi, reported mixed results, finding that cash was more cost-effective but food transfers were more cost-efficient. The third study, of a project in Bangladesh, did not clearly identify the least expensive modality. According to researchers, food transfers may be the least expensive modality overall if market conditions allow implementing partners to purchase food at prices low enough to offset the costs of delivering it, since these costs are usually higher than the costs of delivering cash transfers or food vouchers. For example, in some situations, implementing partners’ savings from purchasing large amounts of food wholesale—that is, for considerably lower prices than beneficiaries would pay in their local markets—could offset the higher costs of delivering the food.

With USAID’s additional flexibility to choose among modalities of food assistance, the ability to demonstrate the performance of each modality is of increasing importance. Monitoring and evaluation are essential to assessing and demonstrating, with timely and credible evidence, the effectiveness of the various modalities employed to deliver assistance, including cash transfers and food vouchers. USAID and its implementing partners have established processes to monitor EFSP cash transfer and food voucher projects. Specifically, USAID has assigned roles and responsibilities and developed monitoring plans and tools to aid FFP field officers who conduct site visits and meet with implementing partners. 
Further, implementing partners have developed processes to monitor the effectiveness of their projects through distribution and postdistribution monitoring. However, the incompleteness of required information about project performance in partners’ final reports limits USAID’s ability to assess whether EFSP cash and voucher projects met their performance goals. In addition, weaknesses in USAID’s indicators for measuring EFSP cash transfer and food voucher projects’ timeliness, cost, and appropriateness—criteria that USAID considers in approving the projects—limit the extent to which reported data can demonstrate the effectiveness of such projects and be used to evaluate the performance of cash and vouchers relative to in-kind food aid. The 14 studies we reviewed showed that cash transfers, food vouchers, and food transfers can all improve food security. At the same time, the studies’ findings of variation in the modalities’ impact on food security suggest that when selecting modalities for emergency food assistance, USAID and other donors should carefully consider contextual factors that could influence project outcomes. By taking steps to strengthen monitoring and evaluation of its cash transfer and food voucher projects, USAID will ensure access to information about each implemented project that it can use in planning future projects—including selecting the appropriate modality—and will be better positioned to optimize its efforts to respond to continuing food emergencies around the world. To strengthen USAID’s monitoring and evaluation of cash transfer and food voucher projects and help ensure improved program oversight of these projects, we recommend that the USAID Administrator take the following two actions: Take steps to ensure that final reports submitted for cash transfer and food voucher projects comply with USAID’s minimum data requirements. 
Strengthen the indicators USAID uses to measure the timeliness, cost-effectiveness, and appropriateness of EFSP cash transfer and food voucher projects. We provided a draft of this report to USAID, which provided both written and technical comments. In its written comments, reproduced in appendix IV, USAID agreed with our findings and recommendations. Regarding our first recommendation, USAID agreed to standardize data collected in final reporting by soliciting and hiring program support officers for each of its geographic teams. Regarding our second recommendation, USAID agreed to improve indicators to ensure evaluation and comparison across its emergency food assistance portfolio by updating these indicators in its forthcoming Annual Program Statement for International Emergency Food Assistance. We incorporated USAID’s technical comments as appropriate. We are sending copies of this report to the appropriate congressional committees; the Administrator of USAID; and the Secretary of State. The report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

In this report, we (1) examine the U.S. Agency for International Development’s (USAID) and implementing partners’ processes for monitoring cash transfer and food voucher projects; (2) analyze the extent to which monitoring data that partners reported to USAID can be used to evaluate the performance of such projects; and (3) review studies that examined the relative impacts of cash transfers, food vouchers, and food transfers on food security outcomes and the relative cost of these modalities. 
To address these objectives, we analyzed data and reviewed program documents provided by USAID and its implementing partners, including the United Nations World Food Programme (WFP) and nongovernmental organizations (NGOs), for Emergency Food Security Program (EFSP) cash transfer and food voucher projects. We met in Washington, D.C., with USAID officials and with implementing partner officials representing NGOs that received USAID EFSP grants, and we met in Rome, Italy, with officials from WFP and the U.S. Mission to the UN. In addition, we visited Kenya and Liberia, where we reviewed projects that were under way in Kenya, Liberia, and Somalia. We selected these projects using criteria that included the amount of EFSP funding, the type of project (cash transfer or food voucher), and the implementing partner. While in Kenya and Liberia, we met with USAID officials from the U.S. missions; representatives of implementing partners, vendors, and financial institutions; and project beneficiaries, among others. To examine USAID’s and implementing partners’ processes for monitoring EFSP cash and voucher projects, we reviewed activities that USAID and implementing partners, including WFP and NGOs, undertook. We also reviewed relevant program documents that they provided, such as award agreements, project quarterly and final reports, market monitoring tools, site visit reports, and results of distribution surveys. We interviewed USAID officials in Washington, D.C., including award agreement officers, country backstop officers, and members of FFP’s monitoring and evaluation team, among others. We also interviewed FFP regional monitoring officers located in Dakar, Senegal, and in Nairobi, Kenya, and we interviewed FFP field officers responsible for monitoring cash and voucher projects in eight countries. 
In addition, we met in Washington, D.C., with implementing partner officials representing NGOs that were awarded USAID EFSP grants and in Rome, Italy, with officials from WFP and the U.S. Mission to the UN. Further, we visited Kenya and Liberia, where we reviewed EFSP-funded cash and voucher projects under way in Kenya, Liberia, and Somalia. In Kenya and Liberia, we also visited project sites, observed implementing partners conducting monitoring activities, and met with beneficiaries of projects in those countries. To examine the extent to which reported data can be used to evaluate the performance of cash transfer and food voucher projects, we reviewed USAID’s 2013 Annual Program Statement (APS) for International Emergency Food Assistance and grant agreements for final reporting requirements. In addition, we reviewed 14 final reports that implementing partners submitted to USAID for four cash transfer projects, eight food voucher projects, and two projects with both cash transfer and food voucher components. We selected these projects using the following criteria, intended to ensure a diverse sample of implementing partners: (1) the funding was awarded for NGO projects in fiscal years 2013 and 2014 or for WFP projects in fiscal year 2014, all of which were subject to USAID’s 2013 APS; (2) the award was at least $2 million for NGO projects or at least $3 million for WFP projects; and (3) the projects closed by October 31, 2015. Of the 36 NGO projects funded in fiscal years 2013 and 2014, 22 received awards of at least $2 million; of these, only 11 were closed by October 31, 2015, and therefore were included in our review. Of the 18 WFP projects funded in fiscal year 2014, 7 received awards of at least $3 million; of these 7 projects, only 3 were closed by October 31, 2015, and therefore were included in our review. Because the 14 projects represent a nonprobability sample, our findings may not be generalizable to all USAID EFSP programs. 
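The three selection criteria above amount to a simple filter over the candidate projects. The sketch below illustrates that filter; the function and its parameter names are illustrative assumptions, not drawn from USAID systems or the actual project records.

```python
from datetime import date

# Projects had to be closed by this date to enter the review.
CUTOFF = date(2015, 10, 31)

def eligible(partner_type, fiscal_year, award_usd, close_date):
    """Return True if a project meets all three selection criteria."""
    if partner_type == "NGO":
        in_window = fiscal_year in (2013, 2014)  # criterion 1, NGO branch
        min_award = 2_000_000                    # criterion 2, NGO threshold
    elif partner_type == "WFP":
        in_window = fiscal_year == 2014          # criterion 1, WFP branch
        min_award = 3_000_000                    # criterion 2, WFP threshold
    else:
        return False
    # Criterion 3: the project must have closed by the cutoff date.
    return in_window and award_usd >= min_award and close_date <= CUTOFF

# Hypothetical projects for illustration only.
print(eligible("NGO", 2013, 2_500_000, date(2015, 6, 30)))  # True
print(eligible("WFP", 2014, 3_100_000, date(2016, 1, 15)))  # False: closed after cutoff
print(eligible("NGO", 2014, 1_500_000, date(2015, 9, 1)))   # False: award below $2 million
```

Applied to the candidate pools described above, this filter is what narrows the 36 NGO projects and 18 WFP projects down to the 11 and 3, respectively, that entered the review.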
We reviewed the APS and grant agreements to identify minimum programmatic reporting requirements, and we reviewed the final reports for the 14 selected projects for data responding to these requirements. In addition, we reviewed the 14 final reports for data and the indicators that USAID uses to measure cash transfer and food voucher projects’ timeliness, cost-effectiveness, and appropriateness—namely, time from award to first distribution, cost per beneficiary, and food price data. We met with USAID officials in Washington, D.C., to discuss APS requirements, implementing partner reports, and the quality of indicators. To review studies’ conclusions about the relative impacts of cash transfers, food vouchers, and food transfers on food security and the modalities’ relative costs, we took the following steps: To identify relevant studies, we (1) considered prior reviews of rigorous evaluations related to cash-based humanitarian assistance and food security; (2) conducted our own search of literature using appropriate terms; and (3) asked six researchers who had conducted prior reviews of rigorous evaluations whether they knew of any additional evaluations. We screened the studies identified by these sources to determine whether any of the studies met the following criteria: (1) compared cash transfers, food vouchers, or food transfers with at least one other modality; (2) evaluated at least one food security outcome; and (3) considered food security outcomes by means of randomized control trials or groups, carefully selected comparison groups, or a quasi-experimental design that used statistical techniques to make precise comparisons. This process resulted in our selecting 14 rigorous studies, made public since 2006, that evaluated whether cash transfers, food vouchers, or food transfers were more successful in achieving intended impacts on food security outcomes in situations where at least two modalities were implemented and could be compared. 
These 14 studies examined projects in 10 countries: Bangladesh, the Democratic Republic of the Congo, Ecuador, Ethiopia, Malawi, Mexico, Niger, Sri Lanka, Yemen, and Uganda. (See app. II for a list of these studies.) To review the 14 studies we selected, we used a data collection instrument (DCI) designed to examine the studies’ design, quality, and major findings. (App. II lists the studies we reviewed and provides a detailed summary of the results of our review.) We developed the following eight key food security outcomes based on our analysis of the metrics in the 14 selected studies and on our discussions with researchers with relevant expertise: value and volume of food, caloric intake, dietary diversity, food consumption score, nutrient number and levels, caloric value, experiential measures, and nutritional status. We reviewed each study’s results to determine whether any modality demonstrated a statistically significant improvement in one or more of these outcomes compared with the other modality, or modalities, that the study examined. Two independent analysts used the DCI to review the studies, discussing and reconciling any differences in their initial assessments. In some instances, studies presented the results for several metrics for the same outcome, such as a household dietary diversity score and a household dietary diversity index. In those instances, our decision rule was that one modality had to demonstrate a statistically significant improvement over all other modalities examined for all metrics that the outcome comprised. If this criterion was not met, we determined that the modalities’ impacts were comparable. We did not consider results for subpopulations, such as by region, income group, gender, or age range, except when results for a particular outcome were reported only for a subpopulation. 
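The per-outcome decision rule described above can be written out as a small function. This is a minimal sketch under simplified assumptions; the metric names and modality labels are illustrative, not taken from the studies' data.

```python
def compare_modalities(metric_winners):
    """Apply the decision rule for a single food security outcome.

    `metric_winners` maps each metric the outcome comprises (e.g., a
    household dietary diversity score and a household dietary diversity
    index) to the one modality ("cash", "voucher", or "food") that
    showed a statistically significant improvement over all other
    modalities on that metric, or to None if no modality did.
    One modality is judged to outperform the others only when it wins
    on every metric; otherwise the impacts are treated as comparable.
    """
    winners = set(metric_winners.values())
    if len(winners) == 1 and None not in winners:
        return winners.pop()  # the same modality won every metric
    return "comparable"

# Hypothetical dietary diversity metrics for illustration only.
print(compare_modalities({"diversity_score": "cash", "diversity_index": "cash"}))  # cash
print(compare_modalities({"diversity_score": "cash", "diversity_index": None}))    # comparable
print(compare_modalities({"diversity_score": "cash", "diversity_index": "food"}))  # comparable
```

The rule is deliberately conservative: a tie on any metric, or a split decision across metrics, never credits a single modality with outperforming the others.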
We also did not consider impacts on factors that did not qualify as costs or food security outcomes, such as beneficiaries’ income, assets, and overall consumption. When examining findings about dietary diversity, we relied on indices and metrics that had been created for this purpose; we did not assess results for individual food groupings, as these varied greatly between studies and did not lend themselves to the overall methodology that we employed. (App. II summarizes the results of our review.) The researchers who helped us identify relevant studies that met our criteria generally stated that they believed the 14 studies we selected constituted a reasonable body of evidence that no one modality significantly outperforms the others and that all modalities can lead to improvements in food security. However, one researcher reported that the evidence base does not allow for generalizable conclusions about the situations in which specific modalities are most appropriate. We also considered whether the 14 studies examined the relative costs of the modalities, and in the studies that did, we considered which modality was found to be least expensive at delivering the same level of assistance. The studies employed a variety of data and methods to estimate relative costs. For example, some studies examined cost-efficiency, others examined cost-effectiveness, and still others examined both. In addition, some studies used a costing method adapted from the health economics field that involves identifying the costs of specific “activities” required to implement a modality, such as staffing, banking, and production. Meanwhile, three studies comparing cash and food transfers examined the costs of both delivering the modalities and purchasing the food beneficiaries consumed. If more than one method was used to estimate costs in a study, we considered the results for all methods used and reported that one modality was the least expensive only if all methods used found it so. 
Otherwise, we reported that the results were mixed. In one instance, a study examined four projects in one country: one project provided food transfers, another provided cash transfers, and two others provided a mix of cash and food transfers. This study examined both the costs and the cost-efficiency of the transfers. In that instance, because the results differed by project and method used, we could not determine which modality was least expensive. Our review of the 14 studies was not intended to assess any individual donor such as USAID, and thus the conclusions we analyzed cannot be used to assess the performance of USAID’s projects. In addition, several of the projects examined were pilots conducted to assess the comparative performance of food and cash on household food security. The projects that the studies examined varied in terms of contextual factors such as the projects’ purposes and goals, their design, and the outcomes examined. While the studies demonstrate that no modality consistently outperformed the others across a range of settings and situations, this variance does not allow us to determine the precise conditions in which one modality might outperform the others. To obtain context and background, we also considered additional studies on cash-based assistance that did not meet our criteria for inclusion in our systematic review but provided insights into projects that used cash transfers, food vouchers, food transfers, or a combination of these modalities. (See app. III for a list of these additional studies.) We conducted this performance audit from June 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to reviewing 14 rigorous studies that compared the impacts of cash transfers, food vouchers, and food transfers on food security outcomes, we consulted the following 23 studies for insights into projects that included one or more of these modalities. In some cases, these studies examined large-scale cash transfer or food voucher projects or the relative costs of the modalities and were recommended by food security researchers we interviewed.
1. Attanasio, Orazio, Erich Battistin, and Alice Mesnard. “Food and Cash Transfers: Evidence from Colombia.” The Economic Journal, vol. 122 (March 2011): 92-124.
2. Bailey, Sarah, and Paul Harvey. “State of Evidence on Humanitarian Cash Transfers.” Background note for the High Level Panel on Humanitarian Cash Transfers, Overseas Development Institute. March 2015.
3. Bailey, Sarah, and Sophie Pongracz. “Humanitarian Cash Transfers: Cost, Value for Money and Economic Impact.” Background note for the High Level Panel on Humanitarian Cash Transfers, Overseas Development Institute. March 2015.
4. Creti, Pantaleo. The Impact of Cash Transfers on Local Markets: A Case Study of Unstructured Markets in Northern Uganda. Cash Learning Partnership, April 2010.
5. Creti, Pantaleo. The Voucher Programme in the Gaza Strip: Mid-Term Review (Final Report). Commissioned by the World Food Programme and Oxfam, United Kingdom, 2011.
6. De Sardan, Jean Pierre, Hannatou Adamou, Oumarou Hamani, Younoussi Issa, Nana Issaley, and Issaka Oumarou. “Cash Transfers in Niger: The Manna, the Norms and the Suspicions.” Translation of a paper undertaken by LASDEL and originally published in French. 2013.
7. Doocy, Shannon, and Hannah Tappis. Cash-Based Approaches in Humanitarian Emergencies: A Systematic Review. Funded by the U.K.’s Department for International Development (DFID). Johns Hopkins. 2015.
8. 
Doocy, Shannon, Emily Lyle, and Hannah Tappis. Emergency Transfers in Northern Syria. An Economic Evaluation of GOAL Food Assistance Programs in Idleb Governorate. Johns Hopkins, Bloomberg School of Public Health, September 2015.
9. Dunn, Sophia, Mike Brewin, and Aues Scek. Cash and Voucher Monitoring Group: Final Monitoring Report of the Somalia Cash and Voucher Transfer Programme. London: Humanitarian Policy Group, Overseas Development Institute, 2014.
10. Gentilini, Ugo. Our Daily Bread: What is the Evidence on Comparing Cash versus Food Transfers? Social Protection & Labor Discussion Paper No. 1420. Washington, D.C.: World Bank Group, July 2014.
11. Gentilini, Ugo. “Revisiting the ‘Cash vs. Food’ Debate: New Evidence for an Old Puzzle.” World Bank Research Observer, vol. 31 (2016): 135-167.
12. Gentilini, Ugo. The Other Side of the Coin: The Comparative Evidence of Cash and In-Kind Transfers in Humanitarian Situations. Washington, D.C.: World Bank Group, July 2016.
13. Gilligan, Daniel, Melissa Hidrobo, John Hoddinott, Shalini Roy, and Benjamin Schwab. “Much Ado about Modalities: Multicountry Experiments on the Effects of Cash and Food Transfers on Consumption Patterns.” International Food Policy Research Institute paper prepared for the Agriculture & Applied Economics Association Annual Meeting, Minneapolis, July 2014.
14. Hedlund, Kerren, Ben Allen, Maria Bernandez, Muriel Cala, Saul Guerrero, Chloe Milloz Baudy, Julien Morel, Panos Navrozidis, Silkie Pietzsch, and Michael Yemene. Meta-Evaluation of ACF Fresh Food Voucher Programmes. ACF International, with funding from the Cash Learning Partnership and the European Commission on Humanitarian Aid and Civil Protection. January 2012.
15. Hoddinott, John, Daniel Gilligan, Melissa Hidrobo, Amy Margolies, Shalini Roy, Susanna Sandström, Benjamin Schwab, and Joanna Upton. Enhancing WFP’s Capacity and Experience to Design, Implement, Monitor and Evaluate Vouchers and Cash Transfer Programmes. Study Summary. 
International Food Policy Research Institute, 2013.
16. Husain, Arif, Jean-Martin Bauer, and Susanna Sandström. Economic Impact Study: Direct and Indirect Impact of the WFP Food Voucher Programme in Jordan. World Food Programme, 2014.
17. Kardan, Andrew, Ian MacAuslan, and Ngoni Marimo. Evaluation of Zimbabwe’s Emergency Cash Transfer (ZECT) Program—Final Report. Oxford Policy Management, supported by WFP and Concern Worldwide, 2010.
18. Majewski, Brian, Lois Austin, Katherine George, Carol Ward, and Kurt Wilson. WFP’s 2008 Cash and Voucher Policy (2008-2014): A Policy Evaluation. Evaluation Report—Volume 1. Konterra Group paper for World Food Programme. 2014.
19. Margolies, Amy, and John Hoddinott. “Costing Alternative Transfer Modalities.” IFPRI Discussion Paper 01375. International Food Policy Research Institute, September 2014.
20. Maunder, Nick, Victoria De Bauw, Neil Dillon, Gabrielle Smith, and Sharon Truelove. “Evaluation of the Use of Different Transfer Modalities in ECHO Humanitarian Aid Actions, 2011-2014.” Analysis for Economic Decisions. Evaluation commissioned by the European Commission. 2016.
21. Michelson, Hope, Christopher Barrett, Laura Cramer, Erin Lentz, Megan McGlinchy, Mitchell Morey, and Richard Mulwa. “Cash, Food, or Vouchers? An Application of the Market Information and Food Security Response Analysis Framework in Urban and Rural Kenya.” Food Security, vol. 4 (2012): 455-469.
22. Mountfield, Ben. Unconditional Cash Transfers in Gaza: An External Review. Commissioned by Oxfam, Great Britain. 2012.
23. Poulsen, Lene, Sophia Dunn, Sado Hashi, Mohamed Adnan Ismail, Colleen McMillon, Caroline Tanner, and Njoroge Thuo. Somalia Protracted Relief and Recovery Operation: Strengthening Food and Nutrition Security and Enhancing Resilience, June 2012–December 2015. Mid-Term Evaluation Report. World Food Programme, 2015. 
In addition to the individual named above, Joy Labez (Assistant Director), Sushmita Srikanth (Analyst-in-Charge), David Blanding, Carol Bray, Ming Chen, Martin De Alteriis, Neil Doherty, Mark Dowling, Reid Lowe, Julia Ann Roberts, and Shannon Roe made key contributions to this report.
International Food Assistance: Cargo Preference Increases Food Aid Shipping Costs, and Benefits Are Unclear. GAO-15-666. Washington, D.C.: September 25, 2015.
International Food Assistance: USAID Should Systematically Assess the Effectiveness of Key Conditional Food Aid Activities. GAO-15-732. Washington, D.C.: September 10, 2015.
International Cash-Based Food Assistance: USAID Has Processes for Initial Project Approval but Needs to Strengthen Award Modification and Financial Oversight. GAO-15-760T. Washington, D.C.: March 26, 2015.
USAID Farmer-to-Farmer Program: Volunteers Provide Technical Assistance, but Actions Needed to Improve Screening and Monitoring. GAO-15-478. Washington, D.C.: April 30, 2015.
International Cash-Based Food Assistance: USAID Has Developed Processes for Initial Project Approval but Should Strengthen Financial Oversight. GAO-15-328. Washington, D.C.: March 26, 2015.
International Food Aid: Better Agency Collaboration Needed to Assess and Improve Emergency Food Aid Procurement System. GAO-14-22. Washington, D.C.: March 26, 2014.
International Food Aid: Prepositioning Speeds Delivery of Emergency Aid, but Additional Monitoring of Time Frames and Costs Is Needed. GAO-14-277. Washington, D.C.: March 5, 2014.
Global Food Security: USAID Is Improving Coordination but Needs to Require Systematic Assessments of Country-Level Risks. GAO-13-809. Washington, D.C.: September 17, 2013. E-supplement GAO-13-815SP.
International Food Assistance: Improved Targeting Would Help Enable USAID to Reach Vulnerable Groups. GAO-12-862. Washington, D.C.: September 24, 2012.
World Food Program: Stronger Controls Needed in High-Risk Areas. GAO-12-790. Washington, D.C.: September 13, 2012. 
International Food Assistance: Funding Development Projects through the Purchase, Shipment, and Sale of U.S. Commodities Is Inefficient and Can Cause Adverse Market Impacts. GAO-11-636. Washington, D.C.: June 23, 2011.
International School Feeding: USDA’s Oversight of the McGovern-Dole Food for Education Program Needs Improvement. GAO-11-544. Washington, D.C.: May 19, 2011.
International Food Assistance: Better Nutrition and Quality Control Can Further Improve U.S. Food Aid. GAO-11-491. Washington, D.C.: May 12, 2011.
International Food Assistance: A U.S. Governmentwide Strategy Could Accelerate Progress toward Global Food Security. GAO-10-212T. Washington, D.C.: October 29, 2009.
International Food Assistance: USAID Is Taking Actions to Improve Monitoring and Evaluation of Nonemergency Food Aid, but Weaknesses in Planning Could Impede Efforts. GAO-09-980. Washington, D.C.: September 28, 2009.
International Food Assistance: Local and Regional Procurement Can Enhance the Efficiency of U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-570. Washington, D.C.: May 29, 2009.
For more than 60 years, the United States provided assistance to food-insecure countries primarily in the form of food commodities procured in the United States and transported overseas. In recent years, the U.S. government has increasingly provided food assistance in the form of cash transfers or food vouchers. In fiscal years 2010 through 2015, funding from the U.S. Agency for International Development (USAID) through the Emergency Food Security Program (EFSP) for cash transfer and food voucher projects grew from about $76 million to nearly $432 million. GAO was asked to review USAID's monitoring and evaluation of cash-based food assistance. This report examines, among other things, (1) USAID's and implementing partners' processes for monitoring cash transfer and food voucher projects and (2) the extent to which monitoring data reported to USAID can be used to evaluate the performance of such projects. GAO analyzed program data, interviewed relevant officials, and conducted fieldwork in Kenya and Liberia, selected on the basis of criteria such as funding and types of projects. GAO also reviewed the final reports for a nonprobability sample of closed cash transfer and food voucher projects. USAID and its implementing partners have established processes to monitor cash transfer and food voucher projects. To monitor the implementation of these projects, USAID has assigned monitoring roles and responsibilities to staff, is developing country monitoring plans and monitoring tools, and is working to verify information that partners have provided through actions such as conducting site visits and speaking with beneficiaries. To ensure that assistance is delivered according to their procedures and to the targeted beneficiaries, implementing partners monitor distributions and interview beneficiaries regarding the distribution of the assistance.
In addition, implementing partners conduct postdistribution surveys to gather information about the relevance, efficiency, and effectiveness of the assistance (see figure). Incomplete reporting and weaknesses in certain performance indicators limit USAID's ability to use monitoring data to evaluate cash transfer and food voucher projects' performance. GAO's review of 14 final reports, which USAID requires for each project, found that a majority of the reports lacked required data elements, such as prices for key staple foods. Only 1 report included all 12 required data elements, and the other reports were missing up to 8 elements. As a result, USAID has limited ability to assess the overall performance of these projects. Further, GAO found weaknesses in USAID's indicators for measuring cash and voucher projects' timeliness, cost-effectiveness, and appropriateness. USAID's indicator for timeliness does not track delays in implementation. In addition, the indicator for cost-effectiveness does not include a standardized unit for measuring project costs. Further, the indicator for project appropriateness does not have associated benchmarks for measuring cash transfer and food voucher projects' impact on local markets. As a result, USAID lacks information that would be useful for evaluating the projects' effectiveness relative to that of in-kind food aid. According to standards for internal control in the federal government, management should use quality information, including relevant data from reliable sources, to achieve an agency's objectives. GAO recommends that USAID (1) take steps to ensure compliance with its requirements for data in final reports and (2) strengthen the indicators it uses to measure the timeliness, cost-effectiveness, and appropriateness of cash transfer and food voucher projects. USAID concurred with GAO's recommendations.
Within DOD, the Office of the Under Secretary of Defense for Intelligence (OUSD (I)) is responsible for coordinating and implementing DOD-wide policies related to access to classified information. Within OUSD (I), the Defense Security Service (DSS) is responsible for conducting background investigations and administering the personnel security investigations program for DOD and 22 other federal agencies that allows industry personnel access to classified information. Two offices are responsible for adjudicating cases involving industry personnel. DSS’s Defense Industrial Security Clearance Office (DISCO) adjudicates cases that contain only favorable information or minor issues regarding security concerns (e.g., some overseas travel by the individual), and the Defense Office of Hearings and Appeals (DOHA) within the Defense Legal Services Agency adjudicates cases that contain major security issues (e.g., an individual’s unexplained affluence or criminal history). As with military members and federal workers, industry personnel must obtain a security clearance to gain access to classified information, which is categorized into three levels: top secret, secret, and confidential. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national defense or foreign relations. For top secret information, the damage that unauthorized disclosure could reasonably be expected to cause is “exceptionally grave damage;” for secret information, it is “serious damage;” and for confidential information, it is “damage.” Individuals who need access to classified information over a long period are required to periodically renew their clearance (a reinvestigation). The time frames for reinvestigations are 5 years for top secret clearances, 10 years for secret clearances, and 15 years for confidential clearances.
To ensure the trustworthiness, judgment, and reliability of industry personnel in positions with access to classified information, DOD relies on a three-stage personnel security clearance process. (See fig. 1.) This process, which is essentially the same for industry personnel as it is for military members and federal employees, entails (1) determining that the position requires a clearance and, if so, submitting a request for a clearance to DSS; (2) conducting an initial investigation or a reinvestigation; and (3) using the investigative report to determine eligibility for access to classified information—a procedure known as “adjudication.” In the preinvestigation stage, if a position requires a clearance, then the industrial contractor must request an investigation of the individual. The request could be the result of needing to fill a new position for a recent contract, replacing an employee in an existing position, renewing the clearance of an individual who is due for reinvestigation, or processing a request for a future employee up to 180 days in advance of the hiring date. Once the requirement for a security clearance is established, the industry employee completes a personnel security questionnaire, and the industrial contractor submits it to DSS. All industry requests for a DOD-issued clearance are submitted to DSS, while requests for military members and federal employees are submitted to either DSS or OPM. In the investigation stage, DSS, OPM, or one of their contractors conducts the actual investigation of the industry employee by using standards that were established governmentwide in 1997 and implemented by DOD in 1998. As table 1 shows, the type of information gathered in an investigation depends on the level of clearance needed and whether an initial investigation or a reinvestigation is being conducted.
For either an initial investigation or a reinvestigation for a confidential or secret clearance, investigators gather much of the information electronically. For a top secret clearance, investigators gather additional information that requires much more time-consuming efforts, such as traveling, obtaining police and court records, and arranging and conducting interviews. DSS’s Personnel Investigations Center forwards the completed investigative report to DISCO. In the adjudicative stage, DISCO uses the information from the investigative report to determine whether an individual is eligible for a security clearance. If the report is determined to be a “clean” case—a case that contains no potential security issue or minor issues—then DISCO adjudicators determine eligibility for a clearance. However, if the case is determined to be an “issue” case—a case containing information that might disqualify an individual for a clearance (e.g., foreign connections or drug- or alcohol-related problems)—then DISCO forwards the case to DOHA adjudicators for the clearance-eligibility decision. Regardless of which office renders the adjudication, DISCO issues the clearance-eligibility decision and forwards this determination to the industrial contractor. All adjudications are based on 13 federal adjudicative guidelines established governmentwide in 1997 and implemented by DOD in 1998 (see app. II). DISCO and DOHA serve as central adjudication facilities for industry personnel, whereas DOD uses eight other central adjudication facilities to approve, deny, or revoke eligibility for a security clearance for military members and federal employees. DOD’s security clearance backlog for industry personnel was roughly 188,000 cases, and the time needed to conduct an investigation and determine eligibility for a clearance had increased by 56 days during the last 3 fiscal years.
As of March 31, 2004, DSS identified more than 61,000 overdue but not submitted reinvestigations and about 127,000 investigations or adjudications that had been started but not completed within set time frames. From fiscal year 2001 through fiscal year 2003, the average time that it took to conduct an investigation and determine clearance eligibility for industry personnel increased from 319 days to 375 days. DOD’s delays in conducting an investigation and determining clearance eligibility can, among other things, increase national security risks and the costs to the federal government of contractor performance on defense contracts. As of March 31, 2004, the industry personnel backlog was roughly 188,000 cases. DOD identified more than 61,000 reinvestigations that were overdue but had not been submitted, over 101,000 backlogged investigations, and over 25,000 backlogged adjudications. For the 25,000 completed investigations awaiting adjudication, DSS found that over 19,000 of the cases were at DISCO and more than 6,300 of the cases were at DOHA. However, as of March 31, 2004, DOHA independently reported that it had eliminated its adjudication backlog. A complicating factor in determining the size of the industry personnel backlog is that the backlog may be underestimated, since DSS had opened relatively few cases between October 1, 2003, and March 31, 2004, in anticipation of the authorized transfer of the investigative function from DSS to OPM. DSS had received, but not opened, almost 69,200 new industry personnel requests in the first half of fiscal year 2004. Cases received in fiscal year 2004 that have already exceeded the set time frames for completing the investigation are included in the 101,000 backlogged investigations identified above. To view the industry personnel backlog in its proper context, we compared this backlog to the DOD-wide clearance backlog as of September 30, 2003, the date of the most recent DOD-wide data.
For the preinvestigation stage, DOD did not know the total number of personnel DOD-wide with overdue requests for reinvestigation that had not been submitted—even though their clearances exceeded the governmentwide time frames for submitting reinvestigations. (See fig. 2.) Any request for a reinvestigation that has not been submitted within a specified time frame is overdue and considered part of the backlog. As noted in our February 2004 report, DOD could not estimate the number of military members and federal employees who had not requested a reinvestigation. Similarly, in a prior report, we indicated that DOD estimated its backlog of overdue but not submitted reinvestigations at 300,000 cases in 1986 and 500,000 cases in 2000. Because DOD’s Case Control Management System has limited query capability, DOD was unable to identify the number of overdue but not submitted industry personnel reinvestigations as of September 30, 2003. Although this system can identify overdue reinvestigations for industry personnel when queried at a specific point in time, it does not allow DOD to identify the number of military members and federal employees whose reinvestigations are overdue but not submitted at any time. The size of the total DSS-estimated backlog for industry personnel doubled during the 6-month period ending on March 31, 2004. Table 2 compares the sizes of the investigative and adjudicative backlogs at the end of fiscal year 2003 with the end of the first half of fiscal year 2004. This comparison does not include the backlog of overdue reinvestigations that have not been submitted, because DSS was not able to estimate that backlog as of September 30, 2003. As of September 30, 2003, the estimated size of the investigative backlog for industry personnel amounted to roughly 44,600 cases, or 17 percent of the larger DOD-wide backlog of approximately 270,000 cases, which included military members, federal employees, and industry personnel. (See fig. 2.)
DSS’s time frames for completing investigations range from 75 days to 180 days, depending on the investigative requirements. For instance, an initial secret investigation is required to be completed within 75 days, while a secret or top secret reinvestigation has to be completed within 180 days. Some requests for investigations receive priority over other requests. For example, requests for initial clearances receive priority over requests for reinvestigations, since individuals awaiting initial clearances cannot work whereas individuals who already have clearances that are due for reinvestigation can continue to work. As of September 30, 2003, the estimated size of the adjudicative backlog for industry personnel totaled roughly 17,300 cases. This number represented 19 percent of the roughly 93,000 cases in the DOD-wide adjudicative backlog on that date. Of the 17,300 industry personnel cases, some 12,800 were awaiting adjudication at DISCO (most of which were reinvestigations) and the remaining 4,500 cases were awaiting adjudication at DOHA. As of March 31, 2004, DOHA independently reported that it had totally eliminated this backlog of cases that had been awaiting initial adjudication by its security specialists. Typically, about 14 to 20 percent of the cases received by DISCO are eventually sent to DOHA for adjudication. As shown in figure 2, DISCO and DOHA use different time frames for identifying cases as backlogged. For example, DISCO uses 3 days for initial clearances and 30 days for reinvestigations, while DOHA considers cases backlogged when the number of cases on hand for 30 days exceeds a steady workload of 2,150 cases each month. If DISCO’s time frames were applied to investigations awaiting adjudication at DOHA, then DOHA’s backlog would have been larger than that reported at the end of fiscal year 2003.
In the 3-year period from fiscal year 2001 through fiscal year 2003, the average time that DOD took to determine clearance eligibility for industry personnel rose from 319 days to 375 days, an increase of 18 percent. (See tables 3 and 4.) In other words, during fiscal year 2003, industry personnel waited an average of more than 1 year from the time DSS received a personnel security questionnaire to the time that DISCO issued an eligibility determination. In fiscal year 2003, it took DOD an average of 332 days to determine eligibility for “clean” cases, that is, those with few or no potential security issues. (See table 3.) By comparison, it took DOD an average of 615 days to complete “issue” cases that contained potentially more serious security matters. This time period included DSS’s investigation, DISCO’s identification of potential issues and its forwarding of an issue case to DOHA, DOHA’s need to request additional investigation in some instances, and DOHA’s adjudication of the case. The 615-day average for issue cases is an overestimate because of problems with DSS’s Case Control Management System. The system is unable to distinguish between the end of the investigative and adjudicative processes to determine eligibility for a clearance and the continuing appeals process that may follow the denial of a clearance request or the revocation of a clearance. Table 4 shows that from fiscal year 2001 through fiscal year 2003, the average number of days it took to conduct an investigation and determine eligibility for a security clearance for industry personnel increased by 56 days, or 18 percent. Delays caused by the backlog in renewing security clearances for industry personnel and others doing classified work can lead to a heightened risk of national security breaches. Such breaches involve the unauthorized disclosure of classified information, which can cause up to “exceptionally grave damage” to national security.
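The percentage increase cited above follows directly from the fiscal year averages reported in tables 3 and 4; the short script below is only an illustrative check of that arithmetic, not part of the report:

```python
# Average days to conduct an investigation and determine clearance
# eligibility for industry personnel (figures from the report).
fy2001_avg_days = 319
fy2003_avg_days = 375

increase_days = fy2003_avg_days - fy2001_avg_days
increase_pct = increase_days / fy2001_avg_days * 100

print(increase_days)           # 56
print(f"{increase_pct:.0f}%")  # 18%
```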
In a 1999 report, the Joint Security Commission II pointed out that delays in initiating reinvestigations create risks to national security because the longer the individuals hold clearances, the more likely they are to be working with critical information and systems. In addition, delays in determining security clearance eligibility for industry personnel can affect the timeliness, quality, and cost of contractor performance on defense contracts. A 2003 Information Security Oversight Office report identified concerns about the length of time required to process industrial security clearances. According to the report, industrial contractor officials who were interviewed said that delays in obtaining clearances cost industry millions of dollars per year and affect personnel resources. Interviewees reported having difficulty in filling sensitive positions and retaining qualified personnel. The report also stated that delays in the clearance process hampered industrial contractors’ ability to perform duties required by their contracts. According to industry contractors, these delays increased the amount of time needed to complete national-security-related contracts. In interviews we conducted during our review, industrial contractors told us about cases in which their company hired competent applicants who already had the necessary security clearances, rather than individuals who were more experienced or qualified but did not have a clearance. As a result, according to industry association officials, industrial contractors may not be performing government contracts with the most experienced and best-qualified personnel, thus diminishing the quality of the work. 
Moreover, industry association representatives told us that defense contractors might offer monetary incentives to attract new employees with clearances—for example, a $15,000 to $20,000 signing bonus for individuals with a valid security clearance, and a $10,000 bonus to current employees who recruit a new employee with a clearance. In turn, the recruit’s former company may need to backfill the position, as well as settle for a lower level of contract performance while a new employee is found, obtains a clearance, and is trained. In addition, defense contractors may hire new employees and begin paying them, but not be able to assign any work to them—sometimes for a year or more—until they obtain a clearance. Contractors may also incur lost-opportunity costs if prospective employees decide to work elsewhere rather than wait to get a clearance. We were told that contractors might pass these operating costs on to the federal government—and the taxpayer—in the form of higher bids for defense contracts. A number of impediments hinder DOD’s efforts to eliminate the clearance backlog for industry personnel and reduce the time needed to conduct an investigation and determine eligibility for a clearance. These impediments—similar to those we identified DOD-wide in our February 2004 report—include large investigative and adjudicative workloads resulting from a large number of clearance requests in recent years and an increase in the proportion of requests requiring top secret clearances, inaccurate workload projections, and insufficient investigative and adjudicative workforces to handle the large workloads. Industrial contractors also cited the underutilization of reciprocity as an impediment to timely eligibility determinations.
The effects of past conditions, such as the backlog itself, problems with DSS’s Case Control Management System, and additional national investigative requirements, also have been identified by DOD officials as impediments to timely eligibility determinations. Furthermore, DOD does not have a management plan that could help it address many of these impediments in a comprehensive and integrative manner. A major impediment is the large—but inaccurately projected—number of requests for security clearances for industry personnel, military members, and federal employees. A growing number of these requests are for top secret clearances, which require more effort to process. The large and inaccurately projected investigative and adjudicative workloads for industry personnel cases must be viewed in the context of increasing DOD-wide and governmentwide clearance requirements. These requirements are growing both in the number of requests and in the portion of requests that are for top secret clearances. Also, DOD has been unable to accurately project the number and type of clearances required for industry personnel. Additional inaccuracy—a potential surge in clearance requests—could result when the Joint Personnel Adjudication System (JPAS) is fully implemented and DOD is able to identify overdue but not submitted reinvestigations DOD-wide. The large number of clearance requests that DOD receives annually taxes a process that already is experiencing backlogs and delays. These requests are for industry personnel, as well as for military members and federal employees. In fiscal year 2003, DOD submitted over 775,000 requests for investigations to DSS and OPM. This figure included almost 143,000 requests for investigations of industry personnel.
According to OPM officials, OPM has received an unprecedented number of requests for investigations governmentwide since September 2001 and has identified this large number as the primary reason for delays in granting clearances. Table 5 shows an increase in the number of DOD eligibility determinations for industry personnel made during each of the last 3 years. DOD issued about 63,000 more eligibility determinations for industry personnel in fiscal year 2003 than it did 2 years earlier, an increase of 174 percent. During the same period, the average number of days required to issue an eligibility determination for industry personnel grew by 56 days, or about 18 percent. (See table 4.) In other words, the increase in the average wait time was small compared to the increase in the number of cases. Fiscal year 2001 is an important baseline for examining changes in clearance processing because (1) major problems with DSS’s Case Control Management System had been largely corrected and (2) the end of fiscal year 2001 occurred shortly after the September 11, 2001, terrorist attacks, which prompted an increase in clearance requests. Table 6 shows that from fiscal year 2001 through fiscal year 2003, the number of clearance eligibility determinations for industry personnel increased by more than 63,000 cases, or 174 percent. From fiscal year 1995 through fiscal year 2003, the proportion of all industry personnel requests requiring top secret clearances grew from 17 to 27 percent. As indicated earlier, top secret clearances require more information than that needed for secret clearances. According to OUSD (I), top secret clearances take 8 times more investigative effort to complete and 3 times more adjudicative effort to review than do secret clearances. In addition, a top secret clearance must be renewed twice as often—every 5 years instead of every 10 years for a secret clearance.
The full effect of requesting a top secret, rather than a secret, clearance thus is 16 times the investigative effort and 6 times the adjudicative effort. The increased demand for top secret clearances also has budget implications for DOD. In fiscal year 2003, security investigations obtained through DSS cost $2,640 for an initial investigation for a top secret clearance, $1,591 for a reinvestigation of a top secret clearance, and $328 for an initial investigation for a secret clearance. Thus, over a 10-year period, DOD would spend $4,231 (in current-year dollars) to investigate and reinvestigate an industry employee for a top secret clearance, about 13 times the $328 it would spend to investigate an individual for a secret clearance. DOD’s inability to accurately estimate the number and type of clearance requests that it will have to process for industry personnel during the next fiscal year is part of a bigger DOD-wide workload-estimation problem. For fiscal year 2001, DOD estimated that it would receive about 850,000 requests for clearances DOD-wide; however, the actual number of submissions was 18 percent lower than estimated. In contrast, DOD estimated that it would receive about 720,000 and 690,000 new requests DOD-wide in fiscal years 2002 and 2003, respectively, but the actual numbers of submissions were 19 and 13 percent higher than expected. Although DSS has made efforts to improve its projections of industry personnel security clearance requirements, problems remain. For example, inaccurate forecasts for both the number and type of security clearances needed for industry personnel make it difficult for DOD to plan ahead, to size its investigative and adjudicative workforces to handle the workload, and to fund its security clearance program. For fiscal year 2003, DSS reported that the actual cost of industry personnel investigations was almost 25 percent higher than had been projected.
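The effort multipliers and the 10-year cost comparison above can be reproduced with a quick calculation. The figures are taken from the report; the script itself is purely illustrative:

```python
# Fiscal year 2003 DSS investigation prices (report figures, dollars).
TOP_SECRET_INITIAL = 2640   # initial top secret investigation
TOP_SECRET_REINVEST = 1591  # top secret reinvestigation
SECRET_INITIAL = 328        # initial secret investigation

# Over 10 years: top secret needs an initial investigation plus one
# reinvestigation (5-year cycle); secret needs only the initial
# investigation (10-year cycle).
top_secret_10yr = TOP_SECRET_INITIAL + TOP_SECRET_REINVEST
secret_10yr = SECRET_INITIAL

print(top_secret_10yr)                       # 4231
print(round(top_secret_10yr / secret_10yr))  # 13

# Effort multipliers: 8x investigative and 3x adjudicative effort per
# case, doubled because top secret renews twice as often as secret.
print(8 * 2, 3 * 2)  # 16 6
```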
DOD officials believed that these projections were inaccurate primarily because DSS received a larger proportion of requests for initial top secret background investigations and top secret reinvestigations, both of which require considerably more effort to process. Since fiscal year 2001, DSS has conducted an annual survey of security officers at cleared contractor facilities over which DSS has cognizance to obtain their best estimates of the number of background investigations they would require over the next 7 years. Using those estimates and historical data, DSS then prepares its annual security clearance projections for industry personnel. For fiscal year 2003, DSS asked each facility for the number and types of clearances that it would need. DSS said that about 25 percent of the approximately 11,000 cleared contractor facilities voluntarily responded to this request, but that 80 to 90 percent of the facilities with the largest dollar contracts responded. DSS officials attributed the inaccurate projections to the use of some industry employees on more than one contract, often for different defense agencies; the movement of employees from one company to another; and unanticipated world events, such as the September 11, 2001, terrorist attacks. Currently, DSS does not receive data from DOD’s acquisition community that issues the contracts—primarily military service and defense agency acquisition managers—to help DSS more accurately forecast the number and type of industry personnel security clearances that would be required to implement or support their particular acquisition programs and activities. DOD is developing a plan to link the number of investigations required for contract performance to an electronic database with personnel clearance information, and to require that the contracting officer authorize the number and type of investigations required.
According to DOD, this will allow DSS to better monitor requirements and tie them to the budget process. Also, linking the electronic personnel clearance information database with the contract database maintained by the acquisition community would tie the security clearance process more closely to the acquisition process. DOD may experience a surge in security clearance requests DOD-wide when JPAS is fully implemented. This system will enable DOD to identify overdue reinvestigations that have not been submitted. However, any surge in the number of unexpected reinvestigations may be identified too late to have the extra workload planned and budgeted for the next fiscal year. DOD’s inability to fully anticipate the number of reinvestigations that will be submitted is the result of continued delays in implementing JPAS, a system that DOD’s Chief Information Officer has identified as a mission critical system. In response to a recommendation in our August 2000 report, DOD said that JPAS would be implemented in fiscal year 2001 and would provide an automated means of tracking and counting overdue but not submitted requests for reinvestigations. At the time of our February 2004 report, which again recommended the implementation of JPAS, OUSD (I) officials said that they expected to fully implement JPAS during January 2004. Currently, OUSD (I) officials project that JPAS will be fully implemented sometime in fiscal year 2004. Insufficient investigative and adjudicative workforces, given the current and projected workloads, serve as additional barriers to eliminating the backlog and reducing security clearance processing times for industry personnel. DOD partially concurred with our February 2004 recommendation to identify and implement steps to match the sizes of the investigative and adjudicative workforces to the clearance request workload. DOD—like the rest of the federal government—is competing for a limited number of investigative staff. 
In contrast, DOD has more control over its adjudicative capacity and has taken steps to increase those resources. The limited number of investigative staff available to process requests from DOD and other government agencies hinders DOD’s efforts to eliminate the backlog and issue timely clearances for industry personnel. According to an OPM official, DOD and OPM together need roughly 8,000 full-time-equivalent investigative staff to eliminate the security clearance backlogs and deliver timely investigations to their customers. However, in our February 2004 report, we estimated that DOD and OPM have around 4,200 full-time-equivalent investigative staff who are either federal employees or contract investigators, slightly more than half as many as needed. In addition to having too few investigators, DOD may experience a short-term decrease in productivity in the near future as DSS investigative employees are pulled away from their investigations to receive training on OPM’s case management system and investigative procedures. In December 2003, advisors to the OPM Director expressed concerns about financial risks associated with the transfer of DSS’s investigative functions and 1,855 investigative staff authorized in the National Defense Authorization Act for Fiscal Year 2004. The advisors therefore recommended that the transfer not occur, at least during fiscal year 2004. On February 6, 2004, DSS and OPM signed an interagency agreement that leaves the investigative functions and DSS personnel in DOD and provides DSS personnel with training on OPM’s case management system and investigative procedures as well as access to that system. According to our calculations, if all 1,855 DSS investigative employees complete the 1-week training program as planned, the loss in productivity will be equivalent to 35 person-years of investigator time.
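The 35 person-year figure above is simple arithmetic. The check below assumes a 52-week work year, which the report does not state explicitly, so it is an illustration rather than the report's exact method:

```python
# Productivity loss if 1,855 investigators each spend 1 week in training.
investigators = 1855
training_weeks_each = 1
weeks_per_year = 52  # assumed work-year length

person_years_lost = investigators * training_weeks_each / weeks_per_year
print(f"{person_years_lost:.1f}")  # 35.7, i.e., roughly 35 person-years
```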
Also, other short-term decreases in productivity will result while DSS’s investigative employees become accustomed to using OPM’s system and procedures. Similarly, an adjudicative backlog of industry personnel cases developed because DISCO and DOHA did not have an adequate number of adjudicative personnel on hand. DOD personnel and industry officials identified several reasons why adjudicator staff have not been able to process requests within their established time frames. These include an increase in the number of investigations being sent to DISCO and DOHA as a result of the September 11, 2001, terrorist attacks and the larger number of completed investigations stemming from DOD’s contract with OPM and private-sector investigation companies. The adjudicative backlog also resulted from problems in the operations of DSS’s Case Control Management System. DISCO and DOHA have taken steps to decrease the backlog and delays by augmenting their adjudicative staff. As of September 30, 2003, DISCO had 56 nonsupervisory adjudicators on board, and 6 additional nonsupervisory adjudicator applicants were undergoing investigations for their security clearances. By contrast, only 33 nonsupervisory adjudicators were available in 2001. To achieve part of this increase in the number of adjudicators, DISCO moved nonadjudicative customer service employees into adjudicative positions and filled the vacated positions with contract personnel. In addition, DISCO authorized overtime for its adjudicative staff. As of September 30, 2003, DOHA had 23 permanent federal adjudicators as well as 46 temporary adjudicators hired specifically to help reduce its adjudicative backlog. In 2001, after DOHA identified a growing adjudicative workload of industry personnel cases that exceeded its capacity, it received authority to hire 46 additional term-appointment adjudicators. 
After establishing this plan to eliminate its backlog of cases awaiting initial adjudication by its security specialists, DOHA requested authority to hire additional permanent adjudicators to ensure that a backlog would not recur. While the reciprocity of security clearances within DOD has not been a problem for industry personnel, reciprocity of access to certain types of information and programs within the federal government has not been fully utilized, thereby preventing some industry personnel from working and increasing the workload on already overburdened investigative and adjudicative staff. According to DOD and industry officials, a 2003 Information Security Oversight Office report on the National Industrial Security Program, and our analysis, reciprocity of clearances appears to be working throughout most of DOD. However, the same cannot be said for access to sensitive compartmented information and special access programs within DOD or for transferring clearances and access from DOD to other agencies. Similarly, a recent report by the Defense Personnel Security Research Center concluded that aspects of reciprocity for industrial contractors appear not to work well and that the lack of reciprocity between special access programs was a particular problem for industry personnel, who often work for many of these programs simultaneously. The extent of the problems that are caused by the lack of full reciprocity is unknown. In 2001, the Defense Personnel Security Research Center proposed collecting quantitative data on the number and type of personnel affected by reciprocity. However, the center determined that the differences in how the various agencies tracked these personnel situations were so great, and the databases they used so varied, that center researchers could not overcome these incompatibilities in the time and with the resources they had for the study. 
This situation has occurred despite the establishment in 1997 (and implementation by DOD in 1998) of governmentwide investigative standards and adjudicative guidelines. In 1999, the interagency Joint Security Commission II noted, “With these standards and guidelines in place, there is no longer a legitimate reason to investigate or readjudicate when a person moves from one agency’s security purview to another.” More recently, the chair of the federal interagency Personnel Security Working Group indicated that the lack of full reciprocity is a major concern governmentwide, not just within DOD. Industry association officials told us that reciprocity of access to certain types of information and programs, especially the lack of full reciprocity in the intelligence community, is one of the top concerns of their members. One association provided us with several examples of access problems that industry personnel with DOD-issued security clearances face when working with intelligence agencies. For example, the association cited different processes and standards used by intelligence agencies, such as guidelines for (1) the type of investigations and required time frames, (2) type of polygraph tests, and (3) refusal to accept adjudication decisions made by other agencies. Industry association officials stated that these access problems are becoming more common, especially for firms with multiple contracts with different intelligence agencies. 
Industry officials identified reciprocity concerns for the following situations, among others: Sensitive compartmented information and special access programs— The DOD directive that establishes policy, responsibilities, and procedures for industry employee clearances explicitly provides that the directive “does not apply to cases for access to sensitive compartmented information or a special access program.” The procedures used in determining access to sensitive compartmented information and special access programs are different from those used in the normal clearance process. These procedures may involve applying more selective and stringent investigative and adjudicative criteria. The reciprocity of sensitive compartmented information eligibility determinations is left up to each organization or agency, which may have additional investigative requirements that must be met (e.g., a polygraph test) prior to granting access. While DOD requires that special access program eligibility determinations for military members and federal employees be mutually and reciprocally accepted by all DOD components, this requirement does not apply to industry personnel. Interim clearances—DOD components and some of the agencies serviced by DISCO do not always accept the interim clearances that DISCO issues to industry employees. DISCO provides interim clearances when an individual’s case does not identify any potential security issues after a review of initially gathered information. DISCO reported that it issues interim clearances to about 95 percent of those industry personnel applying for a secret clearance within 3 days of receiving the clearance request. However, according to industrial contractors, their ability to use industry personnel with interim clearances on some contracts but not on others limits their staffing options. 
In addition, DSS and contractor association officials told us that some personnel with an interim clearance could not start work because an interim clearance does not provide access to specific types of national security information, such as sensitive compartmented information, special access programs, North Atlantic Treaty Organization data, and restricted data. Waivers—To eliminate the need to perform another investigation, the Office of the Secretary of Defense may use a waiver to reinstate or convert a security clearance under certain circumstances. For example, a security clearance can be converted if an individual leaves the federal government and subsequently begins to work for an industrial contractor, provided that (1) no more than 24 months have elapsed since the date the clearance was terminated, (2) there is no known adverse information, and (3) the most recent investigation meets both the scope and completion time frame for the clearance being reinstated. However, industry associations told us that intelligence agencies do not accept DOD’s waivers, even with a letter of consent from the employee’s former company or a verification letter by the agency that requested the original investigation and granted the employee the clearance. Smith Amendment—Many DOD and industry officials view the Smith Amendment as an impediment to reciprocity because people who once worked for DOD or other agencies may not be eligible to work for DOD when it is time to renew their clearance because of selected potential security issues. 
The Smith Amendment, which applies only to DOD, specifies that DOD should not grant or renew a clearance for anyone who (1) has been sentenced to imprisonment for a term exceeding 1 year, (2) is an unlawful user of or is addicted to a controlled substance, (3) is mentally incompetent, or (4) has been discharged or dismissed from the military under dishonorable conditions. Therefore, an individual previously granted a clearance by another federal agency or through DOD would be ineligible for a subsequent DOD clearance if one or more of the four prohibitions were applicable. However, the Secretary of Defense or one of the Service secretaries may authorize an exception to the Smith Amendment prohibitions, but only in cases where the individual seeking the clearance has been sentenced to imprisonment for a term exceeding 1 year or has been dishonorably discharged from the Armed Forces. Ordinarily, the adjudicators are to consider mitigating factors and available, reliable information about the person—past and present, favorable and unfavorable—in reaching an “overall common sense” clearance-eligibility determination that gives careful consideration to the 13 adjudicative guidelines. (See app. II.) According to the guidelines, any doubt about whether a clearance for access to classified information is consistent with national security is to be resolved in favor of national security. However, under the Smith Amendment, such mitigating factors should not be considered when one or more of the four elements are present in the investigative report on a person applying for a clearance through DOD—unless the Secretary of Defense or one of the Service secretaries issues a waiver. A number of past conditions also serve as impediments to issuing timely eligibility determinations for industry personnel. The backlogs themselves contribute to delays because most new requests for investigations remain largely dormant until earlier requests are completed. 
Backlogged cases might delay the start of an initial secret clearance investigation, for instance, until 60 days after the request is received by DSS. In such a hypothetical situation, DSS would have only 15 days, rather than the full 75 days, to complete the investigation before having the case labeled as “backlog.” Similarly, the adjudicative backlog might lead to a delay in reviewing new investigative reports, thereby increasing the likelihood that a new adjudication will be categorized as “backlog” before an eligibility determination is provided. In addition, problems with DSS’s Case Control Management System during fiscal years 1999 and 2000 affected the processing of security clearances in subsequent years. These problems included limiting the dissemination of leads to investigative staff and, thereby, limiting the flow of completed cases to adjudication facilities, such as DISCO and DOHA. Although DSS officials indicate that the Case Control Management System problems have been corrected, the February 2004 interagency agreement between DSS and OPM allows DOD to replace that system with OPM’s case management system. An OUSD (I) official said that DOD estimates it will save about $100 million over 5 years by avoiding the need to update and maintain DSS’s Case Control Management System. According to DSS officials, additional national investigative requirements, which DOD implemented in 1998, have strained nationwide investigative resources. For instance, the current requirement for a secret clearance calls for investigative staff to conduct national agency checks, local area checks, and a credit check. Previously, a secret clearance required only national agency checks. DOD has had over 5 years to address this issue and allocate sufficient resources to handle the additional requirements. 
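The timing in the hypothetical at the start of this passage works out directly; the 75-day standard and the 60-day dormancy are the figures given in the text.

```python
# Days available to finish an initial secret clearance investigation before
# the case is labeled "backlog," per the hypothetical in the text.
standard_days = 75   # time allowed before a case counts as backlog
dormant_days = 60    # time the request waits behind earlier cases at DSS

days_remaining = standard_days - dormant_days
print(days_remaining)  # prints 15
```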
Currently, DOD has numerous plans to address pieces of the backlog problem but does not have an overall management plan to permanently eliminate the current investigative and adjudicative backlogs, reduce the delays in determining clearance eligibility for industry personnel, and overcome the impediments that could allow such problems to recur. DOD has a plan to engineer a business process for personnel security, transform DSS as an agency, complete and close out DSS’s old investigative work, and decommission DSS’s Case Control Management System. DOD also has a transition plan to transfer DSS’s investigative function to OPM. The terms and conditions of that transfer are contained in the Memorandum of Understanding between DOD and OPM (Jan. 24, 2003). Because the transition has not yet occurred, DSS signed the Interagency Agreement with OPM (Feb. 6, 2004) that leaves the investigative functions and DSS personnel in DOD and provides DSS personnel with training on OPM’s case management system and investigative procedures as well as access to that system. Finally, DSS has a draft Fiscal Year 2004 Performance Plan (Mar. 25, 2004) that is intended to serve as an interim plan pending final implementation of DSS’s strategic plan as a transformed agency. Rather than including the specific performance measures seen in previous plans, this plan provides an accounting of milestones that must be achieved for the agency’s transformation. None of these plans addresses permanently eliminating the investigative and adjudicative backlogs, reducing the delays in conducting investigations and determining eligibility for clearances, or overcoming the impediments. In addition, none of these plans addresses budgets, personnel resources, costs, or potential obstacles and options for overcoming the obstacles to eliminate the backlog and reduce the delays. 
DOD’s numerous plans do not include establishing processwide objectives and outcome-related goals; setting priorities; identifying resources; establishing performance measures; and providing milestones for reducing, and eventually eliminating, the backlog and delays. The principles of the Government Performance and Results Act of 1993 provide federal agencies with a basis for such a results-oriented framework that includes setting goals, measuring performance, and reporting on the degree to which goals are met. DOD and industry association officials have suggested a number of initiatives to reduce the backlog and delays in conducting an investigation and issuing eligibility for a security clearance. They indicated that these steps could supplement actions that DOD has implemented in recent years or has agreed to implement as a result of our recommendations or those of others. Even if positive effects would result from these initiatives, other obstacles, such as the need to change investigative standards, coordinate these policy changes with other agencies, and ensure reciprocity, could prevent their implementation or limit their use. Phased periodic reinvestigations could make staff available for more productive uses. A phased approach to periodic reinvestigations involves conducting a reinvestigation in two phases; a more extensive reinvestigation would be conducted only if potential security issues were identified in the initial phase. Table 7 identifies proposed sources of information for both parts of a phased periodic reinvestigation. The more productive sources for investigative leads are shown in phase 1. Investigative staff would gather information from phase 2 sources only in those cases where potential security issues were uncovered in phase 1. 
Recent research has shown that periodic reinvestigations for top secret clearances conducted in two phases can save at least 20 percent of the normal investigative effort with almost no loss in identifying critical issues for adjudication. This research included phasing analyses conducted by the Defense Personnel Security Research Center with 4,721 reinvestigations for top secret clearances, a pilot test conducted by DSS, independent research at the Central Intelligence Agency and National Reconnaissance Office, and an evaluation of DSS’s implementation of a phased reinvestigation in fiscal year 2003 conducted by the Defense Personnel Security Research Center. This research has shown that the most productive sources (phase 1 sources) can be used to identify investigations in which the least productive sources (phase 2 sources) are likely to yield issue information. Analyses showed a phased approach missed very little potential security issue information and identified all of the cases in which agencies took some form of action against individuals (e.g., a suspension of their clearance or warnings, monitoring, or reprimands). According to DSS, this initiative is designed to use the limited investigative resources in the most productive manner and reduce clearance-processing time by eliminating the routine use of low-yield information sources on many investigations and concentrating information-gathering efforts on high-yield sources. Research conducted by the Defense Personnel Security Research Center suggests the phased periodic reinvestigation represents a way of balancing the risks of a rare missed issue and the costs associated with a normal reinvestigation. While analyses have not been conducted to evaluate how the implementation of phasing would affect the investigative backlog, the implementation of phasing could be a factor in reducing the backlog by decreasing some of the hours of fieldwork required in some reinvestigations. 
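One way to see how such savings arise is a simple expected-effort model of the two-phase design: phase 1 always runs, and phase 2 runs only when phase 1 surfaces potential issues. The numbers below are illustrative assumptions, not figures from the research cited.

```python
# Expected effort per phased reinvestigation, as a fraction of a full one.
# All three inputs are illustrative assumptions (not from the cited studies).
phase1_share = 0.6   # fraction of normal effort spent on high-yield (phase 1) sources
phase2_share = 0.4   # fraction spent on low-yield (phase 2) sources
issue_rate = 0.25    # fraction of cases whose phase 1 results trigger phase 2

expected_effort = phase1_share + issue_rate * phase2_share
savings = 1 - expected_effort
print(f"{savings:.0%} of normal effort saved")  # prints "30% of normal effort saved"
```

Under these assumptions the savings exceed the 20 percent floor the research reports; the actual figure depends on the issue rate and on how much of a normal reinvestigation the phase 2 sources consume.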
Even if additional testing confirms promising earlier findings that the procedure very rarely fails to identify critical issues, several obstacles could prevent the implementation or limit the use of this initiative. First, the phased reinvestigation does not comply with the Investigative Standards for Background Investigations for Access to Classified Information (Standard C). Currently, Standard C mandates the same investigative scope for all reinvestigations for top secret clearances, whereas the phased approach uses different standards for clean versus potential issue cases. Second, any change in Standard C would necessitate a corresponding change in the Code of Federal Regulations. Third, without modification of Standard C, reciprocity problems could result if some agencies use the phased reinvestigation and other agencies refuse to accept eligibility determinations based on it. DOD is now actively working to change Standard C so that a phased reinvestigation would be an option under the national standards. Single adjudicative facility for industry could reduce adjudicative time. Under this initiative, DOD would consolidate DOHA’s adjudicative function with that of DISCO to create a single adjudicative facility for all industrial contractor cases. At the same time, DOHA would retain its hearings and appeals function. According to OUSD (I) officials, this consolidation would streamline the adjudicative process for industry personnel and make it more coherent and uniform. A single adjudicative facility would serve as the clearinghouse for all industrial contractor-related issues. DOD’s Senior Executive Council is considering this consolidation as part of a larger review of DOD’s security clearance process. From 1991 through 1998, studies by the Defense Personnel Security Research Center, Joint Security Commission, and DOD Office of the Inspector General concluded that the present decentralized structure of DOD’s adjudication facilities had drawbacks. 
Two of the studies recommended that DOD consolidate its adjudication facilities (with the exception of the National Security Agency). An OUSD (I) official told us that the consolidation would provide greater flexibility in using adjudicators to meet changes in the workload and could eliminate some of the time required to transfer cases from DISCO to DOHA. If the consolidation occurred, DISCO officials said that their operations would not change much, except for adding adjudicators. On the other hand, DOHA officials said that the current division between DISCO and DOHA of adjudicating clean versus issue cases works very well and that combining the adjudicative function for industry into one facility could negatively affect DOHA’s ability to prepare denials and revocations of industry personnel clearances during appeals. They told us that the consolidation would have very little impact on the timeliness and quality of adjudications. Evaluation of the investigative standards and adjudicative guidelines could reveal efficiencies. This initiative would involve an evaluation of the investigative standards used by personnel security clearance investigators to help identify requirements that do not provide significant information relevant to adjudicative decisions. By eliminating the need to perform certain tasks associated with these requirements, investigative resources could be used more efficiently. For example, DSS officials told us that less than one-half of 1 percent of the potential security issues identified during an investigation are derived from neighborhood checks; however, this information source accounts for about 14 percent of the investigative time. 
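The neighborhood-check example lends itself to a quick cost/yield comparison; the 0.5 percent and 14 percent shares are the figures DSS officials cited, while the ratio itself is our computation.

```python
# Issues found per unit of investigative time for neighborhood checks,
# using the shares cited by DSS officials in the text.
issue_share = 0.005  # < 0.5 percent of potential security issues identified
time_share = 0.14    # about 14 percent of total investigative time consumed

yield_ratio = issue_share / time_share
print(f"{yield_ratio:.3f}")  # prints 0.036; a source pulling its weight would be near 1.0
```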
The Intelligence Authorization Act for Fiscal Year 2004 required the Secretary of Defense, the Director of Central Intelligence, the Attorney General, and the Director of OPM to jointly submit to Congress by February 15, 2004, a report on the utility and effectiveness of the current security background investigations and security clearance procedures of the federal government, including a comparison of the costs and benefits of conducting background investigations for secret clearances with the costs and benefits of conducting full field background investigations. At the time of our report, the report mandated in the intelligence act had not been delivered to Congress. The modification of existing investigative standards would involve using risk management principles based on a thorough evaluation of the potential loss of information. Like a phased periodic reinvestigation, this initiative would require changes in the Common Investigative Standards. In addition, the evaluation would need to be coordinated within DOD, with intelligence agencies, and with others. Requirements-identification improvements could optimize resources and reduce backlog and delays. This initiative would use an automated verification process to identify and validate security clearance requirements for industry personnel. DSS officials stated that a process to verify requirements could help DSS allocate investigative and adjudicative resources to projected workloads, thereby reducing the backlog and delays. DOD is considering implementing this initiative to help project the number and type of clearances that industry may need for a specific acquisition program. According to DSS officials, more stability is needed in workload projections to allow the government and industrial contractors to size their investigative workforces to the workload. 
This projection becomes more critical because the investigative function is labor-intensive and it can take 1 year to hire and train investigators before they are able to work independently. Implementing this initiative might require additional data gathering and reporting by DOD’s acquisition community that issues contracts—primarily military service and defense agency acquisition managers, especially when contracts are being awarded. Although industry currently provides this information voluntarily, the acquisition community is not required to do so. Automated Continuing Evaluation System may result in additional workloads. The last initiative involves testing and eventually implementing the Automated Continuing Evaluation System, which is being developed by the Defense Personnel Security Research Center. This automated assessment tool is designed to provide automated database checks and identify issues of security concern on cleared individuals between the specified periodic reinvestigations. The system does not require an individual to complete any additional paperwork before a query is undertaken. In addition, the system automatically notifies adjudication facilities when an individual with a security clearance engages in an act of security concern. This notification occurs sooner than is currently possible. The system underwent a large-scale pilot program in 2002 and was subsequently modified. Operational field testing began in April 2004. DOD officials acknowledge that the Automated Continuing Evaluation System alone would not help to eliminate the backlog and, in fact, may initially result in larger investigative and adjudicative workloads. However, they maintain that, when combined with the phased periodic reinvestigation, the system could help reduce workloads and the backlog, and ultimately improve personnel security. 
This initiative would face some of the same obstacles as those raised for a phased periodic reinvestigation—the need to change governmentwide investigative standards and concerns about reciprocity. The backlog of clearances for industry personnel and delays in conducting investigations and determining eligibility for a clearance must be considered in the larger context of DOD-wide backlogs and delays. Many of the impediments and initiatives identified in this report apply to both industry-specific and DOD-wide situations. Taken together, these impediments hamper DOD’s ability to eliminate the security clearance backlog and reduce the amount of time it takes to determine clearance eligibility for industry personnel. DSS is unable to accurately project the number and type of security clearances needed for industry personnel as well as military members and civilian employees. This makes it difficult to determine budgets and staffing for investigative and adjudicative workforces. Without close coordination and cooperation among all interested parties—OUSD (I), DOD components issuing the contracts, industrial contractors, and the acquisition community—inaccurate projections of the number and type of clearance requirements for industrial personnel could continue. The reciprocity of security clearances within DOD has not been a problem for industry personnel; however, reciprocity for access to certain types of information and programs within the federal government has not been fully utilized. As a result, some who already have clearances issued by one agency face delays in starting to work on contracts for other agencies. In addition, the failure to utilize reciprocity unnecessarily increases the investigative and adjudicative workloads on the already overburdened investigative and adjudicative staff. 
In recent years, DOD has reacted to the impediments in a piecemeal fashion rather than by establishing an integrated approach that incorporates objectives and outcome-related goals, sets priorities, identifies resources, establishes performance measures, and provides milestones for permanently eliminating the backlog and reducing delays. Without such an integrated, comprehensive plan, DOD’s efforts to improve its process for conducting security clearance background investigations and adjudications for industry personnel will likely continue to proceed in a piecemeal fashion. DOD and industry officials have suggested a number of initiatives that could help eliminate the backlog and reduce clearance delays. However, it remains unclear whether any single initiative—or combination of initiatives—can have a direct and immediate impact on the backlog or delays. Even if positive effects would result from these initiatives, other obstacles, such as the need to change investigative standards, coordinate these policy changes with other agencies, and ensure reciprocity, could prevent or limit the implementation of the initiatives. We made recommendations in our February 2004 report on security clearances for DOD personnel that also apply to industry personnel. Among other things, we recommended that the Secretary of Defense direct the Under Secretary of Defense for Intelligence to (1) identify and implement steps to match the sizes of the investigative and adjudicative workforces to the clearance request workload and (2) complete the implementation of the Joint Personnel Adjudication System. In its written response on a draft of that report, DOD partially concurred with the first recommendation and concurred with the second recommendation. Since we have already recommended these actions in the larger context of DOD personnel, we are not repeating them in this report for industry personnel. 
To improve the security clearance process for industry personnel as well as for military members and federal employees, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Intelligence to take the following four actions: (1) improve the projections of clearance requirements for industrial personnel—both the number and type of clearances—by working with DOD components, industrial contractors, and the acquisition community to identify obstacles and implement steps to overcome them; (2) work with DOD components and other agencies to eliminate unnecessary reciprocity limitations for industry personnel whose eligibility for a clearance is granted by DOD; (3) develop and implement an integrated, comprehensive management plan to eliminate the backlog, reduce the delays in conducting investigations and determining eligibility for security clearances, and overcome the impediments that could allow such problems to recur; and (4) analyze the feasibility of implementing initiatives designed to reduce the backlog and delays, prioritize the initiatives, and make resources available for testing and implementing the initiatives, which could include, but are not limited to, those evaluated in this report. In written comments on a draft of this report, DOD fully concurred with three of our four recommendations: improve projections of clearance requirements for industrial personnel, eliminate unnecessary reciprocity limitations, and analyze the feasibility of initiatives to reduce the backlog and delays. DOD partially concurred with our recommendation to develop and implement an integrated, comprehensive management plan. In its partial concurrence, DOD noted that it had numerous plans to improve its process and said we did not identify why a single, comprehensive plan would improve its ability to achieve success. As our report points out, there are several reasons for the recommendation. 
Specifically, the plans that DOD provided to us often were missing details on budgets, personnel resources, costs, milestones with specific dates for accomplishment, identification of potential obstacles, and options for overcoming the obstacles if they should occur. Also, the use of multiple smaller plans does not provide DOD with a bigger picture of how it should strategically plan and prioritize its personnel and budget resources and actions required simultaneously in two or more plans. Continued use of piecemeal planning could result in a failure to recognize problems not yet addressed or planned actions that conflict with those being implemented—or planned as part of another effort. Moreover, DOD cited its plan to transfer DSS’s investigative functions and personnel to OPM. While the plan would end DOD’s responsibility for conducting the investigations, no new investigative personnel would be added if or when the transfer occurs. Therefore, it is not apparent how the transfer will help DOD eliminate its backlog and reduce clearance delays. DOD’s failure to identify contingency actions if the transfer did not occur according to its plans has already delayed the start of nearly 70,000 investigations for industry personnel in fiscal year 2004. We continue to believe our recommendation has merit and should be implemented. Also, in commenting on our recommendations, DOD made several points that need to be addressed. DOD noted that we gave little acknowledgement to the many significant initiatives under way and no acknowledgement to policy changes implemented by DOD in past years to expedite the process. Our report highlights several steps DOD has taken. 
First, we acknowledged actions that DOD has taken in recent years to address the backlog—and handle the 174 percent increase from fiscal year 2001 through fiscal year 2003 in the number of clearance eligibility determinations for industry personnel, such as contracting for additional investigative services, hiring more adjudicators, and authorizing overtime for adjudicative staff. Second, we discuss in some detail five significant initiatives that DOD is considering to reduce the backlog and delays. DOD noted that its initiatives “are gradually improving the process.” This DOD statement supports our conclusion that it remains unclear whether any of the initiatives—individually or collectively—can have a direct and immediate impact on the backlog or delays. Third, we acknowledged policy changes, but many of the changes were implemented from 4 to 18 years earlier—using waivers for clearance reinstatements and conversions to eliminate the need to perform another investigation (2000), implementing national investigative standards and adjudicative guidelines (1999), utilizing full reciprocity (1997), and granting of interim clearances to put industry personnel to work (1986). These positive steps must, however, be considered in the context of major concerns that remain. These concerns include the sizeable and long-standing backlog; the length of time needed to conduct an investigation and determine eligibility for a clearance, which now takes, on average, over 1 year to complete; the failure to implement JPAS throughout DOD with all of its intended design features, even though DOD said it would be implemented in fiscal year 2001; and DOD’s declaration that its personnel security investigations program has been a systemic weakness since fiscal year 2000. 
We believe that our report presents a balanced representation of the improvements and the failures that contributed to a long-standing problem that can increase national security risks; affect the timeliness, quality, and costs of contractor performance on national-security-related contracts; and ultimately increase costs to the federal government. DOD’s comments are reprinted in appendix III. DOD also provided technical comments that we incorporated in the final report as appropriate. We are sending copies of this report to other interested congressional committees. We also are sending copies to the Secretary of Defense; the Director, Office of Personnel Management; and the Director, Office of Management and Budget. We will make copies available to other interested parties upon request. This report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5559 or by e-mail at [email protected] or contact Jack E. Edwards at (202) 512-8246 or by e-mail at [email protected]. Mark A. Pross, James F. Reid, William J. Rigazio, and Nancy L. Benco made key contributions to this report. In conducting our review of the security clearance process for industry personnel, we visited key offices within the Department of Defense (DOD) that have responsibility for oversight and program management and implementation. We also met with selected industrial contractors and industry associations whose employees and members are affected by the DOD backlog and delays in conducting investigations and determining eligibility for security clearances.
We conducted our work in Washington, D.C., at DOD, including the Office of the Under Secretary of Defense for Intelligence (OUSD (I)); Defense Security Service (DSS); and the Defense Office of Hearings and Appeals (DOHA); at the Office of Personnel Management; the Information Security Oversight Office at the National Archives and Records Administration; and at the Personnel Security Working Group of the National Security Council’s Policy Coordinating Committee on Records Access and Information Security. We also conducted review work in Columbus, Ohio, at the Defense Industrial Security Clearance Office (DISCO) and DOHA; at Fort Meade, Maryland, at DSS’s Personnel Investigations Center; and in Monterey, California, at the Defense Personnel Security Research Center. We met with representatives of several industrial contractors, including Northrop Grumman Corporation, Linthicum, Maryland, and Data Systems Analysts, Inc., and General Dynamics Advanced Information Systems in Arlington, Virginia. In addition, we held discussions with officials representing industry associations, including the Northern Virginia Technology Council and the National Classification Management Society in Washington, D.C.; via telephone with the Shipbuilders Council of America; with officials from the Information Technology Association of America, Arlington, Virginia; and with representatives from the Aerospace Industries Association and National Defense Industrial Association, Linthicum, Maryland. To determine the size of the security clearance backlog and changes during the last 3 fiscal years in the amount of time it takes to conduct an investigation and issue a clearance eligibility determination, we met with DSS and DOHA officials to obtain the relevant data from the Case Control Management System and discussed their methods for determining what constitutes a backlog.
As part of the process for estimating the backlog, we observed the steps used to process investigative and adjudicative information during our visits to the DSS Personnel Investigations Center, DISCO, and DOHA. During these site visits, we obtained information on the number of days required to complete an investigation or adjudication, the time frames for designating what constitutes an investigative or adjudicative backlog, and data reliability through questionnaires and interviews. Our Applied Research and Methods team assisted us in reviewing the reliability of the databases used to determine the backlog. We also examined data for fiscal years 2001 to 2003 to track changes in how long it took industry personnel to obtain a clearance during those years. We discuss developments during the first half of fiscal year 2004, where appropriate, so that information is current as of March 31, 2004. To identify the reasons or impediments for the backlog and delays in conducting investigations and issuing eligibility determinations, we reviewed reports by GAO, DOD Office of the Inspector General, House Committee on Government Reform, Defense Personnel Security Research Center, Information Security Oversight Office, and the Joint Security Commission II. We interviewed officials from DSS, DISCO, and DOHA and observed and reviewed their procedures. We also discussed impediments with officials of OUSD (I), the Defense Personnel Security Research Center, the Information Security Oversight Office, and the Chair of the Personnel Security Working Group of the National Security Council, as well as industry representatives. In addition, we reviewed these agencies’ prior reports. Our Office of the General Counsel reviewed various public laws; executive orders; federal regulations; and DOD policy memorandums, directives, regulations, and manuals. 
To identify additional steps that DOD could take to reduce the time needed to conduct investigations and issue eligibility determinations, we reviewed prior reports to identify previously suggested initiatives. We supplemented this information with discussions on the status of those previously identified steps, as well as ongoing initiatives, with both industry representatives and government officials. Where appropriate, our Applied Research and Methods team reviewed Defense Personnel Security Research Center reports to help ensure that the center’s (1) approaches were methodologically sound, (2) sampling and statistical modeling techniques were sufficient, and (3) proposed empirically based procedural changes to DOD’s security clearance process also were methodologically sound. The team also reviewed industry association survey results and evaluated the validity and reliability of the survey methodology and results. We assessed the reliability of the data that were provided by DSS’s Case Control Management System and used to determine the investigative and adjudicative backlog and the time needed to conduct an investigation and determine eligibility for a security clearance by (1) reviewing existing information about the data and system that produced them, (2) interviewing agency officials knowledgeable about the data, and (3) reviewing DISCO’s and DOHA’s responses to a detailed questionnaire about their information technology data reliability. We determined that the data for fiscal years 2001 and thereafter were sufficiently reliable for the purpose of this report. The Case Control Management System also faced certain limitations, which had an impact on our findings.
Although the Case Control Management System, which is used to obtain the backlog estimates, can provide the total elapsed time between opening a case and issuing the final security clearance eligibility determination, it is not capable of generating separate time estimates for the intermediate stages of the clearance process. Nor does it have the capability to identify how much time DOHA needed to adjudicate issue cases. Therefore, all of the time-based findings include the time period beginning when personnel security questionnaires were entered into the Case Control Management System and ending when DISCO notified the industrial contractor of the DISCO or DOHA adjudicators’ decisions to determine eligibility for a clearance. Thus, the total number of days to determine eligibility for a clearance includes investigative time; DISCO and possibly DOHA review time; additional DISCO investigative time, if required; and DOHA’s appeals process that may follow the denial of a clearance request or the revocation of a clearance. Finally, the Case Control Management System has the capability to monitor overdue reinvestigations and generate accurate estimates for that portion of the backlog for industry personnel; however, it does not have this capability for military members and federal employees. We conducted our review from July 2003 through May 2004 in accordance with generally accepted government auditing standards. We include a comprehensive list of related GAO products at the end of this report.
The Federal Adjudicative Guidelines for Determining Eligibility for Access to Classified Information were approved by the President on March 24, 1997, and implemented by the Department of Defense in 1998. They include the following 13 guidelines and the reasons for concern.
1. Allegiance to the United States: The willingness of an individual to safeguard classified information is in doubt if there is any reason to suspect the individual’s allegiance to the United States.
2. Foreign influence: A security risk may exist when an individual is bound by affection, influence, or obligation to persons, such as family members, who are not citizens of the United States or may be subject to duress.
3. Foreign preference: When an individual acts in such a way as to indicate preference for a foreign country, such as possession and/or use of a foreign passport, then he or she may be prone to make decisions harmful to the interests of the United States.
4. Sexual behavior: Sexual behavior is a security concern if it involves a criminal offense; indicates a personality or emotional disorder; may subject the individual to undue influence or coercion, exploitation, or duress; or reflects lack of judgment or discretion.
5. Personal conduct: Conduct involving questionable judgment, untrustworthiness, unreliability, lack of candor, or unwillingness to comply with rules and regulations could indicate that an individual may not properly safeguard classified information.
6. Financial considerations: An individual who is financially overextended is at risk of having to engage in illegal acts to generate funds. Unexplained affluence is often linked to proceeds from financially profitable criminal acts.
7. Alcohol consumption: Excessive alcohol consumption often leads to the exercise of questionable judgment, unreliability, and failure to control impulses, and increases the risk of unauthorized disclosure of classified information due to carelessness.
8. Drug involvement: Improper or illegal involvement with drugs raises questions regarding an individual’s willingness or ability to protect classified information.
9. Emotional, mental, or personality disorders: Emotional, mental, or personality disorders are a security concern because they may indicate a defect in judgment, reliability, or stability.
10. Criminal conduct: A history or pattern of criminal activity creates doubt about a person’s judgment, reliability, and trustworthiness.
11. Security violations: Noncompliance with security regulations raises doubt about an individual’s trustworthiness, willingness, and ability to safeguard classified information.
12. Outside activities: Involvement in certain types of outside employment or activities is a security concern if it poses a conflict with an individual’s security responsibilities and could create an increased risk of unauthorized disclosure of classified information.
13. Misuse of information technology systems: Noncompliance with rules, procedures, guidelines, or regulations pertaining to information technology systems may raise security concerns about an individual’s trustworthiness, willingness, and ability to properly protect classified systems, networks, and information.
The guidelines state that each case is to be judged on its own merits and that a final determination to grant, deny, or revoke access to classified information is the responsibility of the specific department or agency. The adjudicators are to consider available, reliable information about the person—past and present, favorable and unfavorable—in reaching an “overall common sense” clearance-eligibility determination that gives careful consideration to the 13 adjudicative guidelines. According to the guidelines, any doubt about whether a clearance for access to classified information is consistent with national security is to be resolved in favor of national security.
DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004.
Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004.
DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004.
DOD Personnel: More Consistency Needed in Determining Eligibility for Top Secret Security Clearances. GAO-01-465. Washington, D.C.: April 18, 2001.
DOD Personnel: More Accurate Estimate of Overdue Security Clearance Reinvestigation Is Needed. GAO/T-NSIAD-00-246. Washington, D.C.: September 20, 2000.
DOD Personnel: More Actions Needed to Address Backlog of Security Clearance Reinvestigations. GAO/NSIAD-00-215. Washington, D.C.: August 24, 2000.
DOD Personnel: Weaknesses in Security Investigation Program Are Being Addressed. GAO/T-NSIAD-00-148. Washington, D.C.: April 6, 2000.
DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/T-NSIAD-00-65. Washington, D.C.: February 16, 2000.
DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/NSIAD-00-12. Washington, D.C.: October 27, 1999.
Background Investigations: Program Deficiencies May Lead DEA to Relinquish Its Authority to OPM. GAO/GGD-99-173. Washington, D.C.: September 7, 1999.
Military Recruiting: New Initiatives Could Improve Criminal History Screening. GAO/NSIAD-99-53. Washington, D.C.: February 23, 1999.
Executive Office of the President: Procedures for Acquiring Access to and Safeguarding Intelligence Information. GAO/NSIAD-98-245. Washington, D.C.: September 30, 1998.
Privatization of OPM’s Investigations Service. GAO/GGD-96-97R. Washington, D.C.: August 22, 1996.
Cost Analysis: Privatizing OPM Investigations. GAO/GGD-96-121R. Washington, D.C.: July 5, 1996.
Personnel Security: Pass and Security Clearance Data for the Executive Office of the President. GAO/NSIAD-96-20. Washington, D.C.: October 19, 1995.
Privatizing OPM Investigations: Perspectives on OPM’s Role in Background Investigations. GAO/T-GGD-95-185. Washington, D.C.: June 14, 1995.
Background Investigations: Impediments to Consolidating Investigations and Adjudicative Functions. GAO/NSIAD-95-101. Washington, D.C.: March 24, 1995.
Security Clearances: Consideration of Sexual Orientation in the Clearance Process. GAO/NSIAD-95-21. Washington, D.C.: March 24, 1995.
Personnel Security Investigations. GAO/NSIAD-94-135R. Washington, D.C.: March 4, 1994.
Nuclear Security: DOE’s Progress on Reducing Its Security Clearance Work Load. GAO/RCED-93-183. Washington, D.C.: August 12, 1993.
Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations. GAO/RCED-93-23. Washington, D.C.: May 10, 1993.
Security Clearances: Due Process for Denials and Revocations by Defense, Energy, and State. GAO/NSIAD-92-99. Washington, D.C.: May 6, 1992.
DOD Special Access Programs: Administrative Due Process Not Provided When Access Is Denied or Revoked. GAO/NSIAD-93-162. Washington, D.C.: May 5, 1993.
Administrative Due Process: Denials and Revocations of Security Clearances and Access to Special Programs. GAO/T-NSIAD-93-14. Washington, D.C.: May 5, 1993.
Due Process: Procedures for Unfavorable Suitability and Security Clearance Actions. GAO/NSIAD-90-97FS. Washington, D.C.: April 23, 1990.
As more and more federal jobs are privatized, individuals working for private industry are taking on a greater role in national security work for the Department of Defense (DOD) and other federal agencies. Because many of these jobs require access to classified information, industry personnel must hold a security clearance. As of September 30, 2003, industry workers held more than one-third of all clearances issued by DOD. Long-standing security clearance backlogs and delays in determining clearance eligibility affect industry personnel, military members, and federal employees. As requested, we reviewed the clearance eligibility process for industry personnel and (1) describe the size of the backlog and changes in the time needed to issue eligibility determinations, (2) identify reasons for the backlog and delays, and (3) evaluate initiatives that DOD could take to eliminate the backlog and decrease the delays. As of March 31, 2004, DOD's security clearance backlog for industry personnel was roughly 188,000 cases, and the time needed to conduct an investigation and determine eligibility for a clearance during the last 3 fiscal years had increased by 56 days to a total of 375 days. DOD identified three separate backlog estimates: (1) more than 61,000 reinvestigations (required for renewing clearances) that were overdue but had not been submitted, (2) over 101,000 new investigations or reinvestigations that had not been completed within DOD's established time frames, and (3) over 25,000 adjudications (a determination of clearance eligibility) that had not been completed within DOD's established time frames. From fiscal year 2001 through fiscal year 2003, the average time that it took DOD to conduct an investigation and determine clearance eligibility for industry personnel increased from 319 days to 375 days.
Delays in conducting investigations and determining clearance eligibility can increase national security risks, prevent industry personnel from beginning or continuing work on classified programs and activities, hinder industrial contractors from hiring the most experienced and best qualified personnel, increase the time needed to complete national-security-related contracts, and increase costs to the federal government. Several impediments hinder DOD's ability to eliminate the backlogs and reduce the amount of time needed to conduct an investigation and determine security clearance eligibility for industry personnel. Impediments include a large number of new clearance requests; an increase in the proportion of requests for top secret clearances, which require more time to process; inaccurate workload projections for both the number and type of clearances needed for industry personnel; and insufficient investigative and adjudicative workforces to handle the large workloads. Industrial contractors cited the lack of full reciprocity (the acceptance of a clearance and access granted by another department, agency, or military service) as an obstacle that can cause industry delays in filling positions and starting work on government contracts. Also, the effects of past conditions, such as the backlog itself, have been identified as impediments to timely eligibility determinations. Furthermore, DOD does not have an integrated, comprehensive management plan for addressing the backlog and delays. DOD is considering several initiatives that might reduce security clearance backlogs and processing times for determining clearance eligibility for industry personnel. 
Among those initiatives that DOD is exploring are (1) conducting a phased, periodic reinvestigation; (2) establishing a single adjudicative facility for industry; (3) reevaluating investigative standards and adjudicative guidelines; and (4) implementing an automated verification process for identifying and validating industrial security clearance requirements. These initiatives could, however, face implementation obstacles, such as the need to change governmentwide regulations.
Most of the military services’ active and reserve components faced recruiting difficulties during the strong economic climate of the late 1990s. As a result, the services stepped up their recruiting to ensure that they would have enough recruits to fill their ranks. Recruiting efforts focus on three initiatives. First, a “sales force” of more than 15,000 recruiters, mostly located in the United States, recruits from the local population. Second, these recruiters have financial and other incentives that they can use to convince young adults to consider a military career. Such incentives include enlistment bonuses and college benefits. Finally, the services use advertising to raise the public’s awareness of the military, help the sales force of recruiters reach the target recruiting population, and generate potential leads for recruiters. This advertising can include television and radio commercials, Internet and printed advertisements, and special events. DOD believes that advertising is increasingly critical to its recruiting effort because convincing young adults to join the military is becoming more difficult. In 2001, over 70 percent of polled young adults said that they probably or definitely would not join the military, compared with 57 percent in 1976. The number of veterans is declining, which means that fewer young adults have influencers—a relative, coach, or teacher—with past military experience. Compounding these difficulties, proportionally more high school graduates are attending college. Finally, the perception that service in the military is arduous—and possibly dangerous—can inhibit recruiting efforts. DOD believes that these factors together make the military an increasingly harder sell as a career choice and life-style option for young adults. The Office of the Secretary of Defense is responsible for establishing policy and providing oversight for the military recruiting and advertising programs of the active and reserve components.
Within the Office of the Secretary of Defense, the Under Secretary for Personnel and Readiness is responsible for developing, reviewing, and analyzing recruiting policy, plans, and resource levels. The office provides policy oversight for advertising programs and coordinates them through the Joint Marketing and Advertising Committee. DOD’s strategic plan for military personnel human resources emphasizes the need to recruit, motivate, and retain adequate and diverse numbers of quality recruits. DOD’s recruiting and advertising programs are not centrally managed. All of the active components and some of the reserve components manage their separate advertising programs and work closely with their own contracted advertising agencies. DOD and the services believe that this decentralized approach better differentiates between the service “brands” (i.e., Army, Navy, Air Force, Marines). The Joint Advertising, Market Research, and Studies program, which is funded separately by DOD, exists to address common DOD requirements, such as conducting market research and obtaining and distributing lists of potential leads. The joint program has developed a DOD-wide advertising campaign to target the adult influencers of potential recruits, but this program had not been fully implemented at the time of our review. After most of the services experienced recruiting shortfalls in the late 1990s, DOD reviewed its advertising programs and identified opportunities for improvement. The services, except the Marine Corps, substantially revised their advertising campaigns and slogans and contracted with new advertising agencies. The services told us that their revised campaigns place them in a better position to recruit today’s young adults. Currently, almost all of the services and reserve components are achieving their recruiting goals, and advertising funding has almost doubled since fiscal year 1998. 
The increases in funding have not been used to buy more national media, such as television commercials. Rather, the funding increases are being directed to other types of advertising, such as special events marketing and the Internet, that are intended to better reach today’s young adults. Advertising funding for DOD increased from $299 million in fiscal year 1998 to $592 million in fiscal year 2003, an increase of 98 percent. Recruiting shortfalls in the late 1990s led to an examination and revision of DOD’s advertising programs. The Army, Navy, and Air Force missed their recruiting quantity goals, while some of the reserve components fell short of both their quantity and quality goals. Following these recruiting shortfalls, Congress asked the Secretary of Defense to review DOD’s advertising programs and make recommendations for improvements. DOD has revamped its advertising programs. The active-duty services, except for the Marine Corps, substantially revised their advertising campaigns and selected new advertising agencies as their contractors. They produced new advertising strategies and campaigns, complete with new slogans and revised television, print, and radio advertisements, along with new brand images defined by distinct logos, colors, and music. The services, in conjunction with their advertising agencies, conducted new research on young adults—their primary target market. During this period, the joint program developed an advertising campaign to target influencers of prospective recruits, as recommended in DOD’s review. In addition to their overall campaigns, all of the services have specialized campaigns to target diverse segments of the young adult population. For instance, the Navy created a Web site, called El Navy, which is designed to better communicate with the Hispanic market, and the Army has specifically tailored radio advertisements to reach the African American market. 
The services also incorporated a greater variety of public relations and promotional activities, such as participating in job fairs and sponsoring sports car racing teams, as an integral part of their advertising programs. As shown in table 1, there are essentially nine advertising programs that are managed separately by the military services, reserve components, and the Office of the Secretary of Defense. The active services told us that they are pleased with their new advertising campaigns and agencies, and they believe that the revised and better- funded campaigns have placed them in a more competitive position to recruit young adults. The sluggish U.S. economy has also narrowed employment options and is considered to be an important factor in easing the recruiting challenge. Today, all of the active services are meeting or exceeding their overall recruiting goals. Most of the reserve components are also achieving their recruiting goals. As of June 2003, the Army National Guard was falling short of its recruiting goals because of extensive overseas deployments and the implementation of stop loss (restrictions on leaving the military). Army National Guard officials stated that they expect to meet their goals by the end of fiscal year 2003. Some reserve officers expressed concerns about the negative impact of the recent high deployment rates on future recruiting. The services, especially the reserve components, continue to face challenges in recruiting individuals with some types of specific training or skills, such as medical, legal, and construction, and they have developed some specialized advertising campaigns targeted to recruit them. Since fiscal year 1998, the services have changed how they allocate advertising funding, according to the figures provided by DOD. 
Grouped into three broad categories, advertising funding includes (1) events marketing, public affairs and public relations, Internet, and other; (2) national media; and (3) direct mail and miscellaneous recruiting support. One of the categories—events marketing, public affairs and public relations, Internet, and other—has shown the greatest increase as a percentage of the total budget, nearly tripling from around 10 percent in fiscal year 1998 to 29 percent in fiscal year 2003. This increase was used partly to create and produce new advertising campaigns and strategies. Service officials told us that event marketing and public relations activities provide recruiters with greater opportunities to interact with potential recruits and supplement their national media campaigns and other methods of advertising. One example is the Army’s sponsorship of a sports racing car. (See fig. 1.) Internet and Web-site recruiting have also increased significantly from fiscal year 1998 through fiscal year 2003. All of the active military services have increased the amount of advertising on the Internet and have used interactive Web sites to complement their traditional recruiting and advertising methods. The expenditures for the national media category, which includes paid television, radio, and magazine advertisements, have remained relatively constant. This means that this category’s proportion of the growing total advertising budgets has actually declined. Specifically, expenditures for the national media in fiscal year 1998 were more than half of the advertising budget; currently, they represent about 40 percent. Television advertising—which offers tremendous reach to target audiences—dominates this category. Television advertising has remained the single largest advertising expenditure: paid television is still about a quarter of the total advertising budget for all of the military components.
DOD’s advertising funding has nearly doubled in the years since 1998, and most of these increases occurred in the earlier years. (See fig. 2.) Total advertising funding for all of the services increased 98 percent, from $299 million in fiscal year 1998 to $592 million in fiscal year 2003. The total DOD advertising budget request to Congress for fiscal year 2004 was $592.8 million. Since fiscal year 1998, DOD’s advertising funding, which is included in DOD’s operation and maintenance appropriations, has increased at a significantly higher rate than the total of all of DOD’s operation and maintenance funding. DOD officials cite media inflation as one reason for increased advertising funding. Inflation for some types of media, especially for television commercials, has been higher than general inflation. However, this is not the reason for all of the increases in advertising funding during this period because not all of the advertising funding is used for media advertising. For example, only about a quarter of advertising funds are currently spent to buy time to run television commercials. Growing advertising costs are only part of a rapidly increasing total investment in recruiting. The rising advertising and overall recruiting costs can be seen in the investment per enlisted recruit—an important bottom-line measure that shows the amount of money spent to enlist each recruit. Today, the services are spending almost three times as much on advertising per recruit as in fiscal year 1990. We examined data collected by DOD from the services, and it showed that the total advertising investment per enlisted recruit rose from approximately $640 to $1,900 between fiscal year 1990 and fiscal year 2003. As a proportion of the total recruiting investment, advertising has increased from 8 percent in fiscal year 1990 to 14 percent in fiscal year 2003. Bonuses and incentives to enlist have also increased substantially during this same period.
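The growth figures cited above follow directly from the dollar amounts in the report. As a minimal arithmetic cross-check (the dollar amounts are the report's; only the percentage computation is ours):

```python
# Cross-check of the advertising growth figures cited in the report.
# All dollar amounts come directly from the report text.

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Total DOD advertising funding, FY1998 -> FY2003 ($ millions):
# the report cites "an increase of 98 percent."
print(round(pct_change(299, 592)))   # -> 98

# Advertising investment per enlisted recruit, FY1990 -> FY2003 ($):
# the report says "almost three times as much."
print(round(1900 / 640, 2))          # -> 2.97
```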
The total recruiting investment per recruit increased almost 65 percent, from approximately $8,100 in fiscal year 1990 to $13,300 in fiscal year 2003. Very steep growth occurred between fiscal year 1998 and fiscal year 2002. This is shown in figure 3. The increases are not evenly distributed across the services’ advertising programs. (See table 2.) The Army has the largest advertising budget, and the Army active and reserve components account for nearly half (about $295 million) of the total advertising funding. The Marine Corps, at just under $50 million, has the smallest advertising budget. The Air Force has experienced the most significant increase in funding, in part owing to the creation of its first national television campaign. The Navy’s advertising funding has also increased, but this is primarily due to the addition of costs related to the Blue Angels and a program to test recruiting kiosks at public locations. DOD’s Joint Advertising, Market Research, and Studies Program is responsible for (1) providing market research and studies for recruiting and (2) developing an advertising campaign to target adult influencers, such as parents, coaches, and career counselors. Currently, the joint program is conducting market research and studies and providing other support for the services’ advertising programs, such as purchasing lists of high school students and recent graduates for use in mailing advertisements. In addition, the program is implementing a limited print advertising campaign targeting influencers in fiscal year 2003. The joint advertising campaign has not had consistent funding. Program managers told us that the current funding level is insufficient to fully implement the influencer advertising campaign they have developed. In past years, DOD cut funding for the joint advertising program because of concerns that the program office was not adequately executing its advertising budget. 
For fiscal year 2003, Congress provided the joint advertising program with less funding than DOD requested, and DOD subsequently reallocated part of the remaining joint advertising funding to a program that it considered a higher priority. DOD does not have adequate outcome measures to evaluate the effectiveness of its advertising as part of its overall recruiting effort. Effective program management requires the establishment of clear objectives and outcome measures to evaluate the program, and DOD has established neither. This has been a long-standing problem for DOD primarily because measuring the impact of advertising is inherently difficult, especially for a major life decision such as joining the military. Owing to the absence of established advertising objectives and outcome measures, DOD has not consistently collected and disseminated key information that would allow it to better assess advertising’s contribution to achieving recruiting goals. This information would include public awareness of military recruiting advertising and the willingness of young adults to join the military. Rather, the services report outcome measures that focus on achieving overall recruiting goals instead of isolating the specific contribution of advertising. Without adequate information and outcome measures, the Office of the Secretary of Defense cannot satisfactorily review the services’ advertising budget justifications nor can it determine the return on their advertising dollars as part of their overall recruiting investment. The Secretary of Defense is required by law to enhance the effectiveness of DOD’s recruitment programs through an aggressive program of advertising and market research targeted at prospective recruits and those who may influence them. DOD guidance requires the services, by active and reserve components, to report their resource inputs—how much they are spending on advertising. 
DOD guidance also requires the services to report on overall recruiting outcomes, that is, the quantity and quality of their recruits. However, the guidance does not require active and reserve components to report information specifically about the advertising programs' recruiting effectiveness. Effective program management requires the establishment of clearly defined objectives and outcome measures to evaluate programs. The Government Performance and Results Act (GPRA) was intended to help federal program managers enhance the effectiveness of their programs. It requires agencies to establish strategic plans for program activities that include, among other things, a mission statement covering major functions and operations, outcome-related goals and objectives, and a description of how these goals and objectives are to be achieved. GPRA shifted the focus of accountability for federal programs from inputs, such as staffing and resource levels, to outcomes. This requires agencies to measure the outcomes of their programs and to summarize the findings of program evaluations in their performance reports. The Office of Management and Budget's guidance implementing GPRA requires agencies to establish meaningful program objectives and identify outcome measures that compare actual program results with established program objectives. DOD does not have adequate information to measure the effectiveness of its advertising as part of the overall recruiting effort. Measuring advertising's effectiveness has been a long-standing problem, partly because it is inherently difficult to measure the impact that advertising has on recruiting. DOD has not established advertising program objectives, and it lacks adequate outcome measures of the impact that advertising programs have on recruiting. Outcome measures are used to evaluate how closely a program's achievements are aligned with program objectives and to assess whether advertising is achieving its intended outcome. 
DOD currently requires the services and reserve components to report on inputs and outcomes related to overall recruiting. These measures are important in assessing DOD’s overall recruiting success; however, they do not assess advertising’s contribution to the recruiting process. In our 2000 report, we noted that the services do not know which of their recruiting initiatives—advertising, recruiters, or bonuses—work best. This prevented DOD from being able to effectively allocate its recruiting investment among the multiple recruiting resources. We recommended that DOD and the services assess the relative success of their recruiting strategies, including how the services can create the most cost-effective mix of recruiters, enlistment bonuses, college incentives, advertising, and other recruiting tools. In comments on that report, DOD stated that it intended to develop a joint-service model that would allow trade-off analyses to determine the relative cost-effectiveness of the various recruiting resources. This has not been done, and the current DOD cost performance trade-off model does not support analyses across the services’ budgets. Similarly, a 2002 Office of Management and Budget assessment, known as the Program Assessment Rating Tool, found that DOD’s recruiting program had met its goal of enlisting adequate numbers of recruits; however, since there were no measures of program efficiency, the overall rating for the recruiting program was only “moderately effective.” In its assessment, the Office of Management and Budget noted the inability of the recruiting program to assess the impact of individual resources, such as advertising and recruiters. The services continually adjust the mix of funding between advertising and other recruiting resources to accomplish their program goals. 
They have generally increased spending on advertising, added recruiters, and increased or added bonuses at the same time, making it impossible to determine the relative value of each of these initiatives. Other studies have reached similar conclusions. In 2000, a review of DOD's advertising programs resulted in a recommendation that they be evaluated for program effectiveness. More recently, the National Academy of Sciences also cited the need to evaluate advertising's direct influence on actual enlistments. The academy is now doing additional work on evaluating DOD's advertising and recruiting. The lack of adequate information is partly attributable to the inherent difficulty of measuring advertising's effect on recruiting. Measuring advertising's effectiveness is a challenge for all businesses, according to advertising experts. Private-sector organizations cannot attribute increases in sales directly to advertising because many other factors influence the sale of products, such as quality, price, and the availability of similar products. Many factors affect recruiting as well, such as employment and educational opportunities, making it especially difficult to isolate and measure the effectiveness of advertising. Enlisting in a military service is a profound life decision. Typically, an enlistment is at least a 4-year commitment and can be the start of a long military career. Another complicating factor in measuring advertising's effectiveness is that advertising consists of different types and is employed differently throughout the recruiting process to attract and enlist potential recruits. Figure 4 displays the recruiting process and the role of advertising while a young adult may be considering enlisting in the military. As the figure shows, the use of multiple types of advertising at various stages in the recruiting continuum makes it difficult to assess the effectiveness of specific types of advertising. 
A single recruit may be exposed to some or all of these advertising types. Traditional advertising in the national media, such as television and magazines, is intended to disseminate information designed to influence consumer activity in the marketplace. The services typically use such national media to make young people aware of a military service, the career options available in a service, and other opportunities the services have to offer them. Direct mail, special events, and the services’ Web sites are utilized to provide more detailed information about the services and the opportunities available for persons who enlist. These marketing resources give people the opportunity to let a recruiter know they are possibly interested in enlisting in a service. Another contributing factor to the absence of advertising objectives and outcome measures is the lack of DOD-wide guidance. Officials from the Office of the Secretary of Defense view their role as overseeing the decentralized programs managed by the individual services and reserve components. They scrutinize the quality and quantity of recruits and gather data about the uses of advertising funds. However, they told us they were reluctant to be more prescriptive because of a concern about appearing to micromanage the successful recruiting programs of the active and reserve components. On the basis of our work, their sensitivity is warranted. The active and reserve components tend to guard their independence, seeking to maintain their “brand” and arguing that the current decentralized structure allows them to be more responsive to their individual needs. The Office of the Secretary of Defense seeks to coordinate the active and reserve components’ activities through joint committees and to centralize research that can be utilized by all. Defining exactly what to measure may be difficult, but it is not impossible. 
DOD and the services, as well as their contracted advertising agencies, generally agree that there are at least two key advertising outcomes that should be measured: (1) the awareness of recruiting advertising and (2) the willingness or “propensity” to consider joining the military. However, this is not clearly stated in any program guidance. Current DOD guidance requires only that the services provide information on funding for advertising, the quality and quantity of recruits, and the allocation of resources to the various advertising categories. Although this information is valuable—in fact, critical—it is not sufficient to evaluate and isolate the effectiveness of the services’ advertising programs. DOD’s efforts thus far to measure the awareness of recruiting advertising and willingness to join the military have met with problems. Inconsistent funding for the Joint Advertising, Market Research, and Studies program has hampered consistent collection of this information. DOD has sponsored an advertising tracking study designed to monitor the awareness of individual service campaigns since 2001. However, officials from the Army, Navy, and Marine Corps told us that they do not regularly use the research provided by this study. According to program officials, there were numerous problems with the advertising tracking study. DOD is implementing changes to the study that are intended to improve its usefulness to all of the active and reserve components. In the absence of reliable and timely advertising tracking, the Army implemented its own tracking study, and the Air Force is currently planning an experimental study to assess the effectiveness of its national television advertising campaign, according to program managers. To monitor the willingness to join the military, DOD sponsors youth and adult polls, which are designed to track changes in attitudes and young adults’ aspirations. 
These polls replaced the Youth Attitude Tracking Survey, which had been in place for a number of years and provided long-term trend data about the propensity of young adults to consider the military. The services expressed concern that the current polls ask questions that are significantly different from those asked in the prior survey, which makes the analysis of trends difficult. DOD officials also pointed to research indicating that advertising is a cost- effective recruiting investment when compared with other recruiting initiatives. For example, a report that was done for DOD found that it was less expensive to enlist a recruit through increased investments in advertising than through increased investments in military pay for new recruits in the Army and Navy. Similarly, a study for DOD analyzed the marginal cost of different recruiting initiatives and concluded that, under certain conditions, it was more cost-effective to invest additional funds in advertising than in military pay for recruits or recruiters. DOD officials told us that these reports, which used data from the 1980s and early 1990s, provide the best research available on the topic. However, the situation has changed dramatically in recent years. DOD has altered its advertising and recruiting strategies and is spending much more on advertising. Advertising itself is also changing and is more fragmented with an expanding array of television channels and other media. Finally, media inflation, which has increased faster than general inflation even in the sluggish economy, has lessened buying power. Funding devoted to advertising has increased considerably since fiscal year 1998. Although the military services are now generally meeting their overall recruiting goals, the question of whether the significant increases in advertising budgets were a main contributor to the services’ recruiting successes remains open. 
During the same period, DOD also greatly increased funding for bonuses and other incentives to enlist recruits. At the same time, the U.S. economy slowed dramatically, narrowing the other employment options available to young people. These factors make it difficult to disentangle the effects of the internal DOD investments made in recruiting from the changes in the external recruiting environment. Even though the effect of advertising is inherently difficult to measure, this issue needs to be addressed. This is crucial because DOD is now spending nearly $592 million annually on recruiting advertising, or about $1,900 per enlisted recruit. In addition, the total funding for all of DOD’s recruiting efforts is now almost $4 billion. DOD needs better advertising outcome measures to allow it to oversee and manage the advertising investment as part of its overall recruiting effort. DOD and the services have an understandable focus on the most important program outcome—to ensure that the military has enough quality recruits to fill its ranks. Judged by this short-term measure, the recruiting programs are successful. But now that DOD is meeting its recruiting goals, should it reduce advertising funding or continue at its current funding levels? DOD believes that continued investments in advertising are critical to keeping awareness up in the young adult population and combating the declining propensity among today’s young adults to join the military. However, DOD has neither stated these goals clearly in its guidance, nor consistently gathered information to ensure that these objectives are being met. Now that it is meeting its recruiting goals, DOD needs to turn its attention to program effectiveness and efficiency to ensure that the active and reserve components are getting the best return on their recruiting and advertising investments. 
To improve DOD’s ability to adequately measure the impact of its advertising programs on its recruiting mission, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to issue guidance that would (1) set clear, measurable objectives for DOD’s advertising programs; (2) develop outcome measures for each of DOD’s advertising programs that clearly link advertising program performance with these objectives; and (3) use these outcome measures to monitor the advertising programs’ performance and make fact-based choices about advertising funding as part of the overall recruiting investment in the future. DOD concurred with all of our recommendations. In commenting on this report, DOD stated that the Office of the Under Secretary of Defense for Personnel and Readiness, in concert with the services, will develop an advertising strategic framework to provide overall direction for DOD’s advertising programs. The framework, with associated outcome measures, would allow the office to monitor advertising results regularly and make fact-based decisions at a strategic level. It would provide an overarching structure within which each service would develop its own advertising program strategy, program objectives, and outcome measures. The framework would also direct the activities of the DOD joint program to ensure support to the services. DOD also commented that current research has not advanced to the point where models exist that adequately account for the many factors that affect recruiting as well as for the differences in the services. DOD stated that it will address this research gap through several initiatives intended to advance the measurement of the performance of recruiting and advertising. The National Academy of Sciences is currently developing an evaluation framework for recruiting and advertising and expects to publish a report in early 2004. DOD’s comments are provided in their entirety in appendix II. 
DOD officials also provided technical comments that we have incorporated as appropriate. We are sending copies of this report to interested congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. We will send copies to other interested parties upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-5559 if you or your staffs have any questions regarding this report. Key contributors to this report were John Pendleton, Lori Atkinson, Nancy Benco, Kurt Burgeson, Alan Byroade, Chris Currie, LaTonya Gist, Jim McGaughey, Charles Perdue, Barry Shillito, and John Smale. To describe the changes in the Department of Defense’s (DOD) advertising programs and advertising funding trends since the late 1990s, we reviewed advertising exhibits in the operation and maintenance congressional justification books as well as budget information provided by the Office of the Secretary of Defense. Since our objective was to look at broad funding trends, we did not reconcile these requested amounts with actual obligations or expenditures by the active and reserve components. We interviewed active and reserve component officials to understand program changes since the late 1990s. We obtained recruiting mission goals and actual accessions back to fiscal year 1990 from the Office of the Secretary of Defense and the services. We obtained information on the quality of accessions of each of the active and reserve components back to fiscal year 1990, as well as the investment per active enlisted accession back to fiscal year 1990. We reviewed information from the Defense Human Resources Activity and the Joint Marketing and Advertising Committee for discussions of advertising programs. The services provided additional information regarding the types of advertising media they use. 
To assess the adequacy of the measures used by DOD to evaluate the effectiveness of advertising, we reviewed information on outcome measures used to evaluate the effectiveness of advertising provided by each of the active and reserve components; the advertising agencies that are their contractors; and the DOD Joint Advertising, Market Research, and Studies program. We spoke with the advertising contractors to learn what measures of effectiveness they are aware of and use. We also reviewed the requirements for establishing program objectives and outcome measures in the Government Performance and Results Act and in Office of Management and Budget guidance. We interviewed DOD and advertising officials from each of the active and reserve components, as well as representatives from the services’ advertising agencies. We also reviewed their programs, procedures, and oversight activities. These interviews were conducted with officials in the Office of the Under Secretary of Defense for Personnel and Readiness; Office of the Under Secretary of Defense (Comptroller/Chief Financial Officer); Defense Human Resources Activity, Joint Advertising, Market Research, and Studies Office; Army Accessions Command, Fort Knox, Kentucky; Air Force Recruiting Service, Randolph Air Force Base, Texas; Navy Recruiting Command, Millington, Tennessee; Marine Corps Recruiting Command, Quantico Marine Corps Base, Virginia; Army National Guard Recruiting and Retention Command, Arlington, Virginia; Naval Reserve Command, New Orleans, Louisiana; Air Force Reserve Command, Robins Air Force Base, Georgia; and the Air National Guard Office of Recruiting and Retention, Arlington, Virginia. We also interviewed officials at the contracted advertising agencies for the joint program, the Army, the Navy, the Marine Corps, and the Air Force. We reviewed reports on recruiting and advertising from DOD, the Congressional Research Service, the private sector, and others. 
We obtained recruiting advertising budget and funding data for types of advertising from the Office of the Secretary of Defense. We reviewed, but did not verify, the accuracy of the data provided by DOD. We conducted our review from October 2002 through July 2003 in accordance with generally accepted government auditing standards. Program Evaluation: Strategies for Assessing How Information Dissemination Contributes to Agency Goals. GAO-02-923. Washington, D.C.: September 30, 2002. Military Personnel: Services Need to Assess Efforts to Meet Recruiting Goals and Cut Attrition. GAO/NSIAD-00-146. Washington, D.C.: June 23, 2000. Military Personnel: First-Term Recruiting and Attrition Continue to Require Focused Attention. GAO/T-NSIAD-00-102. Washington, D.C.: February 24, 2000. Military Recruiting: DOD Could Improve Its Recruiter Selection and Incentive Systems. GAO/NSIAD-98-58. Washington, D.C.: January 30, 1998. Military Personnel: High Aggregate Personnel Levels Maintained Throughout Drawdown. GAO/NSIAD-95-97. Washington, D.C.: June 2, 1995. Military Recruiting: More Innovative Approaches Needed. GAO/NSIAD-95-22. Washington, D.C.: December 22, 1994. Military Downsizing: Balancing Accessions and Losses Is Key to Shaping the Future Force. GAO/NSIAD-93-241. Washington, D.C.: September 30, 1993.
The Department of Defense (DOD) must convince more than 200,000 people each year to join the military. To assist in recruiting, the military services advertise on television, on radio, and in print and participate in other promotional activities. In the late 1990s, some of the services missed their overall recruiting goals. In response, DOD added recruiting resources by increasing its advertising, number of recruiters, and financial incentives. By fiscal year 2003, DOD's total recruiting budget was approaching $4 billion annually. At the request of Congress, GAO determined the changes in DOD's advertising programs and funding trends since the late 1990s and assessed the adequacy of measures used by DOD to evaluate the effectiveness of its advertising. GAO recommends that DOD set clear, measurable advertising objectives and develop outcome measures that link advertising performance to those objectives. Since the late 1990s, DOD has revamped its recruiting advertising programs and nearly doubled the funding for recruiting advertising. The military services have revised many of their advertising campaigns and focused on complementing traditional advertising, such as by increasing the use of the Internet and participating in more promotional activities, such as sports car racing events. DOD's total advertising funding increased 98 percent in constant dollars from fiscal year 1998 through fiscal year 2003, from $299 million to $592 million. The advertising cost per enlisted recruit has nearly tripled and is now almost $1,900. The military services agree that the revised strategies and increased investments have energized their advertising campaigns and better positioned them to recruit in an increasingly competitive marketplace. Today, almost all of the active and reserve components are meeting their overall recruiting goals in terms of the quality and quantity of new recruits. DOD does not have clear program objectives and adequate outcome measures to evaluate the effectiveness of its advertising as part of its overall recruiting effort. 
Thus, DOD cannot show that its increased advertising efforts have been a key reason for its overall recruiting success. Isolating the impact of advertising on recruiting efforts is inherently difficult because joining the military is a profound life decision. Moreover, DOD has not consistently tracked key information, such as public awareness of military recruiting advertising and the willingness of young adults to join the military. Such data could be used to help evaluate the effectiveness of advertising. Without sufficient information on advertising's effectiveness, DOD cannot determine the return on its advertising funding or make fact-based choices on how its overall recruiting investments should be allocated.
Contractors have a long-standing and essential role in administering the Medicare program, including conducting program integrity activities, such as postpayment claims reviews, which are integral to protecting the Medicare program from improper payments or fraud. In 2012, the four types of contractors we examined conducted about 1.4 million claims reviews that involved examining documentation sent in by providers, which represented less than one percent of all FFS claims in that year. MACs conduct postpayment claims reviews on a small percentage of paid claims to determine if the payments were proper based on the underlying documentation. MACs use the findings from postpayment claims reviews to help prevent future payment errors, for example, by reviewing claims received from specific providers or for specific services with a history of improper payments to determine whether additional action is needed to prevent similar improper payments in the future. As of February 2014, A/B MACs processed Medicare Part A and Part B claims from providers in each of 12 jurisdictions nationwide, and 4 DME MACs processed DME claims from providers in each of 4 jurisdictions nationwide. In 2012, A/B and DME MACs conducted 84,070 postpayment claims reviews, or 6 percent of about 1.4 million total postpayment claims reviews conducted that year. The mission of the ZPICs is to identify and investigate potentially fraudulent FFS claims and providers in each of seven geographic jurisdictions, which are called zones. They use several methods to investigate potentially fraudulent claims and providers, including postpayment claims reviews. In 2012, ZPICs conducted 107,621 postpayment claims reviews, or 8 percent of the total postpayment claims reviews that year. The CERT contractor conducts postpayment claims reviews on a nationwide random sample of claims, which are used to annually estimate the national Medicare FFS improper payment rate. 
This helps CMS comply with legal requirements for improper payment reporting. The CERT contractor's reviews are used to estimate the national Medicare improper payment rate, and to estimate the improper payment rate for each MAC and by type of service and provider. In 2012, the CERT contractor conducted 41,396 postpayment claims reviews used to estimate the improper payment rate, or 3 percent of the total postpayment claims reviews conducted that year. The mission of the RAs is to conduct postpayment claims reviews to identify improper payments not previously identified through MAC claims processing or other contractors' reviews. Following a demonstration of recovery auditing required by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003, the Tax Relief and Health Care Act of 2006 established the National RA program. Use of RAs expands the capacity for claims reviews without placing additional demands on CMS's budget, because the RAs are paid from funds recovered rather than appropriated funds. As a result of lessons learned during the RA demonstration project, and to establish tighter controls on RAs, CMS imposed certain postpayment requirements on the RAs when it implemented the national program that it has not imposed on the other contractors. For example, prior to widespread use, RAs must submit to CMS for review and approval descriptions of the types of claims that they propose to review. CMS expects the RAs to select only those claims with the highest risk of improper payments. RAs must also submit the basis for assessing whether the claims for those services are proper. CMS established national RA operations in 2009 with one RA in each of four regions that together cover the United States. Federal law requires CMS to pay RAs on a contingency basis from Medicare overpayments recouped. However, if an RA's overpayment determination is overturned on appeal, the RA is not paid for that claim. 
In contrast, MACs, ZPICs, and the CERT contractor are paid on the basis of the costs of the tasks they perform. CMS reported that overpayments collected by the RAs increased from about $75 million in fiscal year 2010 to about $2.29 billion in fiscal year 2012. In 2012, the RAs conducted over 1.1 million postpayment claims reviews, or 83 percent of the total postpayment claims reviews that year. CMS provides guidance to its contractors on how they should analyze data to select claims for review. Within that guidance, contractors select the specific claims to review. Each of the four types of contractors selects claims for postpayment claims review using somewhat different bases for selection. (See table 1.) The potential for duplicative reviews exists because claims may be selected by more than one contractor. CMS officials told us that, in some cases, duplication is appropriate. For example, CMS officials told us that the CERT contractor may review a claim that has already been reviewed by another contractor because it must select a random sample of claims to estimate the Medicare improper payment rate. Once a contractor selects a claim for review, the contractor notifies the provider that a particular claim is under postpayment review and requests documentation from the medical record to substantiate the claim. When the contractor receives the documentation, a trained clinician or coder evaluates the documentation in light of all applicable Medicare coverage policy and coding guidelines to determine whether the payment for the services or items claimed was proper. If a MAC or another contractor determines that an overpayment was made, the MAC will seek repayment and send the provider what is referred to as a demand letter. In the event of an underpayment, the MAC will return the balance in a future remittance. Providers may appeal the contractors' determinations. 
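The RA payment rule described above (a contingency fee paid from recouped overpayments, with no payment for a determination overturned on appeal) can be sketched as follows. The 9.5 percent rate is purely illustrative; actual RA contingency rates are set by contract and are not stated in this report:

```python
# Illustrative only: actual RA contingency rates are contractual and
# vary; this report does not specify them.
CONTINGENCY_RATE = 0.095

def ra_fee(overpayment_recouped: float, overturned_on_appeal: bool) -> float:
    """Fee paid to an RA for one claim under the contingency rule."""
    if overturned_on_appeal:
        # The RA is not paid for a claim whose overpayment
        # determination is overturned on appeal.
        return 0.0
    return overpayment_recouped * CONTINGENCY_RATE

print(ra_fee(10_000, overturned_on_appeal=False))  # 950.0
print(ra_fee(10_000, overturned_on_appeal=True))   # 0.0
```

This fee structure is what distinguishes RAs from the MACs, ZPICs, and the CERT contractor, which are reimbursed for the costs of the tasks they perform regardless of review outcomes.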
CMS developed the Recovery Audit Data Warehouse to track RA review activities and to prevent RAs from duplicating other contractors' claims reviews. Since most of the postpayment claims reviews were conducted by RAs, RA review would be most likely to cause any potential duplication. To prevent RAs from duplicating reviews, the MACs, ZPICs, the CERT contractor, and other entities can enter the claims they have reviewed into the Recovery Audit Data Warehouse, and the database stores them as excluded claims (or exclusions). Exclusions are permanent, meaning that excluded claims are never supposed to be available for review by the RAs. In addition, the ZPICs and law enforcement entities, such as the HHS Office of Inspector General (OIG), can upload claims into the Recovery Audit Data Warehouse that they may, but not necessarily will, select for postpayment review as part of a fraud investigation, and the database stores them as suppressions. While a claim is suppressed, it is unavailable for RA review. When a ZPIC or law enforcement agency concludes its investigation, the suppressions are required to be lifted. CMS then requires formerly suppressed claims for which medical records were requested to be excluded and thus become ineligible for RA review; all other formerly suppressed claims are to be released for possible future postpayment review. CMS requires ZPICs and law enforcement agencies to renew their suppressions every 12 months. If not renewed, the Recovery Audit Data Warehouse is to automatically release the suppressed claims. Before an RA begins postpayment claims reviews, it enters the claims it is considering for review into the Recovery Audit Data Warehouse. The database then checks to see if any of the claims the RA entered match the excluded or suppressed claims already stored in the database. 
If there is a match, the claim is not available for the RA to review and the Recovery Audit Data Warehouse will not allow the RA to enter any additional information about the claim. Although the other postpayment review contractors also are able to check the Recovery Audit Data Warehouse to see if the claims they are considering for review have already been reviewed by other contractors, the database is intended primarily to prevent RAs from conducting duplicative reviews. According to CMS officials, the amount of duplicative claims reviews among the four types of contractors is likely to be very small. Internal controls can help ensure that contractors are conducting postpayment claims reviews efficiently and effectively. Internal controls are the plans, methods, and procedures used to meet an organization’s mission, goals, and objectives, and help provide reasonable assurance that an organization achieves effective and efficient operations. For example, monitoring helps agencies ensure that contractor activities follow agency requirements. CMS requirements for contractors performing postpayment claims reviews and the manner in which the agency delegates authority and responsibility through these requirements help establish the control environment and control activities. Contractor requirements also establish the mechanisms that contractors use to communicate and interact with providers. Ineffective or inefficient requirements for claims reviews or insufficient monitoring and oversight create the risk of generating false findings of improper payments and an unnecessary administrative and financial burden for Medicare- participating providers and the Medicare program. The process of postpayment claims review requires contractors to interact and communicate with Medicare providers that directly provide medical services to beneficiaries. 
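The exclusion-and-suppression matching described above can be illustrated with a minimal sketch. The class and method names, the data structures, and the treatment of the 12-month renewal window are assumptions for illustration only, not CMS's actual implementation.

```python
from datetime import date, timedelta

class RecoveryAuditDataWarehouse:
    """Hypothetical sketch of the duplicate-check logic described above."""

    # Suppressions not renewed within 12 months are released automatically.
    SUPPRESSION_TERM = timedelta(days=365)

    def __init__(self):
        self.exclusions = set()   # claims permanently ineligible for RA review
        self.suppressions = {}    # claim_id -> date suppression was entered/renewed

    def add_exclusion(self, claim_id):
        # Exclusions are permanent: the claim is never again available to an RA.
        self.exclusions.add(claim_id)

    def add_suppression(self, claim_id, entered_on):
        # A ZPIC or law enforcement entity marks a claim as part of an
        # ongoing fraud investigation; renewal resets the date.
        self.suppressions[claim_id] = entered_on

    def release_expired_suppressions(self, today):
        # Suppressions that were not renewed within the 12-month term lapse.
        expired = [c for c, d in self.suppressions.items()
                   if today - d > self.SUPPRESSION_TERM]
        for c in expired:
            del self.suppressions[c]

    def conclude_investigation(self, claim_id, records_requested):
        # On conclusion, claims for which medical records were requested
        # become exclusions; all other suppressed claims are released.
        self.suppressions.pop(claim_id, None)
        if records_requested:
            self.exclusions.add(claim_id)

    def available_for_ra_review(self, candidate_claims, today):
        # An RA's candidate claims are checked against stored exclusions
        # and active suppressions; matches are withheld from review.
        self.release_expired_suppressions(today)
        return [c for c in candidate_claims
                if c not in self.exclusions and c not in self.suppressions]
```

In this sketch, an excluded claim is filtered out of an RA's candidate list indefinitely, while a suppressed claim reappears once its suppression lapses or is released without a medical-record request.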
Executive Order 13571—Streamlining Service Delivery and Improving Customer Service—was issued in April 2011 to improve government services to individuals and private entities by requiring agencies to develop customer service plans in consultation with OMB. OMB issued implementing guidance for agencies for those services that the agencies plan to focus on improving. The guidance calls on agencies to improve customers' experiences through a number of activities, including developing a process for ensuring consistency across the agency's interactions with customers and coordinating with other agencies serving the same customers, as well as identifying opportunities to use common materials and processes. Collaboration is important when multiple contractors that conduct similar activities are overseen by different CMS units. Previous GAO work has identified practices that can help federal agencies collaborate effectively when they work together to achieve goals. This work highlighted, for example, the importance of agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across organizational boundaries; and establishing mutually reinforcing or joint strategies to help align activities, processes, and resources to achieve a common outcome. These collaboration practices can also be useful when multiple offices within an agency—or an agency's contractors—work together toward a common purpose. CMS lacks reliable data to estimate the number of duplicative claims reviews that are conducted. The Recovery Audit Data Warehouse was not designed to estimate the number of duplicative reviews among all four types of contractors, and not all contractors have been entering information consistently into the database. CMS has not monitored contractors' data entry into the Recovery Audit Data Warehouse to ensure that it is complete and correct. 
CMS also has not issued complete guidance for MACs and ZPICs on whether it is appropriate for them to conduct duplicative reviews. CMS does not have reliable data to estimate the total number of duplicative claims reviews by all four types of contractors. In part, this is because CMS did not design the Recovery Audit Data Warehouse to estimate the total number of duplicative reviews. The RAs performed more than 80 percent of the claims reviews in 2012, and the Recovery Audit Data Warehouse was designed to track RA claims review activities and to prevent RAs from duplicating other contractors' claims reviews; it was not designed to track and prevent duplicate claims reviews by the other three contractor types. For example, the Recovery Audit Data Warehouse does not show whether contractors other than RAs, such as a MAC and a ZPIC, duplicated each other's claims reviews. Therefore, the Recovery Audit Data Warehouse data are not sufficient and reliable for accurately estimating the number of duplicative reviews by all four types of contractors. Another reason the Recovery Audit Data Warehouse cannot be used to estimate the amount of duplication is that not all of the four types of contractors consistently enter data into the database. For example, in response to our analysis that showed anomalies in the distribution of apparently duplicated claims, CMS officials told us that some MACs have been entering data from appeals reviews into the Recovery Audit Data Warehouse as exclusions. The officials noted that if Recovery Audit Data Warehouse data were used to estimate duplication, claims reviews for which MACs entered appeals of claims as exclusions would appear to be duplicative reviews. Similarly, we found that, in 2012, more than half of the ZPICs did not enter claims they reviewed into the Recovery Audit Data Warehouse as exclusions, which makes the database less effective in preventing the RAs from duplicating other contractors' claims reviews. 
CMS provided data to us that showed that five of the six ZPICs had not entered any claims into the Recovery Audit Data Warehouse as exclusions in 2012, although these ZPICs had performed postpayment claims reviews. (Although there are seven ZPIC zones, only six ZPICs are operational; four program safeguard contractors conduct reviews in the remaining zone.) CMS officials told us they do not monitor contractors' entry of exclusions and suppressions to ensure this information is accurate or complete, although they recognized, before we examined the Recovery Audit Data Warehouse exclusion data with CMS, that some ZPICs may not enter claims they review as exclusions. These officials stated that if ZPICs did not exclude these claims, the claims would be available for an RA to review, which could lead to inappropriate duplication. CMS officials told us that they had held meetings with all of the ZPICs to educate them about the available options in the Recovery Audit Data Warehouse that could augment their antifraud activities. ZPICs are also expected to inform an RA of an ongoing investigation, either by suppressing affected claims in the Recovery Audit Data Warehouse or through other methods of coordination. Checking the accuracy of data is part of a strong internal control environment and provides an agency with assurance that the data needed for operations are reliable and complete. CMS has issued guidance for RAs and the CERT contractor about whether they may conduct duplicative claims reviews. CMS's Medicare Program Integrity Manual states that RAs are prohibited from reviewing claims that have been reviewed by other contractors. In contrast, CMS's manual for the CERT contractor states that it should select and review a random sample of claims regardless of whether they have been reviewed by other contractors, in order to establish the Medicare improper payment rate accurately. 
In contrast, CMS’s However, CMS has not developed complete guidance for MACs and ZPICs about whether they are permitted to duplicate other contractors’ claims reviews. Although a CMS official told us that MACs are not permitted to conduct duplicative reviews and are required to check the Recovery Audit Data Warehouse to prevent duplication, CMS guidance states only that MACs are not permitted to duplicate the ZPICs’ claims reviews and does not address whether MACs are permitted to duplicate RA claims reviews. The guidance also does not address whether MACs are expected to check the Recovery Audit Data Warehouse to prevent duplication. Furthermore, representatives from two of the three MACs we spoke with believed that CMS permitted them to duplicate some contractors’ reviews. A CMS official stated that clear guidance could be helpful for contractors. In the absence of complete guidance, officials from CMS and representatives from a ZPIC and MAC differed in their understanding of whether ZPICs could conduct duplicative reviews. CMS officials, including those who oversee ZPICs, provided conflicting information about whether CMS permits ZPICs to conduct duplicative reviews, and CMS officials were unable to provide guidance to clarify whether duplication is allowed. Representatives from a ZPIC and some CMS program integrity officials told us that CMS permits ZPICs to conduct duplicative claims reviews because ZPICs must be able to review any claim they deem necessary to investigate potential fraud. However, CMS program integrity officials told us that ZPICs may not duplicate reviews conducted by RAs or MACs because overpayment for an improperly paid claim cannot be collected twice. 
Written guidance stating explicitly which contractors may conduct duplicative claims reviews and when the different contractor types should check the Recovery Audit Data Warehouse to avoid duplication is important to prevent inappropriate duplication among the contractors and to minimize confusion among CMS staff, CMS contractors, and stakeholders, such as providers, about what is permitted. It is also consistent with federal internal control standards, which call for agencies to establish control activities that enforce management's directives. (See GAO/AIMD-00-21.3.1 and GAO-01-1008G, sections related to control activities.) Without complete guidance for all postpayment claims review contractors about when duplicative reviews are permitted, CMS does not have assurance that MACs and ZPICs understand when and how to avoid duplicative reviews. Absence of such guidance can also leave providers confused about whether a duplicative review is appropriate. Several factors may reduce the efficiency and effectiveness of contractors' correspondence with providers. First, CMS's requirements differ across contractors for the content of two types of correspondence contractors often send to providers during postpayment review, and therefore contractors do not have to convey the same type of information to providers. Second, for the correspondence we reviewed, we found that contractors did not comply consistently with all applicable requirements. Third, the extent of CMS's oversight of contractor correspondence differed across contractor types. CMS requires contractors to include certain content in the correspondence they send to providers, but the requirements sometimes differ. All four types of contractors send providers an ADR if a provider's claim is selected for postpayment review. Upon completing their review, MACs, ZPICs, and RAs notify providers of their findings in correspondence we refer to as results letters. 
The CERT contractor does not send results letters. CMS's requirements for these types of correspondence include the reason the claim was selected for review, the information the provider must submit, the contractor's findings, and steps providers may take in response to those findings. Some CMS requirements for correspondence are similar across all contractor types. For example, CMS requires that ADRs for all contractors specify the number of days the provider has to submit documentation in response to a contractor's request. Similarly, CMS requires that all contractors' results letters regarding an overpayment describe the issues leading to the overpayment as well as any recommended corrective actions the provider can take to avoid similar billing errors in the future. However, other CMS requirements for correspondence differ by contractor type. For example, ADRs from MACs, RAs, and the CERT contractor, but not those from ZPICs, must give providers the option of submitting documentation via paper, fax, CD/DVD, or electronically. Similarly, results letters from MACs and ZPICs regarding claims that were overpaid are required to include the overpayment amount for each claim, but CMS officials told us RAs are not required to include these amounts. MACs' and ZPICs' results letters are required to include the signature of a person to contact with inquiries about the correspondence, whereas RAs' results letters are not required to contain this information. (Table 2 shows examples of CMS's requirements for results letters.) In addition, inconsistencies in CMS's guidance made it difficult to identify some of the requirements and their applicability. CMS conveys requirements through statements of work contained in the contracts, and in manuals that provide additional guidance on what contractors must do and that may be specifically referenced in a statement of work. 
For ADRs and results letters, we identified requirements in the RA statement of work (which applies to RAs), the Medicare Program Integrity Manual, the Medicare Contractor Beneficiary and Provider Communications Manual (which applies to MACs and ZPICs), and the CERT Manual and CERT statement of work (which apply to the CERT contractor). These documents sometimes contained differing guidance about the information the correspondence from different contractors must include. For example, the RA statement of work in effect during our review specifically required RA results letters to explain the procedures for recovering any overpayments and providers’ rights to appeal, but the Medicare Program Integrity Manual did not include these requirements for RAs. CMS officials told us that the RA statement of work was the primary guiding document for RA requirements; however, they also told us that they were not requiring the RAs to include some of the content requirements listed for results letters in the RA statement of work. As another example, the Medicare Program Integrity Manual contains some differing guidance to the MACs about their ADRs. While chapter 3 of the Medicare Program Integrity Manual instructs MACs to notify providers in ADRs that they have 45 days to respond to the request for documentation, this manual also includes a sample ADR that MACs may use for postpayment review that includes language informing providers that they have 30 days to respond to the documentation request. In addition, the Medicare Program Integrity Manual’s list of results letter requirements includes one statement indicating that the MACs and ZPICs must include appeals information in their results letters, as well as a different statement right next to it indicating that only MACs must do so. 
Without consistent and specific requirements for the content across contractor types, CMS does not have assurance that, consistent with federal internal control standards, providers receive similar and sufficient information during claims reviews to understand their responsibilities in responding or their rights if their claims are denied. Establishing consistent processes to communicate with providers is also aligned with OMB guidance to agencies to streamline service delivery and improve customer service, which can increase administrative efficiency. Further, inconsistencies in CMS’s requirements in contractors’ statements of work and the Medicare Program Integrity Manual could make it difficult for contractors to easily identify the most current set of requirements that apply to contractor correspondence. CMS officials told us in October 2013 that the agency has begun to explore making requirements for the content of ADRs more consistent across contractor types, such as by standardizing the introduction for the letters used by each contractor. Compliance with CMS requirements was not consistent across contractor types for the correspondence we reviewed. Our examination of 67 ADRs found that, on average, contractor ADRs overall complied with 94 percent of their applicable CMS requirements, but the compliance rate varied by contractor type. RAs had the highest compliance rate (100 percent) and the CERT contractor had the lowest rate (86 percent) (see fig. 1). Unlike the ADRs from the other three contractor types, the ADRs that the CERT contractor sends to providers are uniform and based on form letters written by CMS. (See app. I for a list of the requirements we analyzed for each type of contractor and each type of correspondence.) While all four types of contractors met most or all of their ADR requirements, compliance sometimes varied by requirement. 
For example, though representatives of several provider associations have reported that providers do not understand the reason their claims were selected for review, all the MAC and RA ADRs we reviewed complied with applicable requirements to identify the basis for the claim's selection. All of the contractors' ADRs that were required to include instructions for how to submit documentation to support the claim also complied. However, not all contractor ADRs complied with a requirement that had the potential to affect the timeliness of providers' responses to the ADRs: about 50 percent of the MAC ADRs, 30 percent of the ZPIC ADRs, and 100 percent of the CERT contractor ADRs gave providers fewer than the required number of days to submit documentation. Similar to ADRs, the number of requirements for results letters differed by contractor type. On average, 18 requirements were applicable to each MAC results letter, 14 for each ZPIC results letter, and 8 for each RA results letter. Some results letters did not comply with the requirement to document in the letter a reason for conducting the review or the rationale for good cause for having reopened the claims; instead, the letters directed the provider to the contractor's website or to the ADR sent by the contractor previously. Also, only 40 percent of the ZPICs' results letters complied with the requirement to cite a reason for noncoverage or incorrect coding for each claim. Contractors' inconsistent compliance with CMS's correspondence requirements may lead to provider confusion and increased administrative burden, and is not consistent with federal internal control standards to have control activities to ensure that management's directives are carried out and to monitor the performance of agency activities. For example, several provider associations indicated that it was burdensome to pull together complete documentation quickly. Therefore, giving providers response times that are shorter than required in ADRs can add to providers' burden. 
In addition, it can lead to less efficient claims reviews—and potentially unnecessary claims denials—if providers do not submit complete information (or respond) within the shorter time frame. When providers are not notified of their rights in results letters as required, they may have more difficulty exercising their rights within required time frames, which could have financial consequences for them. The extent of CMS oversight of the content of contractors’ postpayment review-related correspondence differs by type of contractor. For MACs, CMS staff may review correspondence with providers during their annual evaluations of each MAC’s performance. CMS staff indicated that they do not review ZPIC postpayment claims review correspondence. An independent RA validation contractor that evaluates RAs’ claims reviews also assesses each RA’s correspondence for clarity and accuracy by reviewing results letters associated with reviews included in a random sample of up to 100 claims per RA per month, and CMS officials noted that they review a sample of the RA correspondence during quarterly RA performance assessments. According to CMS officials, the CERT contractor’s ADRs are uniform and based on form letters written by CMS. CMS officials stated that they did not believe they needed to monitor the content of these ADRs since most of the text was a standard template written by CMS. Our findings that contractors did not comply consistently with CMS’s requirements for the correspondence we reviewed indicate that CMS’s monitoring efforts in this area are not adequate to meet federal internal control standards to monitor contractors’ activities. Without adequate monitoring of contractors’ compliance with correspondence content, CMS’s internal control is weakened and the agency does not have assurance that the correspondence is accurate and includes all of the content required. 
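The requirement-by-requirement compliance analysis described above can be sketched in a few lines. The field names and data layout here are hypothetical; the sketch only shows the kind of calculation behind figures such as an average compliance rate across a contractor's letters.

```python
def compliance_rate(letters, required_fields):
    """Average share of applicable requirements met per letter.

    letters: list of dicts, each representing one piece of correspondence,
        mapping a requirement name to True if the letter satisfied it.
    required_fields: the CMS requirements applicable to this contractor type.
    """
    if not letters or not required_fields:
        return 0.0
    per_letter = [
        sum(1 for f in required_fields if letter.get(f)) / len(required_fields)
        for letter in letters
    ]
    return sum(per_letter) / len(per_letter)
```

Because the number of applicable requirements differs by contractor type (for example, 18 per MAC results letter versus 8 per RA results letter), the denominator varies by contractor, which is why compliance rates are only comparable as percentages rather than raw counts.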
CMS requires quality assurance processes for each of the contractor types to help ensure the accuracy of their claims review decisions about whether claims were paid properly, but the processes differ by contractor type. These processes can be internal, external, or both. CMS requires the four contractor types we reviewed to have some type of internal quality assurance process to verify the accuracy of their claims review decisions about whether claims were paid properly. In addition, for the MACs, ZPICs, and RAs, CMS has implemented external validation reviews in which staff from CMS or an independent contractor review a selection of those contractors’ claims reviews. CMS requires the four contractor types to establish an internal quality assurance process for verifying the accuracy of their claims review decisions about whether the claim was proper to pay because the service was medically necessary and billed properly according to Medicare coverage and billing rules. In addition, CMS specifically requires the MACs, ZPICs, and CERT contractor to conduct interrater reliability (IRR) assessments—assessments that compare multiple decisions by their staff reviewers about the same claim to determine the extent of their agreement about whether the claim was paid properly or not—as part of their overall quality assurance efforts. CMS officials told us that for the new RA contracts the agency expects to award in 2014, CMS will also require RAs to conduct IRR assessments as part of their efforts. Contractors have discretion in how they conduct their IRR assessments, according to CMS officials. CMS monitors the results of contractors’ IRR assessments to varying degrees but has not collected that information routinely from all contractor types. CMS officials said they review monthly reports from the CERT contractor about its IRR assessment results. 
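The interrater reliability assessments described above compare multiple reviewers' decisions on the same claim and measure how often they agree. As an illustration only (the data layout and decision labels are assumptions, not CMS's method), a simple all-reviewers-agree rate could be computed as:

```python
def irr_agreement_rate(decisions_by_claim):
    """Share of claims on which all reviewers reached the same decision.

    decisions_by_claim: dict mapping a claim identifier to the list of
        decisions made by independent reviewers for that claim
        (e.g., 'proper' or 'improper').
    """
    if not decisions_by_claim:
        return 0.0
    # A claim counts as "agreed" when every reviewer gave the same decision.
    agreed = sum(1 for decisions in decisions_by_claim.values()
                 if len(set(decisions)) == 1)
    return agreed / len(decisions_by_claim)
```

More sophisticated IRR statistics (such as chance-corrected agreement measures) exist, and contractors have discretion in how they conduct their assessments; this sketch shows only the basic agreement-rate idea.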
CMS officials also told us they expect to see a roughly 98 percent accuracy rate for each month's IRR assessment, which they said the CERT contractor usually achieves. Beginning with the new contracts expected to be awarded in 2014, RAs also will be expected to provide monthly information to CMS about their IRR assessments, according to CMS officials. In contrast, CMS does not routinely collect information from the MACs or ZPICs about their IRR assessments. According to CMS officials, CMS does not require MACs to report regularly on their IRR assessments, but agency officials may, at their discretion, discuss MACs' IRR assessments during their routine on-site visits with MACs. CMS revised its requirements in October 2013 to state that MACs must report their IRR assessment results to CMS as directed, and CMS officials indicated that they will request MACs' IRR information on an as-needed basis rather than requiring all MACs to provide specific IRR information on a predetermined schedule. Similarly, CMS officials may discuss ZPICs' IRR assessments during ZPICs' annual performance assessments or at other times, but have not collected information routinely about ZPICs' IRR assessment results, according to CMS officials we spoke with. CMS has implemented additional quality assurance processes in which the MACs, ZPICs, and RAs have a sample of their claims reviews undergo external validation by CMS or an independent contractor, using clinical staff or coders, to assess the appropriateness of the contractors' claims review decisions about whether the claim was paid properly according to Medicare coverage and billing rules. These validation efforts differ in frequency and process. While CMS has not implemented a separate external validation of the CERT contractor's claims reviews, CMS officials told us that they have several other mechanisms to gauge the appropriateness of those reviews. 
MACs: According to CMS officials, in 2010, CMS implemented the Accuracy Project to learn more about MACs’ claims review processes and decision making, and to identify areas in which contractor training might be needed or where CMS could clarify or modify its guidance. For this project, a team of CMS clinical staff conducted validation reviews for a selection of MACs’ claims reviews. In 2010 and 2011, CMS reviewed more than 200 claims per year, and in each year CMS staff concurred with all but one of the MACs’ claims review decisions. In 2012, CMS increased the number of Accuracy Project staff and from September 2012 through February 2014 reviewed 1,160 claims and concurred with over 90 percent of the MACs’ decisions, according to CMS. CMS officials noted that this has been a limited effort to date, most recently focused on DME claims—specifically, power mobility devices. CMS officials said they plan to broaden this effort to include other services. For this broader effort, CMS plans to have the CERT contractor conduct validation reviews of a random sample of 100 claims per MAC. The CERT contractor will review the documentation the MACs used to reach decisions for those claims, such as medical records, and evaluate the accuracy of MACs’ decisions by determining whether the MACs properly paid, adjusted, or denied the claims on the basis of Medicare coverage, coding, and billing rules. In April 2014, CMS officials told us that the CERT contractor had just begun conducting validation reviews for several MACs. ZPICs: To assess the appropriateness of the ZPICs’ claims reviews, CMS staff examine a sample of those reviews when they conduct each ZPIC’s annual performance assessment. For each ZPIC, CMS staff select 5 investigations or cases of particular providers and then select 5 claims for each investigation or case, for a total of 25 claims. CMS clinical staff then assess whether the ZPICs’ decisions were consistent with CMS guidance and clinical judgment. 
According to CMS officials, they typically find that ZPICs' claims review decisions are satisfactory. RAs: CMS has established an external validation process in which the independent RA validation contractor uses licensed clinical professionals and coders to assess the quality of each RA's claims reviews. Each month the RA validation contractor reviews a random sample of up to 400 RA-reviewed claims (up to 100 claims per RA) that are proportional to the provider types that each RA determined had been paid improperly. CMS officials told us that the RA validation contractor sends monthly reports to CMS on the RAs' claims review accuracy rates. According to CMS's most recently published report to Congress on the RAs, the cumulative accuracy rates for fiscal year 2012 were between about 93 and 97 percent for the RAs. CERT contractor: While CMS does not require a separate external validation of the CERT contractor's claims reviews, CMS officials told us that the expansion of the Accuracy Project to involve CERT contractor reviews of MACs' claims will give them increased ability to examine the CERT contractor's claims review decisionmaking. They added that they review the CERT contractor's decisions on an as-needed basis, such as when MACs dispute the CERT contractor's findings of the MACs' improper payment rates, or if a provider raises a concern to CMS about a CERT contractor decision. CMS has strategies to coordinate internally among relevant CMS offices in developing the requirements for postpayment claims review contractors' activities and has strategies to facilitate coordination among the contractor types. However, differences in contractor requirements have continued, and there is less coordination between ZPICs and RAs compared with the coordination among other contractors. 
CMS has established strategies for coordination among the three CMS offices that oversee postpayment review contractors—the Center for Medicare, the Office of Financial Management, and the Center for Program Integrity—to review proposed new or updated requirements for contractors' activities. This internal coordination is important because contractors have many postpayment claims review activities in common, but responsibilities for overseeing postpayment review contractors are distributed across seven components within three CMS offices. (See fig. 3.) Thus, coordination strategies among CMS's offices are critical to help ensure that contractor requirements are consistent when possible and that the four types of contractors are conducting postpayment claims reviews efficiently and effectively. According to CMS officials, the Enterprise Electronic Change Information Management Portal system is formal in that it provides a uniform entry and validation process before any changes to CMS documents are finalized. In addition, there are written instructions for how CMS officials are to submit, share, and sign off on documents in the Enterprise Electronic Change Information Management Portal. However, officials in each office generally reviewed only the changes affecting the contractors they were responsible for managing. As a result, one office can make changes to requirements for the contractors it manages that might lead to differences among the four types of contractors' requirements, but these changes might not be thoroughly reviewed by all the offices. CMS's internal coordination strategies also have not resolved long-standing differences in requirements across contractor types. In our July 2013 report, we reported that inconsistencies in contractor requirements may impede the efficiency and effectiveness of claims reviews by increasing administrative burden on providers. For example, contractors had different time frames for providers to submit documentation, which might confuse providers and reduce compliance. 
CMS has begun to take steps to make contractor requirements more consistent, where appropriate. For example, in October 2013, CMS began requiring that MAC, RA, and CERT contractor ADRs all give providers the same options for submitting documentation. In addition, CMS officials said that their new RA contracts will require RAs to establish an IRR process to assess their claims reviews. Our findings in this report indicate that variations in requirements continue to exist. Such variations may result in inefficient processes and present challenges for providers in responding to documentation requests. Variation in requirements across contractors also is inconsistent with OMB’s executive-agency guidelines to streamline service delivery and with having a strong internal control environment. Further, this variation does not follow a practice that we have identified to help facilitate and enhance collaborative efforts across organizational boundaries. CMS has established multiple strategies to facilitate coordination among postpayment claims review contractors. In addition to using the Recovery Audit Data Warehouse to help prevent duplicative claims reviews, CMS requires MACs, RAs, and ZPICs operating in the same geographic jurisdiction to establish Joint Operating Agreements (JOA) to facilitate coordination. CMS also sponsors meetings between different types of contractors. According to CMS officials, the JOA is a mechanism for different types of contractors to document how they plan to work together. For example, CMS officials told us JOAs are used by MACs and RAs to agree on methods of communication and levels of service related to improper payments, such as data sharing, file transmissions, data warehouse uploads, and appeals. Officials also said ZPICs use the JOAs to come to an agreement with the other contractors on how they will coordinate to avoid duplicating claims reviews and to exchange information on potential fraud.
We did not determine whether all the JOAs between the different contractor types had been agreed upon by the contractors and were actively in use. We reported previously that when implementing coordination strategies, agencies benefit from having participants document their agreement on how they will collaborate and that agencies should consider whether all relevant participants have been included in and regularly attend collaboration-related activities. Although CMS guidance on what should be included in these agreements varies by the contractors’ relationships to each other, in general, JOAs are to specify how the contractors intend to interact with one another. For example, CMS guidance states that JOAs between MACs and RAs should include a communication process and time frames for adjustments, recoupment, appeals, inquiries, and receipt of provider names and addresses. In addition to requiring JOAs, CMS holds regular meetings between different types of contractors to help them coordinate their workloads and to facilitate discussions of vulnerabilities and issues related to postpayment claims reviews. Depending on the meeting, some types of contractors are required to attend, while others are invited. For example, to help MACs and RAs better target their claims selection to identify improper payments, CMS has required these contractors to meet weekly to discuss vulnerabilities identified by RAs. CMS officials said that ZPICs also are invited to these meetings, but they are not required to attend. CMS also requires the MACs, CERT contractor, and RAs to attend CMS’s annual medical review training conference, where CMS and contractor staff discuss CMS policy, program integrity vulnerabilities, and other medical review issues. Although ZPICs are not required to attend this conference, some do. Other than CMS’s annual medical review training conference, ZPICs and RAs do not have structured meetings with each other to share information on vulnerabilities and potential fraud.
All three ZPICs we spoke with said they meet with MACs on a regular basis to discuss medical review strategies, operational issues, and their JOAs with the MAC. According to one ZPIC, the interactions with the MAC ensure they are sharing best practices and receiving information expeditiously. However, two of the ZPICs we spoke with also said they do not coordinate with the RAs in their geographic jurisdictions. HHS’s OIG recently recommended that to ensure that RAs refer all appropriate cases of potential fraud, CMS should facilitate increased collaboration between RAs and ZPICs, such as by coordinating regular meetings to share information about potentially fraudulent coding or billing schemes and to advise RAs of emerging fraud schemes. According to the OIG report, CMS concurred with OIG’s recommendation. CMS officials also told us that the new RA statement of work for the upcoming procurement will include a requirement for the RAs to meet with the ZPICs in their geographic jurisdictions quarterly, at a minimum, to discuss trends in possible fraudulent billing. Established JOAs and regular meetings between different contractor types provide more assurance that postpayment claims reviews are conducted as efficiently and effectively as possible and opportunities to further reduce improper payments are not overlooked. Coordination among the contractors promotes sharing of information that can be critical to identifying vulnerabilities to improper payments. For example, while reviewing claims, each contractor may be identifying vulnerabilities to improper payment that may also be present in other jurisdictions, as well as improper payment issues that could be better addressed by another type of contractor. In addition, MACs’ and RAs’ claims reviews sometimes identify instances of potential fraud, which they are expected to refer to ZPICs for further investigation.
Postpayment claims review contractors play an important role in helping CMS reduce improper payments in the Medicare program. Because different types of contractors conduct similar claims reviews, CMS guidance, oversight, and coordination are essential to maintaining an appropriate balance between detecting improper payments effectively and efficiently and avoiding unnecessary administrative burdens. CMS has taken a number of steps to guide, oversee, and coordinate its contractors’ postpayment claims review efforts. However, further actions by CMS could help improve the efficiency and effectiveness of its contractors’ efforts. CMS does not have sufficient information to determine whether its contractors are conducting inappropriate duplicative claims reviews. We found that CMS has conducted insufficient data monitoring to prevent the RAs from conducting inappropriate duplicative reviews. If the Recovery Audit Data Warehouse information on excluded claims is inaccurate, as we found is sometimes the case, the database’s effectiveness in preventing the RAs from conducting inappropriate duplicative claims reviews is limited. In addition, while CMS has issued clear guidance for RAs and the CERT contractor about whether they are permitted to conduct duplicative reviews, it has not issued similar guidance for the MACs and ZPICs. If CMS does not intend for the MACs and ZPICs to conduct duplicative reviews, issuing complete guidance stating so is important to prevent inappropriate duplication. Furthermore, having consistent guidance and ensuring that contractors comply with the requirements that apply to them can improve the efficiency and effectiveness of contractors’ communication with providers. It is important that providers understand the postpayment claims review process, including what documentation they need to send to contractors, the steps in the review process, and their rights.
More consistent requirements and better monitoring of contractors’ compliance with correspondence content guidance would increase CMS’s assurance that providers are given similar and sufficient information during claims reviews and that the correspondence is accurate and includes all of the required content. Although CMS has strategies to coordinate internally among the CMS offices that oversee postpayment claims review contractors—as well as strategies to facilitate coordination among the contractors themselves—differing requirements for the postpayment claims reviews conducted by different types of contractors continue to exist. CMS is currently working to address some of the differences but will need to remain vigilant as requirements are updated in the future. Moreover, CMS must ensure that its current methods for fostering effective collaboration among contractors are working as intended. The comparatively limited amount of required communication between ZPICs and other contractors addressing improper payment issues reduces CMS’s assurance that the four types of postpayment contractors that we reviewed are coordinating as effectively as possible to reduce improper payments and fraud.
In order to improve the efficiency and effectiveness of Medicare postpayment claims review efforts and simplify compliance for providers, we recommend that the Administrator of CMS take the following four actions: monitor the Recovery Audit Data Warehouse to ensure that all postpayment review contractors are submitting required data and that the data the database contains are accurate and complete; develop complete guidance to define contractors’ responsibilities regarding duplicative claims reviews, including specifying whether and when MACs and ZPICs can duplicate other contractors’ reviews; clarify the current requirements for the content of contractors’ ADRs and results letters and standardize the requirements and contents as much as possible to ensure greater consistency among postpayment claims review contractors’ correspondence; and assess regularly whether contractors are complying with CMS requirements for the content of correspondence sent to providers regarding claims reviews. We provided a draft of this report to HHS and received written comments, which are reprinted in appendix II. In its comments, HHS agreed with our findings and concurred with all four recommendations. HHS also described steps it plans to take to remedy the issues we identified. We also provided portions of the draft report for review and comment to the contractors in our sample. We received responses via email from all but one contractor. The contractors generally agreed with our findings as applicable to their contractor type. Representatives from all four RAs commented on our finding that none of the RA results letters met the requirement to document in the letter a reason for conducting the review or the rationale for good cause for reopening the claims. 
Representatives from two RAs commented that they believed their results letters did sufficiently indicate the reason for the review, and representatives from three RAs pointed out that CMS had reviewed and approved the text of their letters. However, as we noted in the draft report, we determined that none of the RA results letters met this requirement because the text was not sufficient to provide a reason for review or rationale for good cause. In response to comments from the RAs, we have modified the text in the report to more prominently note that the results letters do refer providers to the contractor’s website or to the ADR to obtain the reason for the review. HHS and the contractors also provided technical comments, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix III. This appendix gives additional detail about two aspects of our methodology for addressing the report’s objectives. Specifically, we explain our methodology for selecting and assessing the sample of postpayment claims review contractors we interviewed as part of our work to address all four objectives. 
We also provide information on our methodology for selecting the correspondence—additional documentation requests (ADR) and results letters—from the contractors in the sample we used to examine how CMS’s requirements for contractor correspondence with providers help ensure effective communication. To learn about postpayment claims review contractors’ claims review efforts, we interviewed representatives from 11 postpayment review contractors. We selected all four Recovery Auditors (RA), because they conduct substantially more postpayment claims reviews than all the other contractors combined, as well as the Comprehensive Error Rate Testing (CERT) contractor, which reviews a nationwide random sample of claims. We also selected a nongeneralizable sample of 3 of the 16 Medicare Administrative Contractors (MAC), including 2 of the 12 MACs that process Part A and B claims and 1 of the 4 MACs that process claims for durable medical equipment. We selected these 3 MACs because they had been in operation for at least 6 months, performed postpayment claims reviews in 2012, and were geographically diverse. We selected a nongeneralizable sample of 3 of the 6 Zone Program Integrity Contractors (ZPIC) that had been in operation for at least 1 year and whose service areas included some of the same states served by the 3 MACs in our sample. To assess the extent to which CMS requirements for the content of contractors’ correspondence with providers help ensure effective communication, we focused our review on ADRs and results letters. We reviewed the Medicare Program Integrity Manual, Medicare Claims Processing Manual, Medicare Financial Management Manual, Medicare Contractor Beneficiary and Provider Communications Manual, CERT Manual, and the contractors’ statements of work to identify CMS requirements for this correspondence. Because there were some discrepancies and issues of clarity in the requirements across these sources, we confirmed the requirements with CMS.
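The compliance-rate arithmetic used in this appendix—requirements met divided by requirements met or not met, with a contractor-type average formed from summed numerators and denominators—can be sketched in a few lines of Python. The letter scores below are invented for illustration and are not GAO’s data or tooling:

```python
# Each letter is scored against its applicable requirements as
# "met", "not met", "not applicable", or "unknown" (hypothetical scores).
letters = [
    {"req_privacy": "met", "req_appeal_rights": "not met", "req_deadline": "met"},
    {"req_privacy": "met", "req_appeal_rights": "met", "req_deadline": "not applicable"},
]

def letter_rate(scores):
    """Per-letter compliance: met / (met + not met).
    'not applicable' and 'unknown' are excluded from both counts."""
    met = sum(1 for s in scores.values() if s == "met")
    not_met = sum(1 for s in scores.values() if s == "not met")
    return met, not_met, met / (met + not_met)

def contractor_average(all_letters):
    """Average for a contractor type: sum of letter-specific numerators
    divided by sum of letter-specific denominators."""
    met_total = sum(letter_rate(letter)[0] for letter in all_letters)
    denom_total = sum(letter_rate(letter)[0] + letter_rate(letter)[1]
                      for letter in all_letters)
    return met_total / denom_total

print(contractor_average(letters))  # 4 met out of 5 applicable -> 0.8
```

Note that “not applicable” and “unknown” scores drop out of both numerator and denominator, mirroring the method described in this appendix; a letter with no applicable requirements would need special handling to avoid dividing by zero.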
To assess the correspondence against these requirements, we asked the contractors in our sample to provide us with all correspondence associated with four claims they determined had been paid improperly and two claims that had been paid properly. To prevent bias, we asked them to select claims for which the ADR had been sent on a date that we randomly selected. If they had not sent ADRs on that date, we asked them to choose the closest date on which they had sent ADRs. Upon receiving the sample of contractor correspondence, we chose two frequently used postpayment review-related forms of correspondence—ADRs and results letters—to assess compliance. We limited our compliance assessment of results letters to those that reported improper payments. The final sample we reviewed included 67 ADRs and 47 results letters. We assessed compliance with CMS requirements in effect as of the date on the letter. Each letter was reviewed by two of our staff working independently. The reviewers compared each requirement with each letter’s content to determine if a requirement was “met,” “not met,” “not applicable,” or “unknown.” Afterward, the reviewers met to resolve any differences in their scores. Their final score was checked by a third reviewer. We calculated a compliance rate for each letter by dividing the total number of applicable requirements that were met (numerator) by the total number of applicable requirements that were met or not met (denominator). An average compliance rate for each type of contractor was based on the sum of the contractors’ letter-specific numerators divided by the sum of the letter-specific denominators. Requirements for ADRs are listed in table 3 and for review results letters in table 4. We note in the tables several requirements that we did not include in our assessment of compliance and why. In addition to the contact named above, Sheila K.
Avruch, Assistant Director; Robin Burke; Carrie Davidson; Iola D’Souza; Carolyn Garvey; Leslie V. Gordon; Richard Lipinski; Elizabeth Morrison; Amanda Pusey; and Jennifer Whitworth made key contributions to this report. Medicare Program Integrity: Contractors Reported Generating Savings, but CMS Could Improve Its Oversight. GAO-14-111. Washington, D.C.: October 25, 2013. Medicare Program Integrity: Increasing Consistency of Contractor Requirements May Improve Administrative Efficiency. GAO-13-522. Washington, D.C.: July 23, 2013. GAO’s 2013 High-Risk Update: Medicare and Medicaid. GAO-13-433T. Washington, D.C.: February 27, 2013. Medicare Program Integrity: Greater Prepayment Control Efforts Could Increase Savings and Better Ensure Proper Payment. GAO-13-102. Washington, D.C.: November 13, 2012. Medicare Fraud Prevention: CMS Has Implemented a Predictive Analytics System, but Needs to Define Measures to Determine Its Effectiveness. GAO-13-104. Washington, D.C.: October 15, 2012. Program Integrity: Further Action Needed to Address Vulnerabilities in Medicaid and Medicare Programs. GAO-12-803T. Washington, D.C.: June 7, 2012. Medicare Integrity Program: CMS Used Increased Funding for New Activities but Could Improve Measurement of Program Effectiveness. GAO-11-592. Washington, D.C.: July 29, 2011. Improper Payments: Reported Medicare Estimates and Key Remediation Strategies. GAO-11-842T. Washington, D.C.: July 28, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575T. Washington, D.C.: April 15, 2011. Status of Fiscal Year 2010 Federal Improper Payments Reporting. GAO-11-443R. Washington, D.C.: March 25, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. 
Washington, D.C.: March 9, 2011. Medicare: Program Remains at High Risk Because of Continuing Management Challenges. GAO-11-430T. Washington, D.C.: March 2, 2011. Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight. GAO-10-143. Washington, D.C.: March 31, 2010. Medicare Contracting Reform: Agency Has Made Progress with Implementation, but Contractors Have Not Met All Performance Standards. GAO-10-71. Washington, D.C.: March 25, 2010.
Several types of Medicare contractors conduct postpayment claims reviews to help reduce improper payments. Questions have been raised about their effectiveness and efficiency, and the burden on providers. GAO was asked to assess aspects of the claims review process. Building on GAO's July 2013 report on postpayment claims review requirements, this report examines, among other things, the extent to which CMS has (1) data to assess whether contractors conduct duplicative postpayment claims reviews, (2) requirements for contractor correspondence with providers to help ensure effective communication, and (3) strategies for coordination of claims review activities. GAO reviewed CMS's requirements for claims reviews; interviewed CMS officials, selected contractors, and provider associations; analyzed CMS data; assessed a nongeneralizable sample of 114 pieces of contractor correspondence for compliance with requirements; and assessed CMS's requirements and oversight against federal internal control standards and other guidance. The Centers for Medicare & Medicaid Services (CMS) within the Department of Health and Human Services (HHS) has taken steps to prevent its contractors from conducting certain duplicative postpayment claims reviews—reviews of the same claims that are not permitted by the agency—but CMS neither has reliable data nor provides sufficient oversight and guidance to measure and fully prevent duplication. 
The four types of contractors GAO reviewed that examine providers' documentation to determine whether Medicare's payment was proper included Medicare Administrative Contractors (MAC), which process and pay claims; Zone Program Integrity Contractors (ZPIC), which investigate potential fraud; Recovery Auditors (RA), tasked with identifying on a postpayment basis improper payments not previously reviewed by other contractors; and the Comprehensive Error Rate Testing (CERT) contractor, which reviews claims used to annually estimate Medicare's improper payment rate. CMS implemented a database to track RA activities, designed in part to prevent RAs, which conducted most of the postpayment reviews, from duplicating other contractors' reviews. However, the database was not designed to provide information on all possible duplication, and its data are not reliable because other postpayment contractors did not consistently enter information about their reviews. CMS has not provided sufficient oversight of these data or issued complete guidance to contractors on avoiding duplicative claims reviews. CMS requires its contractors to include certain content in postpayment review correspondence with providers, but some requirements vary across contractor types and are not always clear, and contractors vary in their compliance with their requirements. These factors can lead to providers receiving less information about the reviews and thus decrease effective communication with them. In addition, the extent of CMS's oversight of correspondence varies across contractors, which decreases assurance that contractors comply consistently with requirements. In the correspondence reviewed, GAO found high compliance rates for some requirements, such as citing the issues leading to an overpayment, but low compliance rates for requirements about communicating providers' rights, which could affect providers' ability to exercise their rights. 
CMS has strategies to coordinate internally among relevant offices regarding requirements for contractors' claims review activities. The agency also has strategies to facilitate coordination among contractors, such as requiring joint operating agreements between contractors operating in the same geographic area. However, these strategies have not led to consistent requirements across contractor types or full coordination between ZPICs and RAs. GAO previously recommended that CMS increase the consistency of its requirements, where appropriate, and the HHS Office of Inspector General has recommended steps to improve coordination between ZPICs and RAs. GAO recommends that CMS take actions to improve the efficiency and effectiveness of contractors' postpayment review efforts, which include providing additional oversight and guidance regarding data, duplicative reviews, and contractor correspondence. In its comments, the Department of Health and Human Services concurred with the recommendations and noted plans to improve CMS oversight and guidance.
Over time, cruising has developed into a highly concentrated industry with three primary carriers. At the end of September 2003, two companies, Carnival Cruise Lines and Royal Caribbean Cruises, Ltd., controlled 86.4 percent of the market in North America, with NCL being the next largest cruise provider, holding a little less than 9 percent of the North American market. (See fig. 1.) These companies are foreign-owned and operate foreign-built vessels. Carnival Cruise Lines is incorporated in Panama with its North American ships flying the Bahamian or Panamanian flag. Royal Caribbean is a Liberian corporation, with ships flying the Bahamian or Norwegian flag. NCL is a subsidiary of Star Cruises, a Bermuda corporation headquartered in Hong Kong, with its ships in North America flying the Bahamian flag. While there are several U.S. companies in the cruise industry, such as Disney and Radisson Seven Seas, these companies also elect to operate foreign-built vessels under a foreign flag in order to operate under the same capital and operating cost structure as their foreign competitors. Currently, there are no large U.S.-flag cruise ships in operation, and no large new cruise ships have been built in the United States since 1958. This use of foreign-built ships is largely due to the higher costs anticipated when building a ship in the United States, rather than in shipyards in Italy, Germany, and elsewhere that have the infrastructure, expertise, and economies of scale for this segment of the market. Over the past decade, several bills have been introduced in the U.S. Congress with the objective of stimulating the development of a U.S.-flag fleet and growth in the domestic cruise ship trade, the travel industry, and port cities, although none have been enacted. Generally, these bills would have allowed foreign ships either to operate in the domestic trade or to be reflagged with the U.S. flag under certain specified conditions. For example, the U.S.
Cruise Vessel Act (S. 127), introduced in 2001, would have allowed U.S.-owned, foreign-built cruise ships to enter the domestic market for a limited time if the operators agreed to build replacement vessels in the United States. The bill was designed to allow new companies to enter the domestic market with existing vessels and immediately increase the size of the U.S. commercial fleet, thus providing new jobs for merchant mariners. Under the proposal, these foreign-built cruise ships would have been required to fully comply with all applicable U.S. laws, regulations, and tax obligations. Many federal agencies oversee U.S. maritime policy. For example, in the Department of Transportation, the Maritime Administration’s (MARAD) primary mission is to strengthen the U.S. maritime transportation system—including infrastructure, industry, and labor—to meet the economic and security needs of the nation. MARAD also seeks to ensure that the United States maintains adequate shipbuilding and repair services, efficient ports, and effective intermodal water and land transportation systems. MARAD programs are designed to promote the development and maintenance of an adequate, well-balanced, U.S. merchant marine. MARAD originally financed two of the ships that NCL will be operating in the Hawaiian Islands through its Title XI loan guarantee program under a project known as “Project America,” which provided loan guarantees to help construct two new cruise vessels for American Classic Voyages in a U.S. shipyard for use in the Hawaiian Islands. Congress also granted American Classic Voyages a monopoly in the Hawaiian market for the life of the vessels. However, American Classic Voyages filed for bankruptcy in 2001, and the partially completed hull of one ship and parts for the other were purchased by NCL for $29 million.
Subsequent to the purchase, NCL obtained the exemption, allowing it to complete these ships in a foreign shipyard and still operate them in Hawaii under the U.S. flag. The Coast Guard and Customs and Border Protection (CBP), within the Department of Homeland Security, are generally responsible for administering and enforcing maritime laws and U.S.-flag requirements, including the PVSA and U.S. vessel documentation laws, as well as the Jones Act. The Coast Guard handles documentation requirements for U.S.-flag ships—such as determining whether vessels meet the U.S.-ownership and crewing requirements in order to operate under the U.S. flag—and U.S.-built requirements in order to operate in domestic trade. Through this process the Coast Guard provides endorsements to vessels defining the type of trade in which they are allowed to engage, e.g., foreign trade, domestic trade, or fishing. The Coast Guard also conducts quarterly inspections on all vessels embarking passengers at U.S. ports. CBP also has a role in administering the PVSA, such as publishing rulings on the legality of proposed itineraries. CBP also has civil enforcement authority under the PVSA, with the ability to levy penalties on any passenger vessel operators engaging in service in the domestic market without the relevant Coast Guard endorsements. The current penalty that can be levied against a ship operator for a violation of the PVSA is $300 per passenger. The Federal Trade Commission (FTC) is responsible for ensuring that the nation’s markets are vigorous, efficient, and free of restrictions that harm consumers. The FTC exists to protect consumers by enforcing federal consumer protection laws and conducting economic research and analysis to inform all levels of government. In this regard, FTC conducted an analysis of competition in the cruise market and the potential competitive effects of a merger between two of the largest cruise lines and issued its report in October 2002.
After FTC’s study of the cruise market, in April 2003, Carnival Corporation acquired P&O Princess Cruises. Prior to the acquisition, Carnival Corporation was already the world's largest cruise company; after the acquisition, Carnival Corporation became even larger, with 13 separate brands, 66 cruise ships and 17 more on order, and combined annual revenues of $6.9 billion. In 1886, Congress passed the PVSA to protect the U.S. domestic maritime transportation industry from foreign competition. To provide this protection, it penalizes foreign vessels that transport passengers solely between U.S. ports. Many cruises provided by foreign vessels are to international destinations and, therefore, are not affected by the PVSA; however, several rulings and decisions interpreting the PVSA have expanded possible itineraries for foreign cruise vessels between U.S. ports that were once restricted. For example, rulings and decisions have found circumstances where voyages between two U.S. ports by foreign vessels do not violate the PVSA when the primary purpose of the voyage is to visit foreign ports. In addition, rulings and decisions have allowed foreign vessels to visit several U.S. ports on an itinerary, so long as a foreign port is included and the vessel disembarks its passengers at the port of embarkation. In these circumstances, the voyages in question are not considered to be domestic transportation between two U.S. points. The PVSA was originally designed to prevent U.S.-based vessels from facing strong competition in the domestic transportation market from maritime nations, such as Great Britain and Canada. Specifically, there was a concern about competition from Canadian vessels that were transporting passengers across the Great Lakes. 
The PVSA originally stated “no foreign vessel shall transport passengers between ports or places in the United States, either directly or by way of a foreign port, under a penalty of $2 for each passenger so transported and landed.” Congress thought that the $2 penalty per passenger would discourage this practice. Some industry associations and U.S. courts view the PVSA, U.S. vessel documentation laws, and the Jones Act as serving other purposes, including providing a ready fleet in times of national defense, sustaining a U.S. merchant marine, and supporting the U.S. shipbuilding industry. U.S. courts have said that the PVSA and the Jones Act have helped to secure the national defense by maintaining “a merchant marine of the best equipped and most suitable types of vessels sufficient…to serve…in time of war or national emergency.” Because vessels in the domestic trade must be U.S.-crewed, labor groups view the laws as protecting jobs for the U.S. merchant marine. According to data supplied by MARAD, over 1,000 passenger vessels are operating under the U.S. flag, employing U.S. seamen, including ferries, steamboats, and small cruise vessels; however, the last large U.S.-flag overnight cruise vessels ceased operations when American Classic Voyages declared bankruptcy in October 2001. In addition, because the PVSA and the Jones Act protect the domestic maritime transportation market for U.S.-built ships, they also support U.S. shipyards. While several U.S. shipyards routinely build passenger vessels for U.S.-flag operators such as ferry operators and steamship operators, U.S. shipyards have not built large overnight, ocean-going cruise ships, and the last large passenger liner built in the United States was completed in 1958. Several administrative rulings and judicial decisions have identified limited exceptions to the PVSA that allow certain vessel operations between U.S. ports by foreign passenger vessels.
One significant decision—which has allowed passenger travel between U.S. ports by foreign vessels as long as a distant foreign port is included—was a 1910 Attorney General opinion. This opinion states that an around-the-world cruise that started in New York, touched numerous foreign destinations, and ended in San Francisco did not violate the PVSA because the voyage could not be considered domestic trade. The Attorney General made this determination on the supposition that the purpose of the trip was not to travel from one U.S. port (New York) to another (San Francisco), but to travel to different locations around the world. In 1940, a federal court also found that the transportation of passengers on a foreign vessel from New York to Philadelphia that stopped in a foreign port was not “detrimental to the coastwise monopoly sought to be assured to U.S. vessels.” The court said this was not a violation of the PVSA because the vessel, which was originally scheduled to return to New York, was forced to dock at the Philadelphia port because it was carrying perishable cargo, requiring passengers to disembark in Philadelphia. The court found that it was not the purpose of the trip to transport passengers from New York to Philadelphia. Two regulations and rulings by CBP have also contributed to expansion of the number and variety of itineraries in which foreign-flag vessels can engage from and between U.S. ports. First, based on the 1910 Attorney General Opinion, CBP, in its regulations, interprets the PVSA to allow a foreign vessel to embark passengers at one U.S. port and disembark passengers at a different U.S. port, so long as the vessel makes a port of call at what the regulations define as a “distant foreign port,” such as Aruba or Curacao. Second, a 1985 CBP regulation allows round-trip cruises from a U.S. port that touch on a “nearby foreign port”—defined by the regulation as such places as Canada, Mexico, or Bermuda—to visit other U.S. 
ports and allow passengers to go ashore temporarily, as long as they return to the ship. For example, foreign vessels can embark passengers in New York, make a quick stop in Canada or Bermuda, then cruise to several other U.S. ports and return to New York without violating the PVSA. CBP’s decision to allow these types of itineraries was based on the supposition that the PVSA put some U.S. ports at a disadvantage in competition for tourist business. In its response to opposing comments, CBP stated that it is “of paramount importance in this area to consider the primary object of passengers in taking a voyage,” citing both the 1910 Attorney General Opinion and the 1940 court case as the authority for doing so. Table 1 summarizes these key rulings and decisions regarding the PVSA. The exemption granted to NCL to operate in Hawaii will likely have little impact on how the PVSA, U.S. vessel documentation laws, or the Jones Act are implemented by CBP and the Coast Guard. NCL’s exemption is from the U.S.-built requirement of U.S. vessel documentation laws, which allows NCL to operate foreign-built ships under the U.S. flag in limited domestic itineraries. Therefore, the PVSA will not apply to these vessels, as the PVSA only penalizes foreign vessels carrying passengers between U.S. ports. In addition, the Coast Guard deals with vessels on a case-by-case basis; and this exemption is specific to NCL’s three vessels and cannot be applied to any other vessels in any other trades. Furthermore, although Congress has enacted several specific exemptions to the PVSA, allowing foreign vessels to serve particular regions of the United States, no previous exemption has had an impact on the implementation of any other related laws. Exemptions have also been allowed under the Jones Act with no corresponding impact on the PVSA. In 2003, Congress effectively gave NCL an exemption from U.S. 
vessel documentation laws in order to operate certain foreign-built passenger vessels in a limited domestic area. Specifically, NCL is allowed to operate the two Project America vessels completed in a foreign shipyard and to reflag one additional foreign-built ship under the U.S. flag, in “regular service” in Hawaii. These ships are exempt from the U.S.-built requirement for service in these limited domestic itineraries and are considered qualified for this purpose; therefore, they are not subject to penalties under the PVSA, which applies only to foreign vessels carrying passengers between U.S. ports. The exemption requires that NCL operate these ships in regular service, defined in the exemption as the “primary service in which the ship is engaged on an annual basis,” between the islands of Hawaii and specifically prohibits NCL from transporting paying passengers to ports in Alaska, the Gulf of Mexico, or the Caribbean. NCL’s obligations for providing regular service to the Hawaiian Islands are somewhat ambiguous: the exemption is silent on service to the East and West coasts, so NCL is not prohibited from providing some service to these destinations, as long as the regular service requirement is met. CBP officials declined to speculate on how the regular service provision might be enforced if there is a challenge to the itineraries that NCL operates. Several maritime lawyers we spoke with suggested this requirement might be interpreted to mean that at least 51 percent of the individual vessel’s operations must be conducted in Hawaii. NCL officials told us, however, that their current plans are to use these vessels in the Hawaiian Islands year round. All of the allowances and restrictions of the exemption are specific to the two Project America vessels and the additional vessel to be reflagged by NCL and do not amend the PVSA or U.S. vessel documentation laws. 
Coast Guard officials stated that they have already confirmed that the vessel NCL has under construction, and the second vessel NCL intends to construct abroad, are the vessels referred to in the exemption; and NCL has already identified the vessel to be reflagged; therefore, the allowances of the exemption apply only to the three vessels. Coast Guard and CBP rulings regarding these laws are made on a case-by-case basis; and because the exemption is unique to the identified vessels, it should create no precedent on the implementation of these laws regarding other vessels. NCL’s exemption does not allow for further exemptions for other foreign cruise lines to be able to operate foreign-built vessels in Hawaii or anywhere else in the domestic trade. Additional legislation would be required to allow for any further domestic operations by foreign-built vessels. In addition, this exemption will likely not have any legal impact on the Jones Act and its restrictions on shipping cargo between U.S. points. Although interest groups and labor organizations link the PVSA and the Jones Act philosophically, as being parallel laws for passengers and cargo, respectively, numerous amendments and changes have been made to each law that have not affected the other. For example, in 1920, the PVSA was modified to allow permits to be issued for the transport of passengers by foreign vessels to or from Hawaii, a modification that lasted for 2 years. Furthermore, an exception to the PVSA was made in 1938 to allow for the transport of passengers by Canadian vessels between the New York ports of Rochester and Alexandria Bay. More recently, Congress passed the Puerto Rico Passenger Ship Act, which allows vessels not qualified to engage in the domestic trade to carry passengers between U.S. ports and Puerto Rico and between Puerto Rico ports. 
None of these exemptions has had an impact on transporting cargo, which falls under the jurisdiction of the Jones Act, nor has any justified the transportation of passengers outside the specific scope of the exemption. Furthermore, the rulings and decisions discussed earlier that have allowed foreign-flag vessels to transport passengers from and between U.S. ports, if a foreign port is visited, do not extend to freight transportation. For example, a foreign ship can pick up passengers in New York, travel to Paris and pick up passengers there, and return to Boston to disembark the passengers without violating the PVSA; however, the same ship cannot take freight cargo from New York, pick up additional cargo in Paris, and drop off the cargo in Boston without violating the Jones Act. The exemption allows NCL to offer exclusive all-domestic itineraries in Hawaii because no other large U.S.-flag passenger ships currently offer such service, and no other foreign-built ships can offer all-domestic itineraries. However, despite this advantage, NCL will likely have limited ability to exert pricing power on its exclusive itinerary because it will still have to compete with other vacation options. In addition, NCL’s exclusive right to operate foreign-built ships in U.S. domestic trade creates an additional obstacle for any large cruise lines attempting to compete in the domestic market under the U.S. flag. NCL is able to complete the ships abroad at a lower cost than would be possible in the United States, while any would-be entrant into the domestic market would have to build a ship in the United States and would therefore face a higher capital cost structure than NCL. However, prior to the exemption there were already substantial barriers to U.S.-flag entrants into domestic trade due not only to higher capital costs, but also to higher operating costs associated with the U.S. flag. 
Potential economic benefits from the exemption include expanded choice of cruise itineraries for consumers, enhanced sustainability of competition in the industry, employment growth, and generation of tax revenues. These benefits are contingent on NCL’s continued U.S.-flag operations, which analysts speculate might not be able to compete successfully with lower-cost, foreign-flag operations. As previously mentioned, the exemption allows NCL the exclusive right to operate certain foreign-built, U.S.-flag ships on wholly domestic Hawaiian itineraries. No other large U.S.-flag passenger vessels currently operate in domestic trade; and foreign-flag, foreign-crewed cruise ships cannot offer wholly domestic itineraries because of the PVSA. Therefore, although the exemption does not explicitly exclude any carriers from offering these itineraries, no other carriers are able to offer the same itineraries. In addition, prior to obtaining the exemption and prior to the bankruptcy of American Classic Voyages, NCL already had an exclusive itinerary stopping at Fanning Island, in the Republic of Kiribati, the closest foreign port to Hawaii. NCL’s agreement with Fanning Island for exclusive access, which lasts for a limited period, already gave NCL the ability to offer 7-day Hawaiian cruises, not feasible for other cruise lines that must include a farther foreign port, like Vancouver, Canada, or Ensenada, Mexico, which are 4 to 6 days’ sailing time from Hawaii. Figure 2 compares NCL’s exclusive 7-day domestic itinerary, scheduled to be available in the summer of 2004, with Hawaiian itineraries of foreign-flag vessels. Because NCL has the ability to offer unique Hawaiian Island itineraries without including foreign ports, NCL’s interisland cruises on its exempted ships will allow cruisers to spend more daytime hours in ports than other existing Hawaiian Island cruises. 
NCL’s proposed itinerary for wholly domestic Hawaiian cruises includes 59 daytime hours in port; however, NCL’s current 7-day cruise, which includes a stop at Fanning Island, offers only 28 daytime hours in port. In general, the greater number of hours in port is seen as more appealing to consumers. While NCL can operate exclusive itineraries, the exemption likely conveys only limited pricing power to NCL, even in the absence of another cruise line offering identical itineraries. According to a comprehensive cruise market analysis conducted by the FTC in 2002, a single cruise itinerary does not constitute a market; rather, competitive conditions should be assessed in the context of a market that includes all vacation options or, minimally, all other cruise options. Therefore, although no cruise lines will compete directly on the domestic itineraries, NCL will continue to face competition from comparable vacation options, such as land vacations and similar cruises in different geographic areas. NCL will also compete with foreign-flag vessels that operate with lower costs on other itineraries that include Hawaii. Those foreign-flag vessels could offer a lower price than NCL, which would make any theoretical attempt at a price increase by NCL unsustainable. One of the reasons for FTC’s broad market definition is its finding that cruise passengers are highly sensitive to price changes. In other words, an attempt by a cruise line to raise prices above competitive levels likely results in significantly fewer bookings. NCL anecdotally confirmed this finding: after strong sales during 2002, NCL attempted to raise prices by about 3 to 4 percent on its 2003 Norwegian Star 7-day Hawaiian-Fanning Island itineraries—which faced no competition from any other cruise line on the same itinerary—and bookings declined. 
From the outset, over a year from sailing dates, sales were slower than in 2002 for the same cruise, and NCL was forced to reduce its prices to fill the ship, resulting in approximately 8 percent lower revenue yields by the sailing date on the 2003 cruises compared with the 2002 cruises. Because the exemption permits NCL to complete construction of its U.S.-flag ships in a foreign shipyard at a lower cost than that of a comparable ship built in a U.S. shipyard, NCL has a large capital cost advantage over potential competitors that might attempt to build ships entirely in the United States for operation under the U.S. flag. Unless they also receive an exemption from the U.S.-built requirement, cruise lines entering the domestic market would have to build their ships in U.S. shipyards or refurbish an existing U.S.-built vessel overseas. Based on estimates from Project America, such building costs would likely be much higher than contract costs for foreign-built ships. For example, we compared the contract cost to construct the first Project America ship with the total projected cost for NCL’s Pride of America, built from the partially U.S.-built hull of the first Project America ship now being completed overseas. The Project America contract cost was between 35 and 54 percent, or $140 to $190 million, higher than total cost projections to complete the Pride of America in a German shipyard, as shown in figure 3. The disparity is likely even larger because the actual costs of the Project America ships were expected to exceed the contract costs. Incorporating adjustments to the Project America contract costs, the cost differential ranges from 71 to 95 percent higher, or $284 to $334 million higher. Cruise officials and shipbuilders state that U.S. construction costs are higher than foreign construction costs because U.S. shipyards have not developed the technical capability, a reliable supply chain, and the economies of scale to build cruise ships competitively. 
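The percentage and dollar differentials cited above can be cross-checked with a short illustrative calculation; the figures come from this report, but the division below is our own back-of-the-envelope arithmetic, not an analysis the report performed:

```python
# Hypothetical cross-check (calculation ours; dollar and percentage figures
# from the report): each pair of (dollar differential, percentage differential)
# implies a base cost, i.e., the projected cost to complete the ship abroad.
cases = {
    "contract cost, low end":  (140e6, 0.35),
    "contract cost, high end": (190e6, 0.54),
    "adjusted cost, low end":  (284e6, 0.71),
    "adjusted cost, high end": (334e6, 0.95),
}
for label, (dollar_diff, pct_diff) in cases.items():
    implied_base = dollar_diff / pct_diff
    print(f"{label}: implied foreign-completion cost ~ ${implied_base / 1e6:.0f} million")
# Both the unadjusted and the adjusted pairs imply a foreign-completion cost of
# roughly $350 to $400 million, so the two reported ranges are internally consistent.
```

Because the unadjusted and adjusted pairs imply the same underlying foreign-completion cost range, the two sets of reported differentials agree with each other.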
According to one shipbuilder we spoke with, while U.S. shipyards are experienced at building complex cargo and military vessels, cruise ships require wholly different construction techniques; and U.S. shipyards have not developed certain technical capabilities. One official asserted that U.S. shipyards might become competitive if they partner with foreign shipyards to learn the latest technology. In addition, officials from the American Shipbuilders Association acknowledge that, while U.S. shipyards currently have the ability to build the hull and superstructure of a cruise ship, U.S. shipbuilders, unlike European shipbuilders, do not have established and reliable supply chains for certain materials and other structures on a cruise ship, which are critical to efficient and timely completion of cruise ships. These officials said that they expect that the capital cost differential would be negligible if the U.S. shipbuilding industry grew and realized economies of scale; however, such growth seems unlikely given the current lack of demand for U.S.-built cruise ships and concerns about technical capabilities and undeveloped supply chains. Moreover, ships must be built in order for economies of scale to be realized, so the first ships built for any would-be U.S.-flag operation will likely have higher capital costs than NCL’s vessels. While the exemption affords NCL a capital cost advantage over would-be entrants, acquiring financing for U.S. ship construction may not be any more difficult because of the exemption. In theory, financiers would be less willing to provide financing for capital costs to an operator who will compete in a market with an existing competitor who has a lower capital cost structure. 
However, industry financial analysts we spoke with said that acquiring financing was equally difficult prior to the exemption because of the presence of competing lower-cost, foreign-flag cruise lines and would not necessarily be more difficult once NCL begins providing U.S.-flag service in the Hawaiian Islands. Furthermore, officials from the Office of Ship Financing within MARAD said that, while theoretically the NCL presence in the U.S. domestic market could affect decisions about applications for new vessels, they have not seen and do not expect to see any impact from the NCL exemption. They said that they receive so few applications for large cruise ships that they are unable to determine whether the number of applications has declined because of the NCL exemption. Furthermore, no applications for financing of large cruise ships have been denied or withdrawn because of the NCL exemption or NCL’s expected presence in the U.S. domestic market. Prior to the NCL exemption, cruises offered by lower-cost foreign-flag vessels already limited the likelihood of cruise lines entering the domestic market. With the possible exception of Hawaii, the close proximity of foreign ports-of-call in Canada, Mexico, Bermuda, and the Caribbean allows foreign-flag ships to serve U.S. cruise demand without meeting the requirements of operating under the U.S. flag or adding significant time or fuel costs to the voyages. Figure 4 shows examples of cruise itineraries between U.S. ports that foreign-flag vessels can offer. The availability of foreign-flag service on U.S. itineraries that include a foreign port-of-call reduces the likelihood that potential U.S.-flag carriers can offer competitive prices because U.S.-flag ships have higher capital and operating costs than foreign-flag ships. 
In addition to the higher ship construction costs discussed earlier, according to an industry trade organization, wage costs on U.S.-flag ships could be 30 to 100 percent higher than wage costs for a similar foreign-flag ship due to compliance with U.S. labor laws that require minimum wage, overtime compensation, payment of social security tax, and protection and indemnity coverage, requirements that do not apply to foreign-flag vessels. According to NCL officials, wage costs for their U.S.-flag operations will be 100 to 150 percent higher than wage costs for their foreign-flag operations. Cruise officials also stated that, due to regulations pertaining to overtime and labor requirements for U.S. seafarers, they would likely have to hire more U.S. workers at higher wages to serve the same number of passengers. Finally, U.S.-flag ships are liable for corporate income taxes, while foreign-flag ships typically incorporate in countries where their income is tax-exempt, resulting in an additional cost advantage for foreign vessels. See appendix II for additional information on laws that apply to U.S.-flag ships. Several economic benefits might be generated as a result of NCL’s exemption. These benefits include expanded consumer choice, continued competition in the industry, employment growth, and generation of tax revenues. The exemption expands consumer choice by allowing NCL to offer previously unavailable cruise itineraries. Hawaiian interisland cruises without a foreign port-of-call have not been available to potential cruisers since 2001, when American Classic Voyages filed for bankruptcy. As previously noted, following the exemption, NCL will operate exclusive interisland Hawaiian cruises on certain U.S.-flag ships. These new interisland cruises will be provided by cruise ships offering many of the amenities previously available only on foreign-flag ships. 
The exemption could improve NCL’s position relative to its competitors in the highly concentrated North American cruise market. According to MARAD data from July to September of 2003, Carnival and Royal Caribbean control a combined 86.4 percent of the North American cruise market, while NCL is the third-largest firm with 8.8 percent of the market. NCL’s ability to offer unique domestic itineraries, primarily in Hawaii, affords NCL an opportunity to further differentiate itself from its primary competitors. NCL’s differentiation is important because it provides travel agents with an incentive to sell NCL’s products. Officials from the American Society of Travel Agents and cruise lines agree that recommendations by travel agents play a significant role in determining which cruises customers choose to buy. While the share of airline and land vacation purchases made through travel agents has declined in recent years, travel agents still sell approximately 90 percent of all cruises. If NCL offered only the same itineraries as Carnival and Royal Caribbean, travel agents might have an incentive to discontinue sales of NCL products, because travel agents are paid commissions that often increase with the number of cruises sold on a particular cruise line. Without travel agents endorsing its products, NCL could have difficulty competing with Carnival and Royal Caribbean. However, the unique Hawaiian cruise products that NCL can now offer help NCL to continue to be the third major firm in the market. If there are only two major players in a market, there is a much higher probability of the two firms coordinating higher prices, thus hurting consumers. The recent acquisition of P&O Princess Cruises by Carnival Corporation resulted in a reduction from four major competitors to three. The FTC’s decision not to challenge the merger stated that a reduction from three to two major competitors would likely be more problematic for consumers. 
NCL’s operations under the exemption will create jobs on the exempted ships and in the regions where it offers itineraries, and they will likely increase tax revenue. According to NCL’s analysis of the Hawaiian market, its expanded operations will generate about 2,400 full-time shipboard jobs and additional shoreside employment in Hawaii. This estimate seems reasonable, because NCL must hire at least 800 U.S. employees per ship for three ships, as well as additional land-based employees. Some of these jobs might be transfers of jobs from other states to Hawaii and, thus, would not represent new benefits to the U.S. economy. An NCL consultant estimates total annual tax revenues from the exemption operations to be $126.5 million, including employee income taxes and social security taxes, airfare taxes, and customs, immigration, and ship passenger taxes. In addition, NCL’s U.S. subsidiary, NCL America—which will operate the exempted ships in order to meet the U.S.-ownership requirements needed to register the vessels under the U.S. flag—will be liable for corporate income taxes on any profits it earns; and it will be subject to the payment of employer payroll taxes in Hawaii. NCL estimates passenger expenditures will bring an additional $355 million annually to the regions where NCL operates. This value assumes that all vessels operate at full capacity. These passenger expenditures represent a net benefit to the U.S. economy only when passengers choose the domestic NCL cruise over a foreign vacation or other foreign spending. To the extent that the passengers’ alternatives were a different U.S. vacation or other discretionary spending in the United States, this expenditure figure represents only a transfer of revenues to the region where the cruise is operating from other U.S. regions. Most of the benefits described above will materialize only if NCL continues to operate cruise ships under the U.S. flag. 
However, as noted above, industry analysts question NCL’s ability to operate the interisland Hawaiian cruises profitably. Analysts speculate that these cruises might not be profitable since they will still have to compete with foreign-flag cruises with significantly lower operating costs than NCL, though on different itineraries. Analysts also expressed concern that NCL is deploying too much capacity for the uncertain Hawaiian market demand. According to Cruise Lines International Association, Hawaiian cruises generated only about 3 percent of the business in the North American cruise market in 2002. NCL plans to grow the Hawaiian market by 23 percent each year for the next 5 years, resulting in Hawaiian destinations comprising 6 percent of the North American cruise market by 2007. This plan is quite aggressive, considering that industry trade groups expect the cruise market in general to grow 10 percent each year. If NCL is not profitable operating the exempted vessels in the United States, analysts speculate that NCL will seek government approval to reflag the vessels and operate them in foreign trades. NCL could continue to serve the Hawaiian market with the reflagged vessels, if the itinerary included a stop at Fanning Island or another foreign port. In this case, the exclusive interisland cruise options for consumers would no longer be offered, jobs for U.S. crew and the associated tax revenue would be lost, and NCL would not be liable for U.S. corporate income tax. In addition, if NCL is unable to operate successfully under the U.S. flag in Hawaii, possibly the most desirable market protected under the PVSA, there will be further disincentive for any other cruise line to attempt to operate under the U.S. flag, thus limiting the potential development of the U.S.-flag cruise vessel fleet. 
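The growth-rate arithmetic behind NCL’s plan can be sketched with a short illustrative calculation; the rates are those cited above, while the simple compounding model is our own assumption:

```python
# Illustrative compound-growth check (our simplifying model, not NCL's):
# the Hawaiian segment's share of the North American cruise market if it grows
# 23 percent per year while the overall market grows 10 percent per year.
hawaii_share = 0.03   # Hawaiian cruises' share of the market in 2002
hawaii_growth = 0.23  # NCL's planned annual growth for the Hawaiian segment
market_growth = 0.10  # trade groups' expected annual growth for the whole market

for year in range(5):  # 2002 through 2007
    hawaii_share *= (1 + hawaii_growth) / (1 + market_growth)

print(f"Projected Hawaiian share of the market in 2007: {hawaii_share:.1%}")
# Under these simplified assumptions the share reaches roughly 5.2 percent, in the
# neighborhood of the 6 percent figure cited in the report; the gap may reflect
# rounding or different base-year assumptions.
```

The calculation illustrates why the plan is aggressive: the Hawaiian segment’s share roughly doubles only if it outgrows the overall market by about 12 percentage points every year for five consecutive years.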
Granting similar exemptions to ease entry into the domestic trade could lead to additional benefits for ports and port cities, the merchant marine, and consumers; however, it is unclear how many cruise lines would choose to enter if they were permitted to operate foreign-built ships under the U.S. flag. For certain unique itineraries, where foreign vessels cannot easily operate with a nearby foreign port, such as in Hawaii, one-way cruises in Alaska, or short 3- to 4-day itineraries on the east or west coasts, some potential exists for U.S.-flag ships to enter the market. However, there are substantial disincentives to operating under the U.S. flag due to (1) operating cost differentials between the would-be U.S.-flag entrant and foreign-flag ships that still offer somewhat similar itineraries, but include a foreign port, (2) labor conditions and ship requirements, and (3) uncertain market conditions. Moreover, entry by additional ships exempt from the U.S.-built requirement could have a negative impact on the U.S. shipbuilding industry and small U.S.-flag cruise ships, though these impacts are likely to be minimal if the U.S.-built requirement is waived only for large cruise ships. Ports and port cities, the merchant marine, and consumers could benefit if additional exemptions to the U.S.-built requirement led to new entrants providing U.S.-flag cruise service. Additional domestic cruises could create more activity for the ports and result in more jobs and increased spending in port cities. U.S.-flag ships also would employ U.S. seamen, adding to the base of trained maritime employees who could serve the country in a time of emergency. Moreover, potential entrants could offer more cruise options and new itineraries to consumers. For example, a 1997 study conducted for the California State Tourism Board found that with similar exemptions to operate foreign-built vessels under the U.S. 
flag, cruise lines could offer cruise itineraries on the California coast to smaller ports, such as Santa Barbara and Monterey, resulting in more tourist dollars in those areas. However, if new domestic cruises primarily replaced existing foreign-flag service, with minor itinerary changes caused by eliminating foreign ports-of-call, the benefits to ports, port cities, and consumers might be minimal. On the east coast, for example, Carnival currently offers cruises on a foreign-flag ship—round-trip from New York including stops in Boston, Massachusetts; Portland, Maine; and Canada. If U.S.-flag vessels replaced the foreign-flag vessels offering east coast cruises and had itineraries running from New York to Portland without the stop in Canada—but including the same ports-of-call as the former Carnival cruise—ports, port cities, and consumers would experience very little additional benefit from these cruises. Additional cruises to U.S. ports that foreign-flag vessels continue to serve and cruises to different U.S. ports than foreign-flag vessels currently serve are the only source of benefits to ports, port cities, and consumers. While some potential benefits exist, industry officials said that most cruise lines are not likely to enter the domestic market, even if they could build ships outside of the United States, because of operating cost differentials, different ship standards, and uncertain market conditions. As previously noted, U.S.-flag operating costs are significantly higher than foreign-flag operating costs. The wage differential is so great that an official from one cruise line stated that the cruise line would prefer to employ foreign workers for any non-U.S. domestic itineraries offered on a U.S.-flag ship. The official noted that it would be difficult to hire a separate seasonal U.S. crew to work on a U.S.-flag ship, which may operate domestic itineraries only at certain times of the year. U.S.-flag cruise ships also must meet U.S. 
building standards, which sometimes conflict with international standards. For example, an industry official cited different wiring configurations required on U.S. ships. One cruise line official stated that the cruise line he represents would not specially build a ship to comply with U.S. standards only to be able to operate the ship in domestic trade, given the existing operating cost differentials. Furthermore, cruise officials and industry analysts question whether U.S.-flag operations can be profitable, since lower-cost foreign-flag ships can serve similar itineraries and demand is unknown for domestic destinations. Despite all the expected difficulties and disadvantages, representatives of two cruise lines said they would explore entry into some domestic markets if they were given an exemption from the U.S.-built requirement. According to these representatives, they would consider testing the Alaskan and Hawaiian markets, and short coastal cruises, because of their unique attributes. In Alaska, one-way cruises are popular and currently cannot be offered from a U.S. port, such as Seattle, due to the PVSA. In Hawaii, the nearest foreign port adds at least 2 days of sailing time to the itinerary. Short coastal cruises on the east or west coasts are attractive because including a foreign port would lengthen the cruise. However, even these attractive markets have factors deterring U.S.-flag operations. Foreign-flag ships currently serve the one-way Alaskan trade embarking in Vancouver. These operators would still have a competitive advantage over U.S.-flag operators granted an exemption from the U.S.-built requirement and operating out of Seattle. While consumers might face an added land transportation cost to depart from Vancouver rather than Seattle, foreign-flag operators would continue to have a significant operating cost advantage over U.S.-flag ships and thus might offer lower prices. 
The price advantage of the foreign-flag ships is likely to offset the cost disadvantage to consumers of departing from Vancouver. Moreover, according to one industry analyst, the Port of Vancouver might respond to potential competition from the Port of Seattle by lowering its port fees to retain firms operating less costly foreign-flag ships. Hawaii’s long distance from most foreign ports creates an especially attractive opportunity for entry under the U.S. flag, but potential competitors would have to compete with an established operator, NCL, for unknown demand. In addition to NCL’s ability to offer wholly domestic cruises in Hawaii with the exemption, it has had an exclusive arrangement for its ships to stop at Fanning Island, the closest foreign port to Hawaii. With this exclusive agreement, NCL has been able to garner the largest market share of the Hawaiian trade. NCL intends to run three U.S.-flag ships and one foreign-flag ship regularly in Hawaiian itineraries. As noted previously, some industry analysts do not think consumers in the Hawaiian market can support NCL’s capacity increase; therefore, success might be difficult for any additional companies entering the market. In fact, one cruise line we spoke with is uncertain about continued operations, given the sales performance of its initial entry into the Hawaiian market. Finally, while short 3- or 4-day cruises along the east or west coasts of the United States may hold some attraction for would-be entrants, these cruises could still face lower-cost competition from foreign vessels offering similar itineraries with a foreign port included. In addition, while there are some smaller U.S. passenger vessels offering short coastal cruises, the potential demand for these cruises may not be substantial enough to sustain large cruise ships. Granting other cruise lines exemptions from the U.S.-built requirement without strict tonnage requirements could negatively affect the U.S. shipbuilding industry. 
If exemptions were granted only for large, overnight cruise vessels, the U.S. shipbuilding industry would face little, if any, impact given that no such ship has been completed in the United States since 1958. However, if the exemptions were broader, including small passenger ships, U.S.-flag operators of small cruise ships might purchase less expensive ships from foreign shipyards, exposing U.S. shipyards to foreign competition that is not subject to the same laws, regulations, and taxes. Another potential adverse effect of similar exemptions is the shift of passengers away from small U.S.-flag cruise lines to domestic cruises on larger U.S.-flag ships built in foreign shipyards. Small U.S.-flag vessels are built in the United States and operate under all U.S. laws. A major shift in their customer base could disrupt this segment of the cruise industry and negatively affect the shipyards that build these small vessels. However, industry analysts suggest that there is a very small likelihood that similar exemptions would affect the small cruise vessels because they serve different segments of the market. Small vessel operators view their products as boutique cruises, as compared to mass-market cruises on large vessels. These boutique cruises are often shorter voyages, including calls in small ports that large cruise ships cannot access due to their size. We provided the Departments of Homeland Security and Transportation with draft copies of this report for their review and comment. Both departments generally agreed with the findings in the report and provided technical clarifications, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretaries and other appropriate officials of the Departments of Homeland Security and Transportation. We also will make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected] or at (202) 512-2834. Additional GAO contacts and acknowledgments are listed in appendix III. To address the original intent of the Passenger Vessel Services Act (PVSA) and how pertinent rulings and decisions have affected the implementation of the law, we reviewed the PVSA, its amendments, and its administrative, legislative, and judicial history. We also reviewed several listings in the Customs Rulings Online Search System to see how the PVSA is currently interpreted, and we conducted interviews with officials from Customs and Border Protection (CBP) and the Coast Guard responsible for documentation of U.S. vessels and for enforcing the provisions of the PVSA, U.S. vessel documentation laws, and the Jones Act. To ascertain how the exemption provided to NCL might affect future rulings or interpretations on the PVSA, U.S. vessel documentation laws, or the Jones Act, we researched the legislative history of the PVSA, its prior amendments and exemptions, and pertinent CBP rulings to determine what impact they had on future rulings regarding the PVSA or the Jones Act. We also reviewed rulings regarding the PVSA to determine if any amendments or exemptions provided for under the Jones Act had any impact on them. Finally, we conducted interviews with agency officials about the implementation of maritime laws. To determine the potential effects of the exemption on competition in the passenger cruise industry, entry into the U.S. 
domestic market, the exemption’s broader economic effects, as well as the potential effects of granting similar exemptions, we reviewed studies on the economic impact of the cruise industry and competition in the industry and conducted interviews with officials from several cruise lines, industry associations, and a full range of cruise industry stakeholders, analysts, and experts. To understand the nature of competition in the industry, we reviewed a merger analysis conducted by the Federal Trade Commission (FTC) in 2002 that examined, in depth, competitive conditions in the North American cruise industry. We also interviewed officials and reviewed internal documents from cruise lines, including Norwegian Cruise Line, Carnival Cruise Lines, Royal Caribbean Cruise Lines, Radisson Seven Seas Cruises, Crystal Cruises, the former American Classic Voyages, and CruiseWest to get their perspectives on the nature of competition in the industry, the effects of the exemption on competition, and the potential of various domestic itineraries. We also spoke with several port authorities, individual U.S. shipyards, and industry financial analysts for further information on the broader economic effects of the exemption and the potential effects of granting similar exemptions. In addition, we gathered information on the capital and operating costs of foreign-flag vessels as compared with U.S.-flag vessels. Since most of these data are proprietary, we were unable to independently verify them because we have no authority to require access to the underlying data. However, we applied logical tests to the data and found no obvious errors of completeness or accuracy. Along with our use of corroborating evidence, we believe that the data were sufficiently reliable for our use. To analyze the effects of the exemption on the potential for entry into the U.S. domestic market, we spoke with industry financial analysts and experts, including officials at American Marine Advisors, G.P. 
Wild, and J.P. Morgan Chase to obtain perspectives on whether financing for a U.S.-built vessel would be more difficult to obtain now that the exemption has been granted. We also spoke with officials within the Maritime Administration to ascertain whether applications or approvals for federal loan guarantees for building large passenger vessels had waned or would be more difficult to obtain as a result of the exemption. We also spoke with officials from the cruise lines and an official representing smaller U.S.-flag vessel operators to get their perspectives on the potential for entry into the U.S. domestic cruise market. To determine the extent of NCL’s capital cost advantage under the exemption, we obtained estimates of the final cost to build the first of the exempted vessels from the General Disclosure statement under the Stock Exchange of Hong Kong of Star Cruises Limited, NCL’s parent company. We were unable to independently verify these costs because we have no authority to require access to the underlying data. However, we confirmed the accuracy of these figures with officials within NCL and by comparing the figures with publicly available data on the costs of vessels of similar size completed for other cruise lines. We then compared these costs to the original project costs to build the Project America vessels in a U.S. shipyard. We converted all figures to 2003 dollars using the producer price index for ship and boat building and repairing prepared by the Bureau of Labor Statistics. 
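The deflation step just described reduces to scaling each nominal figure by a ratio of index values. The sketch below is purely illustrative; the function name and index values are hypothetical stand-ins for the actual BLS series used in the report.

```python
# Illustrative only: restate a nominal cost in 2003 dollars using a
# producer price index. Index values here are invented for the example;
# the report used the BLS index for ship and boat building and repairing.
def to_2003_dollars(nominal_cost, ppi_in_year, ppi_2003):
    """Scale a cost by the ratio of the 2003 index to the index
    in the year the cost was incurred."""
    return nominal_cost * (ppi_2003 / ppi_in_year)

# A $400 million cost incurred when the index stood at 120, restated
# with a hypothetical 2003 index of 132, becomes about $440 million.
restated = to_2003_dollars(400_000_000, ppi_in_year=120.0, ppi_2003=132.0)
```

The same ratio applies to every figure being compared, so costs incurred in different years end up expressed in a common base year.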
We also obtained additional perspectives on the potential economic effects of the exemption and of possible additional exemptions from various industry associations, including the International Council of Cruise Lines, Cruise Lines International Association, the Passenger Vessel Association, the American Shipbuilding Association, and the American Society of Travel Agents, as well as officials from the Maritime Cabotage Task Force, the Maritime Trades Department of the AFL-CIO, American Maritime Officers, and the Seafarers International Union. We conducted our work from August 2003 through February 2004 in accordance with generally accepted government auditing standards. Since NCL’s vessels will be undertaking domestic travel under the U.S. flag, NCL will subject itself to numerous other U.S. laws in the areas of tax, labor, immigration, and the environment, as well as to the Americans with Disabilities Act. These U.S. laws do not usually apply to foreign-flag cruise lines because their itineraries are in international waters, either because they include a distant foreign port when traveling between U.S. ports or a nearby foreign port when the voyage is a round trip from one U.S. port, and thus international rather than U.S. laws apply. Because NCL’s U.S.-flag Hawaiian operations—operated by its U.S. subsidiary NCL America—will be involved in domestic trade, income derived from those operations would be taxable under the U.S. tax code. The Internal Revenue Code has special rules for “transportation income.” If the transportation income is attributable to transportation that begins and ends in the United States, it is treated as income derived from sources in the United States and therefore fully taxable. If the transportation begins or ends in the United States, but not both, 50 percent of the transportation income is treated as income derived from sources in the United States. However, the Internal Revenue Code, under 26 U.S.C. 
883, also excludes from the gross income of foreign corporations income derived from the international operation of vessels if their home countries grant an equivalent exemption from paying taxes to U.S. corporations. Therefore, the income earned from foreign-flag vessels operated by foreign corporations operating cruises in the United States may not be subject to U.S. corporate income tax. If NCL operates vessels in domestic trade, those vessels will become subject to U.S. labor and documentation laws, which, among other things, require that the officers and unlicensed seamen on a U.S.-flag ship be U.S. citizens or documented aliens with permanent residence in the United States, and that the crew be subject to minimum wage and collective bargaining laws. U.S. documentation laws under 46 U.S.C. 8103(a) require that only U.S. citizens serve as the master, chief engineer, radio officer, and officer in charge on a U.S. documented vessel. Also, unlicensed seamen must be citizens of the United States, except that not more than 25 percent of them may be aliens lawfully admitted to the United States for permanent residence. Under the Fair Labor Standards Act, minimum wage laws would apply to the crew, and they would be allowed to engage in collective bargaining under the National Labor Relations Act. In addition, where applicable, higher state minimum wage laws would apply. For example, U.S.-flag interisland Hawaii cruise operations will be subject to the state’s $6.25/hour minimum wage, which is $1.10 higher than under federal law. In addition, crewmembers on U.S.-flag vessels are subject to tax at the federal, state, and local levels. NCL’s U.S.-flag ships will have to adhere to U.S. Coast Guard-approved vessel construction and safety standards. As a general rule, foreign vessels operating in U.S. waters need only comply with international construction and safety standards, as opposed to the often more rigorous U.S. standards. 
An international treaty, the Safety of Life at Sea Convention, sets forth international construction and inspection standards. A foreign vessel from a country that is a signatory to the Convention would be subject to U.S. inspection only as to the vessel’s propulsion and lifesaving equipment. Finally, according to several industry experts and representatives, the application of the Americans with Disabilities Act could have significant cost implications for vessels operating in the U.S. domestic trade because of requirements to make the vessels accessible to passengers with disabilities. However, NCL executives stated that these requirements would not add significant costs to their ships, because even their foreign-flag ships adhere to high standards in this regard. The GAO staff that worked on this report dedicate it to their late colleague, Ryan Petitte, in recognition of the valuable contributions he made. Other key contributors include Jay Cherlow, Michelle Dresben, Sarah Eckenrod, Colin Fallon, David Hooper, Ron Stouffer, and Andrew Von Ah. 
No large U.S.-flagged cruise ships (ships registered in the U.S. that are U.S.-built, U.S.-owned, and U.S.-crewed) are in operation. Foreign-flagged vessels cruising to foreign ports serve most of the U.S. demand for cruises. However, Norwegian Cruise Line (NCL) recently obtained an exemption from U.S. maritime law to operate three foreign-built ships under the U.S. flag in Hawaii. Cruise lines and others have raised concerns over the advantage the exemption might confer to NCL, since foreign-flagged competitors are unable to offer the same itineraries due to the Passenger Vessel Services Act (PVSA), which prevents foreign vessels from transporting passengers solely between U.S. ports. Concerns have also been raised over the effect this exemption might have on future attempts to grow the U.S.-flag cruise vessel fleet, since potential U.S.-flag competitors would need to build ships in the United States, presumably at higher cost. GAO was asked to (1) review the original intent of the PVSA and rulings and decisions regarding it, (2) determine if the exemption will affect the implementation of the PVSA or other maritime laws, (3) assess the potential effects of the exemption on competition and entry into the U.S. domestic cruise market, and (4) assess the potential economic effects of granting other cruise lines similar exemptions. The Departments of Homeland Security and Transportation generally agreed with the findings in this report. The original intent of the PVSA, enacted in 1886, was to protect the U.S. maritime industry from foreign competition by penalizing foreign vessels that transport passengers solely between U.S. ports. However, several rulings and decisions interpreting the PVSA have allowed itineraries for foreign cruise vessels between U.S. ports that were previously restricted. For example, voyages by foreign vessels between two U.S. ports that include a distant foreign port, and round trip voyages from U.S. 
ports that include a nearby foreign port and other U.S. ports, do not violate the PVSA. NCL's exemption will likely have little impact on how the PVSA or other maritime laws are administered or interpreted because it is specific to three NCL vessels and cannot be applied to any other vessels in any other areas. The exemption effectively gives NCL a monopoly on interisland Hawaiian cruises--providing consumers with itineraries that were previously unavailable. However, NCL will likely have little power to raise prices on these itineraries because of competition from other vacation options. Because NCL is able to operate foreign-built ships in Hawaii, the exemption provides an additional obstacle for any potential U.S.-flag competitor to enter that market, since that competitor would need to build the ship in the United States at a higher cost. However, independent of the exemption, there were and still are other substantial obstacles for any potential U.S.-flag cruise vessel due to the higher capital and operating costs (e.g., labor costs) associated with the U.S. flag, as compared with existing foreign-flag cruise vessels offering itineraries through a foreign port. Granting additional exemptions to ease entry into the domestic trade could lead to benefits for port cities, U.S. seamen, and consumers; however, it is unclear how many cruise lines would choose to enter even if they were permitted to operate foreign-built ships under the U.S. flag, because of the higher operating costs associated with a U.S.-flag carrier operating in domestic itineraries and because of uncertain market conditions.
Effective tax rates on corporate income can be defined in a variety of ways, each of which provides insights into a different issue. These rates fall into two broad categories—average rates and marginal rates. An average effective tax rate, computed as the ratio of taxes paid in a given year over all of the income the corporation earned that year, is a good summary of the corporation’s overall tax burden during that particular period. In comparison, a marginal effective tax rate focuses on the tax burden associated with a specific type of investment (usually over the full life of that investment) and is a better measure of the effects that taxes have on incentives to invest. There is likely to be some correlation between average effective tax rates, marginal effective tax rates, and statutory tax rates across countries. In the remainder of the report, unless we specify otherwise, we use the term effective tax rate to mean an average effective tax rate. Important methodological decisions to make when computing effective tax rates on corporate income are the scope of the corporate taxpayer to study and what measures of taxes and income to use. These decisions are ultimately driven by both conceptual considerations and data availability. These considerations will be different, depending on whether one is estimating separate effective tax rates on domestic income and foreign income or simply a single effective tax rate on worldwide income. Our various estimates and those of others that we present below are based on the same fundamental definition of an average effective tax rate but reflect variations in scope and data as appropriate for the different populations being examined. Large U.S. corporate taxpayers are often complicated groups of separate legal entities. A parent corporation may directly own (either wholly or partially) multiple subsidiary corporations. 
In turn, these subsidiaries may own other corporate subsidiaries, and any of these corporations may own stakes in partnerships. A domestic parent corporation (one that is organized under U.S. laws) may head a large group of affiliated businesses that includes both domestic and foreign subsidiaries and partnerships. The timing of when these various entities pay U.S. tax on their income and the tax return on which their income and taxes are reported varies depending on both the location of the entities and choices made by the parent corporation. These timing and reporting differences, which are summarized in table 1 and table 2, matter in the estimation of effective tax rates. In particular, the fact that the income of a controlled foreign corporation (CFC) is not reported or taxed on a U.S. return until it is recognized under Subpart F or repatriated in the form of dividends means that an effective tax rate estimate based solely on income reported for tax purposes would not reflect the tax treatment of a significant component of the income of MNCs. This limitation is one reason why prior analysts have used income reported on financial statements, rather than tax-reportable income, when computing effective tax rates. Two aspects of the U.S. tax treatment of foreign income lead to much lower U.S. tax burdens on foreign income than on domestic income, which is one reason why it makes sense to look at these effective tax rates separately. The first aspect is the aforementioned deferral of tax on the income of CFCs generally until that income is repatriated. The second aspect is the foreign tax credit, which is designed to prevent the double taxation of foreign income (once by the government of the country in which the income is earned and once by the United States). In effect, the United States taxes the foreign income only to the extent that the U.S. corporate tax rate exceeds the foreign rate of tax on that income. 
If the foreign rate of tax is equal to or exceeds the U.S. rate, the United States collects no tax on that income. Department of the Treasury tax regulations generally effective since January 1, 1997, have an important influence on some of the effective tax rate estimates and data on business activity location that we present below. These regulations, commonly known as check-the-box rules, permit corporate groups to treat a wholly owned entity as a separate corporation or to “disregard” it as an unincorporated branch simply by checking a box on a tax form. Taxpayers have used this flexibility to create “hybrid entities,” which are business operations treated as corporations by one country’s tax authority and as unincorporated branch operations by another’s. Hybrid entities can be used in a variety of ways for tax-planning purposes. In one example, a U.S. MNC can put substantial equity into a finance subsidiary located in a low-tax country. That subsidiary then can lend money to an affiliate in a high-tax country to finance most of the latter’s operations. The high-tax affiliate makes tax-deductible interest payments to the finance subsidiary, which will pay a low rate of tax on this interest income. Prior to the check-the-box rules, the interest income of the finance subsidiary would have been subject to U.S. tax on a current basis under the subpart F rules. Now, however, the taxpayer can, in certain circumstances, treat the high-tax affiliate as an unincorporated branch of the low-tax subsidiary, so the interest payment is not recognized as a transaction for U.S. tax purposes. Because the income is no longer subject to current tax under subpart F, the United States taxes it only if it is repatriated. The American Jobs Creation Act of 2004 provided a temporary incentive for U.S. MNCs to repatriate income from their CFCs. 
The act allowed recipients to make a special, one-time election to deduct 85 percent of “extraordinary” dividends received from CFCs during either the recipient’s last tax year beginning before October 22, 2004, or its first tax year beginning after that date, provided that the CFCs’ dividends were not funded by money borrowed from their U.S. shareholders and provided that the repatriated funds were used for allowable domestic investments. Dividends were extraordinary to the extent that they exceeded the average dividends that the shareholder received from its CFCs over the previous 5 years (disregarding the highest and lowest amounts out of those 5 years). IRS tracked the amount of qualified dividends repatriated under this provision and found that 843 corporate owners of CFCs reported the receipt of $312.3 billion in qualified dividends from tax years 2004 through 2006. Only $9.1 billion of this total was repatriated during tax year 2004, the year on which most of our data analyses are based. At various points below we discuss how this tax provision may make some of our specific results for 2004 differ from those of surrounding years. Publicly traded corporations are required to produce financial statements according to guidelines established by the Financial Accounting Standards Board. The income reported in these financial statements (commonly known as book income) differs in important ways from the income that the corporations report on their federal tax returns. One key difference is that book income will include a parent corporation’s share (in proportion to its ownership share) of all of the income of all subsidiaries, both domestic and foreign, in which it has at least a 20 percent ownership stake. 
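The “extraordinary” dividend baseline described above is a trimmed average: the mean of the prior 5 years’ dividends after disregarding the highest and lowest years. The sketch below is only an illustration of that averaging rule; the function name and all figures are hypothetical.

```python
# Hypothetical illustration of the "extraordinary" dividend baseline:
# the average of the prior five years' dividends, disregarding the
# highest and lowest of the five.
def extraordinary_portion(current_dividend, prior_five_years):
    """Portion of the current dividend exceeding the trimmed average
    of the prior five years' dividends (all figures in, say, millions)."""
    assert len(prior_five_years) == 5
    trimmed = sorted(prior_five_years)[1:-1]  # drop lowest and highest
    baseline = sum(trimmed) / len(trimmed)
    return max(0.0, current_dividend - baseline)

# Prior-year dividends of 10, 40, 20, 30, and 50 give a baseline of
# (20 + 30 + 40) / 3 = 30, so a dividend of 100 is "extraordinary"
# to the extent of 70.
excess = extraordinary_portion(100.0, [10, 40, 20, 30, 50])
```

Only the portion above the trimmed-average baseline qualified for the 85 percent deduction.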
Other differences arise because income reported for tax purposes reflects the effects of various incentives and disincentives embedded in the tax code (such as accelerated depreciation to encourage investment and limits on deductible compensation to discourage excessive payments). In the early 1980s, the Joint Committee on Taxation developed an approach for using book income and taxes to estimate effective tax rates of foreign taxes on foreign-source income, U.S. taxes on domestic income, and worldwide tax on worldwide income. A limitation of this approach was that the book measures of taxes did not allow a distinction between U.S. taxes paid on domestic income and the U.S. residual tax on foreign- source income. This limitation can be overcome by using data from Schedule M-3 of the federal tax return, which just recently became available to researchers. Beginning with tax year 2004, U.S. domestic corporations with assets of $10 million or more are required to include the Schedule M-3 in their tax returns. This schedule requires taxpayers to provide a more detailed reconciliation of their book income and their tax income than what was required in earlier years. Data from the Schedule M-3 allow for the computation of effective tax rates, with some limitations, that use book measures of income and taxes actually reported on returns. As a result, one can take advantage of the broader scope of foreign-source income reported in financial statements and the more detailed information on taxes paid, which permits a separation of U.S. taxes paid on domestic and foreign income. However, some data limitations remain (these are discussed in detail in app. I). The most significant limitation is that the data do not permit a comprehensive measurement of foreign income without some double counting of income. This limitation is best addressed by estimating a range of effective tax rates for foreign income using alternative measures of income. 
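The bounding approach just described can be sketched in a few lines. The dollar figures below are hypothetical, chosen only to show how dividing the same residual tax by a broader or a narrower income measure brackets the true effective rate.

```python
# Hypothetical sketch: the same residual U.S. tax divided by a broad
# income measure (risking double counting) understates the effective
# rate, while dividing by a narrow measure (omitting some income)
# overstates it; the true rate lies between the two.
def effective_rate_bounds(residual_us_tax, broad_income, narrow_income):
    lower = residual_us_tax / broad_income   # lower bound on true rate
    upper = residual_us_tax / narrow_income  # upper bound on true rate
    return lower, upper

# With $4 of residual tax against income measured as $102 or as $95,
# the true effective rate is bracketed between roughly 3.9 and
# 4.2 percent.
lo_rate, hi_rate = effective_rate_bounds(4.0, broad_income=102.0,
                                         narrow_income=95.0)
```

The narrower the gap between the two income measures, the tighter the resulting bounds on the true rate.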
The most inclusive measure is likely to contain some double counting and, therefore, cause an understatement of the effective rate. The least inclusive measure avoids double counting but will leave out some income that should be included, causing an overstatement of the effective rate. The true effective tax rate should be between the upper and lower bound of this range. The weighted average U.S. effective tax rate on the domestic income of large corporations with positive domestic income in 2004 was 25.2 percent, while the median effective tax rate for this population of corporations was 31.8 percent. However, as figure 1 shows, underlying these two summary measures was considerable variation in effective tax rates across taxpayers. At one extreme, 32.9 percent of the taxpayers, accounting for 37.5 percent of income, had average effective tax rates of 10 percent or less; at the other extreme, 25.6 percent of the taxpayers, accounting for 14.8 percent of income, had effective tax rates over 50 percent. The average effective tax rates for the remainder of the taxpayers were fairly evenly distributed between these two extremes. In order to address limitations in the available income data, we estimated the residual U.S. average effective tax rate on foreign-source income using three alternative income measures. Our estimates of the weighted average effective tax rates for large taxpayers with positive foreign income ranged from 3.9 percent to 4.2 percent, depending on which income measure we used. The true weighted average should fall somewhere within this range. The residual U.S. average effective tax rates on foreign income are very low for a combination of reasons that make this measure conceptually quite different from our effective tax rate on domestic income. First, in cases where a U.S. MNC has paid foreign income taxes at a rate that is close to or equal to the U.S. tax rate, the U.S. foreign tax credit eliminates most or all of the U.S. 
tax liability on that corporation’s foreign-source income. Second, in many cases a substantial portion of the foreign-source income earned by U.S. MNCs is not taxed until it is repatriated to the United States. The denominator of our tax rate reflects all of the foreign income that was earned in 2004, but the numerator includes only taxes that were actually paid in 2004. Consequently, the numerator does not include any tax on nonrepatriated 2004 income; however, it does include tax on repatriated dividends paid out of income that CFCs earned prior to 2004. It is important to recognize that tax deferral does not necessarily mean that the tax will never be paid. Figure 2 presents estimates for the distribution of effective tax rates that are based on our broadest income measure. The distributions of effective tax rates based on our other income measures did not look dramatically different. Approximately 80 percent of the large taxpayers with positive foreign income, accounting for about 30 percent of that population’s total foreign income, paid no federal income tax on that income. An additional 8.5 percent of this population, accounting for about 52 percent of the foreign income, had positive average effective U.S. tax rates of 5 percent or less. Less than 10 percent of this population had effective tax rates over 10 percent. The taxpayers with the higher effective rates may have had relatively high ratios of repatriations over current-year income from their CFCs, or the dividends that they repatriated may have been paid out of income earned in relatively low-tax locations. Due to the incentives under the American Jobs Creation Act of 2004, the ratio of repatriations to CFC income may have been different in 2004 than it was in surrounding years. Some U.S. 
MNCs may have delayed repatriations in the year or two prior to the year in which they made a one-time “extraordinary” dividend payment, so that their repatriations first were lower than normal, then became higher than normal. The timing of this behavior could have varied across firms, depending on when their management became sufficiently confident that the tax preference would be enacted, the timing of their tax years, and other factors. The IRS data on repatriated income presented earlier suggest that the 2004 ratio of repatriations is likely to be lower than the ratio for 2005 and, perhaps, 2006. The effects of these differences on the average effective rates of tax on foreign-source income in all of those years are uncertain. On the one hand, a higher rate of repatriation would mean that more of the CFCs’ income would become subject to U.S. taxation in that year; on the other hand, the temporary deduction would effectively exclude 85 percent of the repatriations from U.S. taxable income. We estimated the effect of federal income tax credits (other than the foreign tax credit) on U.S. average effective tax rates by computing rates before and after the inclusion of the credits. We found that these credits reduced the precredit tax liabilities on domestic income by a weighted average of 1.7 percentage points (from 26.9 percent to 25.2 percent). We also found that tax credits reduced the precredit tax liabilities on foreign-source income by a weighted average of 0.8 percentage points. These estimates indicate the extent to which tax preferences in the form of tax credits reduce corporate tax burdens. We have no way to precisely measure the effects of other forms of tax preferences, such as exemptions or accelerated depreciation. 
These other forms of preferences explain some of the differences between the precredit effective tax rates shown in figure 1 and the 35 percent statutory rate; however, differences between book and tax income that are not tax preferences also account for part of the gap. The U.S. average effective tax rates that we presented above do not reflect the taxes that U.S. businesses pay on their foreign-source income to foreign governments. The effective rates of foreign tax are likely to be one of several factors that influence the specific location of U.S. business activity abroad. Economists have used different approaches to estimate these effective foreign taxes. Each of these approaches has limitations; however, when used in combination, these approaches provide broadly consistent effective tax rate rankings for many important locations of U.S. business activity. One estimation approach used by researchers with access to IRS tax data has been to compute effective rates of tax paid by U.S. CFCs as the ratio of the total income taxes that a CFC pays on its worldwide income, divided by that worldwide income. The income from CFCs represents a significant component of U.S. businesses’ foreign-source income. We used IRS data on CFCs for 2004 to estimate that the average combined (U.S. and foreign) effective tax rate on the worldwide income of CFCs (excluding those that had negative income) was 16.1 percent. One limitation of this estimation approach is that when aggregated, the CFC income data double counts income earned by lower-tier CFCs that is distributed to higher-tier CFCs in the form of dividends. We computed a separate effective tax rate for manufacturing CFCs only, which exclude holding companies that may be used to accumulate income from lower-tier CFCs. We found that the rate for manufacturing CFCs, at 15.4 percent, was actually lower than the rate for all CFCs. One possible explanation for this result is that, if U.S. 
MNCs do route substantial amounts of dividends to holding company CFCs, the dividend-paying businesses may be hybrid entities that are disregarded for U.S. tax-reporting purposes rather than CFCs themselves. That practice would make sense from a tax-planning standpoint. Under such an arrangement, the income of the hybrid entities would not be reported separately in the IRS data we used; it would be counted only once, as part of the income of a higher-tier CFC. Our estimate for the effective rate of tax on manufacturing CFCs is significantly lower than the 21 percent effective rate that Altshuler and Grubert estimated for manufacturing CFCs for tax year 2001. Those authors noted that the effective tax rate has declined steadily from 33 percent in 1980. Our estimate suggests that effective rates may have continued to drop since 2001. This decline predominantly represents a reduction in the amount of tax paid to foreign governments, not to the United States. Altshuler and Grubert conclude that a significant portion of the effective tax rate reduction may be attributable to the increased tax- planning flexibility that U.S. MNCs have enjoyed since the introduction of the check-the-box rules. Oosterhuis (2006) points to Altshuler and Grubert’s recent estimates as evidence of how the check-the-box rules have enabled U.S. MNCs to reduce their payments of foreign taxes. Oosterhuis notes that, although a reduction in foreign taxes may make U.S. MNCs more competitive overseas against foreign MNCs, it also makes foreign investment by U.S. MNCs more attractive relative to investment in the United States. Another approach for estimating the effective tax rate on the foreign- source income of U.S. businesses is to use BEA’s data on the operations of U.S. MNCs, which includes the amount of net income earned and foreign taxes paid by foreign affiliates of these MNCs. In the case of U.S. 
majority-owned foreign affiliates, the BEA data permit one to compute net income with and without equity income. The latter measure of income eliminates some important forms of double counting (discussed below). An unavoidable limitation of BEA’s foreign affiliate income measure for the purposes of estimating effective tax rates is that it includes negative values for affiliates that incur losses. As a consequence, when the income data are aggregated at the country level or for the full population, the net value will be lower than the aggregate income of just those businesses that are profitable. In the absence of any offsetting factors, effective tax rates that have this income measure as the denominator will overstate the rates that profitable businesses pay. Using data from BEA’s 2004 benchmark survey, we estimate that the average effective tax rate on foreign affiliates was 28.7 percent, significantly higher than our estimate based on CFC data. Although the CFC data may be preferable to the BEA data for estimating an overall average effective tax rate for the foreign operations of U.S. MNCs, the former data provide an imperfect basis for estimating average effective tax rates for specific countries. Although the CFC data can be aggregated by principal place of business, the allocation of income and taxes paid by principal place of business is not perfectly correlated with where the income and taxes of the CFCs are actually earned and paid because some CFCs earn income and pay taxes in multiple locations. The growing use of hybrid entities has likely reduced this correlation, particularly for CFCs located in countries that are favored locations for accumulating income. Some hybrids may formerly have been CFCs with separate U.S. tax filing requirements that indicated where their principal operations were located. Now, as hybrids, their income and tax data would not be separated from that of the CFCs into which they have become absorbed for U.S. 
tax-reporting purposes. Consequently, the data for those hybrids are now associated with the country where the CFC has its principal operations, rather than where the hybrid has its own operations. In contrast, the BEA data treat the disregarded hybrid entities as separate affiliates, and their data are associated with the countries where their physical assets are located or where their primary activities are carried out. An important exception to this general treatment applies in the case of holding companies. When a corporation has physical assets or operations in multiple foreign countries, it is classified as a holding company and the assets assigned to its country of incorporation include the equity that it holds in the operations in the other countries. Those outside operations are reported as separate foreign affiliates, so when the BEA data are aggregated there is some double counting of assets. Figure 3 compares the three effective tax rates we estimated for 17 of the most important foreign locations of U.S. MNC operations, based on their shares of various measures of U.S. business activity. In most cases the effective tax rates based on BEA data are higher than those based on either set of CFC data. Despite the variation in results from the three different measures, one subset of countries (shown in the top panel) can be identified as having relatively low effective rates of tax on the U.S. business operations located there. Similarly, a subset of countries (shown in the middle panel) has relatively high rates (over 18 percent) by any of the three measures. Of the remaining four countries, Australia is near the boundary between high and low effective rates by all three measures, the Netherlands and the United Kingdom are shown to have low effective tax rates according to the CFC data but high rates according to the BEA data, and Luxembourg appears to have very low overall effective tax rates, but not for manufacturing CFCs. 
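The CFC-based country rates above are, as described in the methodology discussion later in this report, computed by aggregating the income taxes paid and the positive pretax earnings of all CFCs reporting the same principal place of business and then taking the ratio. A minimal sketch of that aggregation, using hypothetical CFC records rather than actual Form 5471 data:

```python
from collections import defaultdict

# Hypothetical CFC records: (principal place of business, pretax earnings
# and profits, income taxes paid). All values are invented for illustration.
cfcs = [
    ("Ireland", 900.0, 70.0),
    ("Ireland", 300.0, 30.0),
    ("Germany", 400.0, 140.0),
    ("Germany", -50.0, 0.0),  # negative earnings: excluded, per the methodology
]

taxes = defaultdict(float)
earnings = defaultdict(float)
for country, pretax, tax in cfcs:
    if pretax > 0 and tax >= 0:  # keep positive earnings, nonnegative taxes
        taxes[country] += tax
        earnings[country] += pretax

# Effective rate per country: aggregate taxes over aggregate positive earnings.
rates = {c: 100 * taxes[c] / earnings[c] for c in earnings}
for country, rate in sorted(rates.items()):
    print(f"{country}: {rate:.1f} percent")
```

The record with negative earnings is dropped before aggregating, mirroring the report’s restriction to CFCs with positive pretax earnings and profits and nonnegative foreign taxes paid.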
Later in the report we show how the distribution of activity by foreign affiliates of U.S. MNCs differs across these three groups of countries. We were not able to disaggregate the worldwide income of U.S. corporate taxpayers by character of income with the data that were available. However, we were able to present such a disaggregation for an important form of income: the foreign-source income that was subject to the federal income tax (prior to the application of foreign tax credits) in 2004. Figure 4 shows that no single form of income predominates. “Grossed-up” dividend income, the largest type of income, accounted for 24.6 percent of this foreign-source income. The next most important type of income (that could be broken out separately) was that from foreign branch operations (direct foreign operations of U.S.-based corporations that were not established as separate legal entities) with a 20.2 percent share, followed by rents, royalties, and license fees with a 16.5 percent share. The various estimates of effective rates of tax that we have presented up to this point have covered only U.S. businesses (those that are incorporated in the United States or whose parent corporations are). We reviewed the relevant economic literature to determine what information is available about effective tax rates imposed on all corporations based in specific foreign countries. We identified four studies that used corporations’ financial statement information to compare the average effective tax rates corporations pay across multiple foreign countries. The studies we identified estimated rates of total worldwide taxes paid on total worldwide income for corporations based in countries in the European Union and in Canada, the United States, Japan, and Australia. 
The two studies that covered corporations based in the European Union during the 1990s reported similar rankings of countries by average effective tax rates, although exact estimates varied across the alternative income measures used (see fig. 8 in app. IV). Ireland and Austria had the lowest rates at around 20 percent or less, while Italy and Germany, with rates over 35 percent, had the highest. The two other studies, which covered limited selections of countries, suggested that effective tax rates in the United States, the United Kingdom, Germany, France, and Australia were within 5 percentage points of each other, while Canada had a significantly lower rate and Japan a significantly higher rate. A comparison of the country rankings based on these estimated effective tax rates for all corporations and the rankings based on our estimates of effective rates for U.S. CFCs and other foreign affiliates of U.S. MNCs reveals both consistencies (low rates for Ireland and high rates for Italy and Japan) and inconsistencies (in the cases of the Netherlands and the United Kingdom). Business activity can be measured in a variety of ways and the location of these activities can be influenced by numerous factors, with certain factors having greater influence on some activities than on others. For example, taxes, wage rates, the availability of skilled labor, and proximity to natural resources or to final product markets can all influence where businesses decide to locate production facilities; however, wage rates are likely to be particularly important for the location of low-skilled, labor-intensive operations, while access to a highly educated workforce may have greater influence on the location of scientific research activities. 
Tax regimes—both those of the United States and of foreign countries—will have some influence over where business activity is actually located; however, they also provide some incentive for businesses to report net income as coming from locations other than where factors of production, such as labor and physical capital, actually generated the income. This shifting of income may be reflected in income data that BEA and other agencies gather from businesses as well as in data on related items, such as sales and value added. In contrast, measures such as physical assets, employment, and compensation are less likely to have their reported locations distorted by tax considerations. These practices make it difficult to determine the extent to which the distribution of some of the business activities that we present below reflects the actual, as opposed to just the reported, location of the activities. Figure 5 shows the trends across the last four BEA benchmark studies of U.S. MNC operations (1989–2004) for six key measures of business activity: value added, sales, physical assets, compensation of employees, number of employees, and pretax income excluding income from equity investments. Each bar in the graph shows how the aggregate amount of a particular activity was divided between operations of U.S. parent corporations (including any of their domestic subsidiaries) and the operations of the majority-owned foreign affiliates of those parent corporations. Business activity by all measures increased in absolute terms both domestically and abroad during this period, but the relative share of activity that was based in foreign affiliates increased. Nevertheless, as of 2004, over 60 percent of the activity (by all six measures) of U.S. MNCs remained located in the United States. Figure 6 compares the division of activity between U.S. and foreign operations across the three largest industries—manufacturing, finance and insurance (excluding depository institutions), and wholesale trade. 
The height of each bar in the figure represents the industry’s share of total worldwide activity of U.S. MNCs. The division of each bar indicates how that particular measure for the industry is divided between U.S. and foreign operations. Manufacturing accounts for the largest share of all six measures of activity. Among these three industries, finance and insurance has the lowest share of its activity (by all measures) located abroad, while wholesale trade generally has the largest share (except for physical assets). For example, only 19 percent of employment in finance and insurance was located abroad in 2004, while 36.2 percent of manufacturing employment and 42.9 percent of wholesale employment were located in foreign operations that year. We can track activity by industry consistently back to 1999 only (due to a change in industrial classifications prior to 1999). The most significant difference between these three industries’ shares of overall activity in 1999 and what is shown for 2004 is that manufacturing’s share of total value added, physical assets, and pretax net income (excluding income from equity) all declined by 4 to 5 percentage points during that interval. At the same time, the proportions of manufacturing’s value added, physical assets, and pretax net income that were located abroad increased from an average of about 25 percent to an average of about 30 percent. There were no significant changes in the shares of the finance and insurance industry. The only significant change in the wholesale trade industry is that its share of total pretax net income (excluding income from equity) increased by 6 percentage points from 1999 to 2004. Figure 7 clearly reveals a relationship between effective tax rates and the size of a country’s income shares relative to its shares of the other measures of business activity. The figure shows the share of the various measures of U.S. 
multinational business activity in 2004 for the 17 important foreign locations that we presented in figure 3. The measures include the five nonincome statistics from the previous figures (shown by the darker bars) plus three measures of net income (shown by the lighter bars). The first two income measures are pretax net income from the BEA data, excluding and including income from equity investments. The third income measure is net earnings and profits from the CFC data. With the exception of China, all of the countries with relatively low effective rates of tax have income shares that are significantly larger than their share of the three measures least likely to be affected by income-shifting practices: physical assets, compensation, and employment. This relationship holds for all three income measures. In contrast, all of the countries with relatively high effective tax rates, except for Japan, have income shares that are smaller than their shares of physical assets, compensation, and employment. Of the four countries with a mix of both high and low estimated effective tax rates, the United Kingdom bears a similarity to the high-tax pattern and Luxembourg to the low-tax pattern, while Australia is balanced across all eight measures. The Netherlands has a balanced pattern when income is measured in terms of the BEA data without equity income; however, it has an extremely large proportion of equity income relative to other types of net income. Luxembourg, the United Kingdom Caribbean Islands (and, to a lesser extent, Bermuda and Switzerland) also have significant shares of income from equity investments. IRS data on dividends repatriated by U.S. MNCs claiming the temporary dividend deduction indicates that the Netherlands, Switzerland, and Bermuda were the three largest sources of such repatriations. Luxembourg and the Cayman Islands were also among the top eight sources (along with Ireland, Canada, and the United Kingdom). 
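The comparison underlying figure 7 amounts to a simple ratio: a country’s share of U.S. MNC foreign income relative to the average of its shares of the three shifting-resistant measures (physical assets, compensation, and employment). The sketch below uses hypothetical shares; the country labels and numbers are invented for illustration and are not BEA figures:

```python
# Hypothetical country shares, in percent of the worldwide totals:
# (income share, physical assets share, compensation share, employment share)
countries = {
    "LowTaxHaven": (12.0, 2.0, 1.5, 1.0),
    "HighTaxCountry": (6.0, 10.0, 11.0, 9.0),
}

ratios = {}
for name, (income, assets, comp, empl) in countries.items():
    # Average of the three measures least affected by income shifting.
    activity = (assets + comp + empl) / 3
    ratios[name] = income / activity

for name, ratio in ratios.items():
    note = ("income share exceeds activity share" if ratio > 1
            else "income share at or below activity share")
    print(f"{name}: {ratio:.1f} ({note})")
```

A ratio well above 1 is the pattern the report associates with low-tax locations; a ratio below 1 matches the high-tax pattern.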
Income from equity investments was not prominent in any of the 17 countries in 1989 (see app. III). The growth in this category of income from 1989 through 2004 is consistent with observations made by others that the 1997 check-the-box rules have significantly affected the tax planning of U.S. MNCs. Data are not yet available to show whether this accumulation of equity income in certain countries was largely a temporary phenomenon, leading up to repatriations made from 2004 through 2006. The United Kingdom and Canada dominate all of the measures of activity, except for income. Germany also has at least a 5 percent share of all of the nonincome measures. Mexico, China, and Brazil have employment shares that are disproportionate to their shares of the other activity measures. This fact is not surprising, given that these are the three countries with the lowest wage rates out of the 17 (which is apparent from the relative sizes of their compensation and employment shares). Compared to 1989, the share of U.S. business activity, particularly physical capital, that is located in Canada has declined noticeably. This is also true, to a lesser extent, for Germany. Research and development is one more significant measure of business activity (not included in figs. 5 through 7 because it is more narrowly focused than the other measures). The United Kingdom, which accounted for 20.7 percent of all research and development performed by foreign affiliates of U.S. MNCs, was the primary location for this activity in 2004, followed by Germany (16.2 percent share) and Canada (10.6 percent share). Japan’s share of this research and development activity fell from 12.6 percent in 1989 to 6.3 percent by 2004. Among the countries whose shares increased the most over that period were Sweden (from 0.4 percent to 5.6 percent) and Israel (from 0.4 percent to 3.4 percent). We provided a draft of this report in July 2008 to the Secretary of the Treasury for review and comments. 
Officials from the Department of the Treasury’s Office of Tax Policy provided technical comments, which we incorporated as appropriate. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. This report is available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff have any questions on this report, please call me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Tax year 2004 was the first year for which corporations had to file the new Schedule M-3. Consequently, there was likely to be a higher rate of taxpayer error in filling out the form than there is for most forms that have been in use for many years. We ran a number of internal consistency checks and, to the extent possible, corrected common errors, guided by the findings of previous researchers. We dropped all cases that had uncorrectable errors in the data elements that were key to our analysis. These exclusions reduced our population of corporations that filed nonblank Schedule M-3s from 34,154 to 28,820. This final population of corporations accounted for 95 percent of the book income of the population of all Schedule M-3 filers. To calculate domestic income we began with the book value of income of the tax includible group and subtracted foreign-source income that is includible. 
Specifically, our Schedule M-3 domestic income = book income (Schedule M-3, Part I, line 11) – foreign equity method income (Schedule M-3, Part II, line 1) – gross foreign dividends (Schedule M-3, Part II, line 2) – gross foreign distributions (Schedule M-3, Part II, line 5) – domestic equity method income (Schedule M-3, Part II, line 6) – minority interest reduction (Schedule M-3, Part II, line 8) – foreign partnership income (Schedule M-3, Part II, line 10). This measure is designed to be closer to a tax consolidated group measure by removing the less than 80 percent owned domestic subsidiaries. It includes the total income of domestic tax consolidated subsidiaries, excludes the income of nonincludible domestic subsidiaries (ownership less than 80 percent), but includes the dividends of nonincludible domestic subsidiaries and partnership income. There are some limitations to this measure of income. While foreign income is excluded through the conversion from the financial consolidated group to the tax consolidated group and the removal of foreign dividends, adjustments made in line 8 in Part I of the Schedule M-3 could result in the improper inclusion of foreign royalties and other foreign payments. In addition, the 2004 Schedule M-3 did not require taxpayers to fill in all of the columns in Part II. Line 10 (foreign partnership income) and lines 2 and 5 (foreign dividends and distributions) are reported both under financial and tax rules but are not listed separately on the Form 1120. We perform a sensitivity analysis by excluding observations that did not complete all columns. (We do the same for our measures of foreign-source income, described below.) The data from the Schedule M-3 do not allow us to derive a comprehensive measure of foreign-source income without double counting certain types of income. For this reason, we provide estimates based on three alternative measures of foreign income. 
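The domestic income formula above can be expressed as a short calculation. In this sketch the dictionary keys are illustrative labels for the Schedule M-3 lines cited in the text, and the dollar amounts are invented:

```python
def m3_domestic_income(m3: dict) -> float:
    """Book income of the tax includible group less includible foreign-source
    items, per the Schedule M-3 formula described in the text."""
    return (
        m3["book_income"]                    # Part I, line 11
        - m3["foreign_equity_method"]        # Part II, line 1
        - m3["gross_foreign_dividends"]      # Part II, line 2
        - m3["gross_foreign_distributions"]  # Part II, line 5
        - m3["domestic_equity_method"]       # Part II, line 6
        - m3["minority_interest_reduction"]  # Part II, line 8
        - m3["foreign_partnership_income"]   # Part II, line 10
    )

# Invented amounts for one hypothetical filer (in millions of dollars):
example = {
    "book_income": 1_000.0,
    "foreign_equity_method": 120.0,
    "gross_foreign_dividends": 80.0,
    "gross_foreign_distributions": 20.0,
    "domestic_equity_method": 50.0,
    "minority_interest_reduction": 10.0,
    "foreign_partnership_income": 5.0,
}
print(m3_domestic_income(example))  # prints 715.0
```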
Our estimates based on one of these measures likely overstate the effective tax rate, while estimates based on an alternative measure likely understate the rates. Consequently, our range of estimates represents an upper and lower bound for the true rate. Our broadest measure of foreign income includes the book income of majority-owned foreign subsidiaries (reported on lines 5a and 5b in Part I of the Schedule M-3), plus equity-method income from foreign subsidiaries (reported on line 1 in Part II of the Schedule M-3), plus dividends and distributions from foreign subsidiaries (reported on lines 2 and 5 in Part II of the Schedule M-3). The problem with this broad measure is that it likely double counts some income in the aggregate. Lines 5a/b list 100 percent of the income of majority-owned foreign subsidiaries, even if the taxpayer filing the Schedule M-3 owns less than 100 percent of the subsidiary. Thus, lines 5a/b overstate the consolidated group’s share of the income or loss of majority-owned foreign subsidiaries. This reporting limitation, by itself, would not be a problem for our aggregate measure of the foreign income of Schedule M-3 filers, except to the extent that the minority owners of the less-than-100-percent-owned subsidiaries are not Schedule M-3 filers themselves. However, a larger potential overstatement problem arises when we include equity-method income and dividends and distributions in our measure. For example, if a foreign subsidiary is owned 75 percent by one U.S. parent and 25 percent by a second U.S. parent, line 5a/b would provide 100 percent of the income of the foreign subsidiary and line 1 in Part II of the Schedule M-3, which provides the equity-method income of foreign subsidiaries, would add another 25 percent of the income of that subsidiary. 
Similarly, including the dividends and distributions in lines 2 and 5 in Part II of the Schedule M-3 would double count that income in cases where it is already counted on another Schedule M-3 filer’s line 5a/b or line 1 in Part II of the Schedule M-3. Our second measure of foreign income starts with our broadest measure and then excludes equity-method income. Our third measure excludes both equity-method income and dividends and distributions. In contrast to our broadest measure, our third measure is likely to understate foreign-source income in cases where Schedule M-3 filers share ownership of their less-than-100-percent-owned foreign subsidiaries with majority shareholders other than Schedule M-3 filers. For example, if U.S. Parent A owns 70 percent of foreign subsidiary 1, U.S. Parent B owns 30 percent of foreign subsidiary 1 and 25 percent of foreign subsidiary 2, and a foreign parent owns 75 percent of foreign subsidiary 2, line 5a/b would provide 100 percent of the income of foreign subsidiary 1, but none of the income of foreign subsidiary 2. In addition, excluding dividends and distributions would exclude any income from less-than-20-percent-owned foreign subsidiaries if those subsidiaries are majority owned by a shareholder other than a Schedule M-3 filer. We compute our various effective rate estimates only for those taxpayers that had positive domestic income, foreign income, or both. Table 3 shows how many taxpayers had positive, negative, or zero values for domestic and foreign income and the aggregate value of that income for our broadest and narrowest measures of income. We computed effective tax rates before credits, after credits, and after credits and other taxes. The tax code does not specify that tax credits (other than the foreign tax credit) be allocated in any particular manner between U.S. tax on domestic income and U.S. tax on foreign-source income. We simply assume that these credits are allocated against U.S. 
taxes on domestic income and U.S. residual taxes on foreign-source income in proportion to each of those taxes’ share of total U.S. tax. To calculate U.S. taxes on domestic income, we began with regular tax liability and removed the foreign tax credit limit because the latter represents the initial U.S. tax due on foreign-source income before any credits are given for foreign taxes paid. Specifically, U.S. tax on domestic income before credits is calculated as regular tax liability (Form 1120, Schedule J, line 5) – the sum over each income type of foreign tax credit limitation (Form 1118, Schedule B, line 10). Taxpayers are required to file a separate Form 1118 for each category of income, so we added the separate limits from these forms together to obtain the total foreign tax credit limit on repatriated foreign income. This calculation provides the U.S. tax on domestic income regardless of whether the corporation had excess credits because the credit limit is essentially the initial U.S. tax (before foreign tax credit) on foreign-source income. If the corporation has an excess of foreign tax credits, then there is no residual U.S. tax on repatriated foreign income and the U.S. tax on domestic income is found by removing the initial U.S. tax on repatriated foreign income (the credit limit) from the U.S. tax on worldwide income (Form 1120 tax liability without foreign tax credit). If the corporation is below the credit limit, then there is a residual U.S. tax on repatriated foreign income, which would be included separately in the U.S. taxes on foreign-source income measure. In that case the U.S. tax on domestic income is found by removing the initial U.S. tax on repatriated foreign income (the credit limit) from the U.S. tax on worldwide income (Form 1120 tax liability without foreign tax credit). In both cases, the foreign tax credit limit represents the potential tax due on foreign-source income, and by removing it the remaining tax is on domestic income. U.S. 
residual tax on foreign-source income was calculated as the difference between the foreign tax credit limit and the foreign tax credit (with any negative values treated as zeros). Specifically, for each type of income, it equals the greater of the foreign tax credit limit (line 10 on Form 1118, Schedule B) minus the foreign tax credit (line 11 on Form 1118, Schedule B) or zero; these amounts are then summed over the income types. The U.S. residual tax on foreign-source income is zero if the corporation has paid substantial foreign taxes such that its foreign tax credit limit is binding. For example, if a corporation paid taxes in a single country with a tax rate of 40 percent, the United States would not collect any residual tax on the repatriated income because the taxes paid abroad would be greater than the taxes due in the United States at the corporate rate of 35 percent. The residual tax is positive as long as the corporation’s creditable foreign taxes paid are below the foreign tax credit limit. For example, if a corporation paid taxes abroad at a rate of 10 percent, the United States would tax that income at 35 percent and, after crediting the tax paid abroad, collect the remaining 25 percentage points as residual tax. To compute estimates of the U.S. effective tax rates on domestic and foreign-source income after credits, we allocated credits according to the income sources’ shares of total tax. Specifically, U.S. tax on domestic income after credits equals the total U.S. domestic tax before credits minus the product of total other credits and the domestic tax’s share of the total U.S. tax liability (on both domestic and foreign-source income) before the application of credits. Total other credits equal the total credits (line 7 on the Form 1120, Schedule J) minus the foreign tax credit (line 6a on the Form 1120, Schedule J). Similarly, we also estimated effective tax rates after credits and other taxes by the same formula, substituting total credits and other taxes for total credits. 
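The before-credit split of U.S. tax between domestic and foreign-source income, and the proportional allocation of other credits, can be sketched as follows. The function name and its inputs are illustrative (a single income category, invented dollar amounts), not drawn from any actual return:

```python
def split_us_tax(regular_tax, ftc_limits, ftc_claimed, other_credits):
    """Split U.S. tax between domestic and residual foreign components and
    allocate other credits in proportion to each component's share.
    ftc_limits / ftc_claimed: per-income-category amounts
    (Form 1118, Schedule B, lines 10 and 11)."""
    total_limit = sum(ftc_limits)
    # Residual U.S. tax on foreign income: max(limit - credit, 0) per category.
    residual_foreign = sum(max(l - c, 0.0) for l, c in zip(ftc_limits, ftc_claimed))
    # Domestic tax before credits: regular tax minus the total credit limit.
    domestic = regular_tax - total_limit
    total = domestic + residual_foreign
    # Allocate other credits in proportion to each tax's share of the total.
    domestic_after = domestic - other_credits * (domestic / total)
    foreign_after = residual_foreign - other_credits * (residual_foreign / total)
    return domestic_after, foreign_after

# Corporation paying 10 percent abroad on one category of foreign income,
# with a 35 percent foreign tax credit limit and 17.0 of other credits:
dom, frn = split_us_tax(regular_tax=350.0, ftc_limits=[35.0],
                        ftc_claimed=[10.0], other_credits=17.0)
print(f"domestic after credits: {dom:.2f}, residual foreign after credits: {frn:.2f}")
```

In this example the residual foreign tax before credits is the 25.0 gap between the 35.0 limit and the 10.0 credit, and the 17.0 of other credits is then divided between the two components in proportion to their shares of the 340.0 total.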
Total credits and other taxes equal regular tax minus final tax liability minus the foreign tax credit (line 5 – line 11 – line 6a on Schedule J of the Form 1120). The credits and other taxes are applied to the final taxes, which include both the domestic tax on domestic income and the residual domestic tax on repatriated foreign income. We followed the methodology used by Altshuler, Grubert, and Newlon (1998) and Altshuler and Grubert (2006) to estimate average effective tax rates using data from the Internal Revenue Service’s Statistics of Income Division’s Form 5471 study for 2004. SOI’s 2004 CFC study changed from a defined population study (the 7,500 largest CFCs of the largest parent corporations) to a sample of CFCs that included all Forms 5471 filed by all corporations in the SOI corporate study. We restricted our sample to CFCs associated with U.S. corporations sampled at 100 percent. The effective rate was computed as the income taxes paid (line 8 on Form 5471, Schedule E) divided by pretax earnings and profits. Pretax earnings and profits were calculated as final earnings and profits on line 5d of Form 5471, Schedule H plus the total income taxes paid (line 8 on Form 5471, Schedule E). We restricted our analysis to CFCs with positive pretax earnings and profits and nonnegative foreign taxes paid. We computed the effective tax rates by primary place of business, as reported by the CFCs, by aggregating the taxes paid and positive earnings for all CFCs reporting the same principal place of business and then taking the ratio. Bureau of Economic Analysis (BEA) data provide a wide array of data items on multinational corporations (MNC) cross-classified by country and industry. The financial and operating data are collected by BEA in two types of surveys—benchmark and annual, authorized by a law known as the International Investment and Trade in Services Survey Act. 
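The CFC effective-rate computation described above (taxes paid over pretax earnings and profits, aggregated by principal place of business before taking the ratio) might be sketched as follows. The records and country figures are made-up examples, not SOI data.

```python
def effective_rates_by_country(records):
    """Aggregate taxes paid and pretax earnings and profits by principal
    place of business, then take the ratio, per the Altshuler-Grubert
    methodology described above. Pretax E&P = final E&P plus taxes paid;
    CFCs with nonpositive pretax E&P or negative taxes are excluded."""
    taxes, earnings = {}, {}
    for country, taxes_paid, final_ep in records:
        pretax_ep = final_ep + taxes_paid
        if pretax_ep > 0 and taxes_paid >= 0:
            taxes[country] = taxes.get(country, 0.0) + taxes_paid
            earnings[country] = earnings.get(country, 0.0) + pretax_ep
    return {c: taxes[c] / earnings[c] for c in taxes}

# Hypothetical CFC records: (country, taxes paid, final earnings and profits).
cfcs = [("Ireland", 10.0, 90.0), ("Ireland", 5.0, 55.0),
        ("Germany", 30.0, 60.0)]
rates = effective_rates_by_country(cfcs)
print(round(rates["Ireland"], 4))  # 0.0938  (15 / 160)
print(round(rates["Germany"], 4))  # 0.3333  (30 / 90)
```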
On both surveys, the data are collected at the enterprise, or company, level and are classified according to the primary industry of the enterprise. The annual survey estimates are a collection of sample data reported to BEA on U.S. direct investment abroad in the annual survey and the estimates of affiliates that were not in the sample. The sample is a cutoff sample, with reporting thresholds significantly higher than those on the benchmark surveys. To obtain universe estimates of the overall operations of parents and affiliates for nonbenchmark years, data reported in the benchmark surveys for nonsample companies are extrapolated forward, based on the movement of the sample data in the annual surveys. We relied on the BEA benchmark surveys, which are conducted every 5 fiscal years because the universe in the benchmark surveys did not pose the sample limitations of the annual surveys. Selected tables from the final 2004 benchmark survey results, including the tables needed for the charts in this report, are available on the BEA Web site under Operations of Multinational Companies, U.S. Direct Investment Abroad, Financial and Operating Data, Selected Tables, IID Product Guide, Revised 2004 Estimates. Final benchmark survey data results are available for all previous years. The benchmark surveys covered every U.S. person who had a foreign affiliate—that is, who had direct or indirect ownership or control of 10 percent or more of the voting securities of an incorporated foreign business enterprise or an equivalent interest in an unincorporated foreign business enterprise—any time during its reporting fiscal year. A completed benchmark survey form was required for affiliates that had total assets, sales, or net income (or losses) greater than a minimum set value per reporting year, so the trend data we present refer to information on U.S. businesses that met the reporting requirement. 
Data on all of the benchmark surveys were required to be reported as they would have been for stockholders’ reports rather than for tax or other purposes. Thus, U.S. generally accepted accounting principles were followed unless otherwise indicated by the survey instructions. The 1999 benchmark survey marks the first year that annual and benchmark survey data on U.S. direct investment abroad have classified industries using BEA’s International Survey Industry (ISI) classification system that is based on the 1997 North American Industry Classification System (NAICS). Therefore, trend analysis by industry is not comparable before and after this change. Our ability to provide details of worldwide activity by country and industry was limited by BEA’s suppression of aggregate data when they represented a small number of corporations that accounted for a relatively large portion of the aggregate total. Under the International Investment and Trade in Services Survey Act, the direct investment data collected by BEA are confidential. We contacted BEA to ensure that the data collection encompassed the universe of worldwide activity of U.S. companies and their foreign affiliates. BEA’s methodology for benchmark survey results notes that because of limited resources, BEA’s efforts to ensure compliance with reporting requirements focused mainly on large parents and affiliates. Some parents of small affiliates that were not aware of the reporting requirements and were not on BEA’s mailing list may not have filed reports. BEA believes that the omission of these parents and their affiliates probably has not significantly affected the aggregate values of the various data items collected but would have caused an unknown, but possibly significant, understatement of the number of parents or affiliates. 
We identified four studies that used corporations’ financial statements to compare the average effective tax rates of corporations across multiple foreign countries. All of these studies produced estimates for multiyear periods during the 1990s. There is considerable overlap in the methodologies across the four studies; however, there are some variations in the measures of effective tax rate used, even within some of the studies. Buijink, Janssen, and Schols (2000) and Gorter and de Mooij (2001) both use consolidated financial statements from the Worldscope financial statement database to estimate effective tax rates for countries in the European Union. Buijink, et al. use two different measures: the first is a simple ratio of income taxes paid over pretax book income (before equity income, minority interest income, and extraordinary income); in their second measure they adjust income taxes for the net change in deferred taxes. Gorter and de Mooij’s effective tax rate measure is calculated as the ratio of corporate income taxes paid over pretax corporate income. The results from these two studies are summarized in figure 9. Collins and Shackelford (2003) and Chennells and Griffith (1997) both use Standard and Poor’s Compustat Global database to estimate effective tax rates for small selections of major industrial nations (see figure 10). The Compustat Global database is limited to information on foreign firms that people have requested and, therefore, is likely not a representative sample of companies but is weighted toward larger and more recognized firms. While Collins and Shackelford provide estimates of effective tax rates separately for multinational firms, the average effective tax rates listed in figure 10 are for all companies. 
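The financial-statement measures used in these studies reduce to simple ratios. A minimal Python sketch follows; the function names and example figures are ours, not the studies’.

```python
def etr_simple(income_taxes, pretax_income):
    """First Buijink et al. measure: income taxes paid over pretax book
    income (before equity, minority interest, and extraordinary income)."""
    return income_taxes / pretax_income

def etr_deferred_adjusted(income_taxes, deferred_tax_change, pretax_income):
    """Second measure: the tax expense is adjusted for the net change in
    deferred taxes before dividing by pretax book income."""
    return (income_taxes - deferred_tax_change) / pretax_income

# A firm reporting $30 of tax expense, a $5 increase in deferred taxes,
# and $100 of pretax book income:
print(etr_simple(30.0, 100.0))                  # 0.3
print(etr_deferred_adjusted(30.0, 5.0, 100.0))  # 0.25
```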
They use an effective tax rate measure similar to the second measure used by Buijink, et al.; they also compute an alternative estimate that uses a less comprehensive measure of income, but one that has greater comparability across countries. The authors address the outlier issue by excluding cases with negative tax rates or rates over 70 percent. Chennells and Griffith’s effective tax rate measure is similar to the first measures of Collins and Shackelford and Buijink, et al., except that they do not make the adjustment for deferred taxes. In addition to the contact named above, James Wozny, Assistant Director; Susan Baker; Sylvia Bascope; Kathleen Easterbrook; Jennifer Gravelle; Ed Nannenhorn; and Cheryl Peterson made key contributions to this report.
U.S. and foreign tax regimes influence decisions of U.S. multinational corporations (MNC) regarding how much to invest and how many workers to employ in particular activities and in particular locations. Tax rules also influence where corporations report earning income for tax purposes. The average effective tax rate, which equals the amount of income taxes a business pays divided by its pretax net income (measured according to accounting rules, not tax rules), is a useful measure of actual tax burdens. In response to a request from the U.S. Senate Committee on Finance, this report provides information on the average effective tax rates that U.S.-based businesses pay on their domestic and foreign-source income and trends in the location of worldwide activity of U.S.-based businesses. GAO analyzed Internal Revenue Service (IRS) data on corporate taxpayers, including new data for 2004, and Bureau of Economic Analysis data on the domestic and foreign operations of U.S. MNCs. Data limitations are noted where relevant. GAO is not making any recommendations in this report. The average U.S. effective tax rate on the domestic income of large corporations with positive domestic income in 2004 was an estimated 25.2 percent. There was considerable variation in tax rates across these taxpayers. The average U.S. effective tax rate on the foreign-source income of these large corporations was around 4 percent, reflecting the effects of both the foreign tax credit and tax deferral on this type of income. Effective tax rates on the foreign operations of U.S. MNCs vary considerably by country. According to estimates for 2004, Bermuda, Ireland, Singapore, Switzerland, the United Kingdom (U.K.) Caribbean Islands, and China had relatively low rates among countries that hosted significant shares of U.S. business activity, while Italy, Japan, Germany, Brazil, and Mexico had relatively high rates. U.S. 
business activity (measured by sales, value added, employment, compensation, physical assets, and net income) increased in absolute terms both domestically and abroad from 1989 through 2004, but the relative share of activity that was based in foreign affiliates increased. Nevertheless, as of 2004, over 60 percent of the activity (by all six measures) of U.S. MNCs remained located in the United States. The U.K., Canada, and Germany are the leading foreign locations of U.S. businesses by all measures except income. Reporting of the geographic sources of income is susceptible to manipulation for tax planning purposes and appears to be influenced by differences in tax rates across countries. Most of the countries studied with relatively low effective tax rates have income shares significantly larger than their shares of the business measures least likely to be affected by income shifting practices: physical assets, compensation, and employment. The opposite relationship holds for most of the high tax countries studied.
Over time, the military services report they have increasingly lost training range capabilities because of encroachment. According to DOD officials, the concerns about encroachment reflect the cumulative result of a slow but steady increase in problems affecting the use of their training ranges. Historically, specific encroachment problems have been addressed at individual ranges, most often on an ad hoc basis. DOD officials have reported increased limits on and problems with access to and the use of ranges. They believe that the gradual accumulation of these limitations will increasingly threaten training readiness in the future. Yet, despite the reported loss of some capabilities, for the most part, the services do not report the extent to which encroachment has significantly affected training readiness. Section 366 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 required that the Secretary of Defense develop a comprehensive plan for using existing authorities available to the Secretaries of Defense and the military departments to address training constraints caused by limitations on the use of military lands, marine areas, and airspace that are available in the United States and overseas for training. Section 366 also required that the Secretary of Defense develop and maintain an inventory that identifies all available operational training ranges, all training range capacities and capabilities, and any training constraints at each training range. In addition, the Secretary must complete an assessment of current and future training range requirements and an evaluation of the adequacy of current DOD resources to meet current and future training requirements. 
Section 366 further required that the Secretary of Defense submit to the Congress a report containing the plan, the results of the assessment and evaluation of current and future training requirements, and any recommendations that the Secretary may have for legislative or regulatory changes to address training constraints at the same time the President submits the budget for fiscal year 2004 and provide status reports on implementation annually between fiscal years 2005 and 2008. While the initial report was due when the President submitted the fiscal year 2004 budget to the Congress, the department did not meet this initial reporting requirement. In an effort to obtain assistance from the military services in preparing this report, in a January 2003 memorandum to the Secretaries of the Army, the Navy, and the Air Force, the Under Secretary of Defense for Personnel and Readiness directed that each of the military services develop a single standalone report that could be consolidated to form OSD’s overall report. Each service was expected to provide an assessment of current and future training requirements with future projections to 2024, a report on the implementation of a range inventory system, an evaluation of the adequacy of current service resources to meet both current and future training requirements, and a comprehensive plan to address constraints resulting in adverse training impacts. The memorandum stated that once the services’ inputs were received, they would be incorporated into a single report to address the section 366 reporting requirement. As discussed more fully later, the services’ inputs were incorporated to varying degrees in OSD’s final training range report. 
In completing our analysis for this and other engagements related to training ranges, we found that the department and the military services individually have a number of initiatives underway to better address encroachment or other factors and ensure sustainability of military training ranges for future use. In August 2001, the department issued its draft Sustainable Range Action Plans, which contained an action plan for each of the eight encroachment issues. Each action plan provided an overview and analysis of its respective encroachment issue along with strategies and actions for consideration by DOD decision makers. The department considered these action plans to be working documents supporting the overall sustainable range initiative. In June 2003, the Under Secretary of Defense for Personnel and Readiness issued a memorandum to the secretaries of the military departments providing guidance for sustainable range planning and programming efforts for fiscal years 2006-2011. The services, recognizing the importance of ranges, have begun to implement various internal programs aimed at ensuring long-term range sustainment and the ability to meet both current and future requirements. In addition, OSD and the services have various systems to assess the condition of their ranges and are attempting to develop methods to reflect the readiness impacts caused by encroachment and other factors. Our recent work and the work of the DOD Inspector General have identified a variety of factors that have adversely affected training ranges in recent years including a lack of adequate funding, maintenance, and modernization for training ranges. The Army Deputy Chief of Staff for Training is responsible for establishing range priorities and requirements and managing the Range and Training Land Program, which includes range modernization and maintenance, and land management through the Integrated Training Area Management Program. 
This office is creating and implementing the Sustainable Range Program to manage its ranges in a more comprehensive manner; meet the challenges brought on by encroachment; and maximize the capability, availability, and accessibility of its ranges. According to an official of the Office of the Army Deputy Chief of Staff for Training, the Sustainable Range Program will evolve into a new Army training range regulation that will replace the current Army Regulation 210-21, Range and Training Land Program, and Army Regulation 350-4, Integrated Training Area Management. On December 1, 2003, the Navy centralized its range management functions, to include training and testing ranges, target development and procurement, and test and evaluation facilities, into the Navy Range Office, Navy Ranges and Fleet Training Branch. The Navy Range Office integration will streamline processes, provide a single voice for range policy and management oversight, and provide a single resource sponsor. Recognizing the importance of Navy training ranges and to meet congressional reporting requirements, the Navy is developing a Navy Range Strategic Plan. The Navy plans to have this completed by June 2004. In addition, the Navy is working with the Center for Naval Analysis to develop a transferable analytical tool for systematic and rigorous range assessment. This tool is expected to integrate existing initiatives, such as the range complex management plans, the Navy mission essential tasks lists, and an encroachment log, into a methodology to identify, assess, and prioritize physical range resource deficiencies—to include those caused by encroachment issues—across ranges. An official of the Navy Range Office stated that the Navy plans to pilot the tool at the Southern California Complex by November 2004. In October 2001, the Marine Corps established an executive agent for range and training area management to implement its vision for mission- capable ranges. 
The Range and Training Area Management Division is located within the Training and Education Command. These offices are charged with developing systems, operational doctrine, and training requirements for Marine Corps forces. In addition to its own ranges, the Marine Corps engages in extensive cross-service utilization by depending on extensive and extended access to non-Marine Corps training ranges. The Air Force’s Director of Operations and Training, Ranges and Airspace Division acts as the executive agent for range management for the Air Force. The associate director for ranges and airspace stated that Air Force range issues have become much more sensitive due to a number of recent events, including the Navy’s departure from Vieques, Puerto Rico; controversy with the Mountain Home Range, Idaho; the loss of naval ranges in Hawaii; and the push to redesign the national air space. As a result, Air Force leadership has become more aware of range needs. The Air Force has an integrated approach to range management, to include range planning, operations, construction, and maintenance. Air Force Range Planning and Operations Instruction is the primary document governing Air Force planning as it relates to its ranges. In addition, the Air Force, using RAND, has conducted two studies addressing its training requirements and training range capacities, capabilities, and constraints. In general, the studies found that the Air Force’s training ranges did not always meet the services’ training requirements. For example, one study found that the distance between Air Force training ranges and bases exceeded the established flying limitation for 19 percent of the total air-to-ground training requirements for fighter jets. In 2002, the department prepared and submitted to the Congress a package of legislative proposals to modify or clarify existing environmental legislation to address encroachment issues. 
The proposals, known as the Readiness and Range Preservation Initiative, were tailored to protect military readiness activities, not the entire scope of DOD activities. The proposals sought, among other things, to clarify provisions of the Endangered Species Act; Marine Mammal Protection Act; Clean Air Act; Solid Waste Disposal Act; Resource Conservation and Recovery Act; Comprehensive Environmental Response, Compensation, and Liability Act; and the Migratory Bird Treaty Act. The Bob Stump National Defense Authorization Act for Fiscal Year 2003 enacted three provisions, including two that allow DOD to cooperate more effectively with third parties on land transfers for conservation purposes, and a third that provides a temporary exemption from the Migratory Bird Treaty Act for the unintentional taking of migratory birds during military readiness activities. In March 2003, the department submitted five provisions to the Congress; the National Defense Authorization Act for Fiscal Year 2004 enacted two provisions, including a clarification of “harassment” under the Marine Mammal Protection Act and a provision allowing approved Integrated Natural Resource Management Plans to substitute for critical habitat designation under the Endangered Species Act. DOD submitted proposed legislation to the Congress on April 6, 2004, in a continuing effort to clarify provisions of the Clean Air Act; Comprehensive Environmental Response, Compensation, and Liability Act; and the Resource Conservation and Recovery Act. In 2002, we issued two reports on the effects of encroachment on military training and readiness. In April 2002, we reported that troops stationed outside of the continental United States face a variety of training constraints that have increased over the last decade and are likely to increase further. In June 2002, we reported on the impact of encroachment on military training ranges inside the United States and had similar findings to our earlier report. 
We reported that many encroachment issues resulted from or were exacerbated by population growth and urbanization. DOD was particularly affected because urban growth near 80 percent of its installations exceeded the national average. In both reports, we stated that impacts on readiness were not well documented. In our June 2002 report, we recommended that (1) the services develop and maintain inventories of their training ranges, capacities, and capabilities, and fully quantify their training requirements considering complementary approaches to training; (2) OSD create a DOD database that identifies all ranges available to the department and what they offer, regardless of service ownership, so that commanders can schedule the best available resources to provide required training; (3) OSD finalize a comprehensive plan for administrative actions that includes goals, timelines, projected costs, and a clear assignment of responsibilities for managing and coordinating the department’s efforts to address encroachment issues on military training ranges; and (4) OSD develop a reporting system for range sustainability issues that will allow for the elevation of critical training problems and progress in addressing them to the Senior Readiness Oversight Council for inclusion in Quarterly Readiness Reports to the Congress as appropriate. In addition, we testified twice on these issues—in May 2002 and April 2003. In September 2003, we also reported that through increased cooperation DOD and other federal land managers could share the responsibility for managing endangered species. In March 2004, we issued a guide to help managers assess how agencies plan, design, implement, and evaluate effective training and development programs that contribute to improved organizational performance and enhanced employee skills and competencies. 
The framework outlined in this guide summarizes attributes of effective training and development programs and presents related questions concerning the components of the training and development process. Over time, assessments of training and development programs using this framework can further identify and highlight emerging and best practices, provide opportunities to enhance coordination and increase efficiency, and help develop more credible information on the level of investment and the results achieved across the federal government. OSD’s training range inventory does not yet contain sufficient information to use as a baseline for developing a comprehensive training range plan. As a result, OSD’s report does not include a comprehensive plan to address training constraints caused by limitations on the use of military lands, marine areas, and airspace in the United States and overseas, as required by section 366. Without a comprehensive plan that identifies quantifiable goals or milestones for tracking planned actions and measuring progress, or projected funding requirements, it will be difficult for OSD to comply with the legislative requirement to report annually on its progress in implementing the plan. OSD’s training range inventory, which is a compilation of the individual services’ inventories, does not contain sufficient information to provide a baseline for developing a comprehensive training range sustainment plan. Section 366 requires the Secretary of Defense to develop and maintain an inventory that identifies all available operational training ranges, all training range capacities and capabilities, and any training constraints at each training range. Although OSD’s inventory lists the services’ training ranges as of November 2003 and identifies capabilities, the inventory does not identify specific range capacities or existing training constraints caused by encroachment or other factors, such as a lack of adequate maintenance or modernization. 
Nevertheless, to date, this is the best attempt we have identified by the services to inventory their training ranges. In doing so, OSD and the services provided more descriptive examples of constraints than ever before but did not fully identify the actual impacts on training. Without such information, it is difficult to develop a meaningful plan to address training constraints caused by encroachment or other factors. While OSD’s inventory is a consolidated list of ranges and capabilities as of November 2003, OSD and the services’ inventories are not integrated and accessibility is limited. Therefore, it is not a tool that commanders could use to identify range availability, regardless of service ownership, and schedule the best available resources to provide required training. In addition, OSD has no method to continuously maintain this inventory without additional requests for data, even though section 366 requires the Secretary of Defense to maintain and submit an updated inventory annually to the Congress. In 2001, RAND concluded that centralized repositories of information on Air Force ranges and airspace are limited, with little provision for updating the data. RAND noted that a comprehensive database is a powerful tool for range and airspace managers that must be continuously maintained and updated. In addition, a knowledgeable official of the Office of the Under Secretary of Defense for Personnel and Readiness stated that having a common management system to share current range information is needed to identify range availability, capabilities, capacities, and cumulative effects of encroachment on training readiness. This official also noted that it would take several years to develop such a system. However, OSD did not address this system in its report. 
Without an inventory that fully identifies available training resources, specific capacities and capabilities, and existing training constraints, it is difficult to frame a comprehensive training range plan to address constraints. As a result, OSD’s report does not include a comprehensive plan to address training constraints caused by limitations on the use of military lands, marine areas, and airspace that are available in the United States and overseas for training—as required by section 366. Such a plan was to include proposals to enhance training range capabilities and address shortfalls; goals and milestones for tracking planned actions and measuring progress; projected funding requirements for implementing planned actions; and designation of OSD and service offices responsible for overseeing implementation of the plan. However, OSD’s report does not contain quantifiable goals or milestones for tracking planned actions and measuring progress, or projected funding requirements, which are critical elements of a comprehensive plan. Rather than a comprehensive plan, OSD and service officials characterized the report as a status report of the services’ efforts to address encroachment that also includes service proposals to enhance training range capabilities, as previously discussed in the background, and designates OSD and service offices responsible for overseeing implementation of a comprehensive training range plan. According to a knowledgeable official of the Office of the Under Secretary of Defense for Personnel and Readiness, by providing the Congress a report on the current status of the individual services’ efforts to put management systems in place to address encroachment issues and ensure range sustainability, OSD believed it was meeting the mandated requirements. 
A professional journal article on sustaining DOD ranges, published by knowledgeable defense officials in 2000, notes that there should be some form of a national range comprehensive plan that provides the current situation, establishes a vision with goals and objectives for the future, and defines the strategies to achieve them. The article states that only with such a comprehensive plan can sustainable ranges and synergy be achieved. In addition, the article notes that while this plan should be done at the department-level, “DOD’s bias will be to have the services do individual plans.” In fact, OSD and service officials told us during our review that OSD should not be responsible for framing a comprehensive training range plan because the services are responsible for training issues. Despite that view, OSD has recently issued a comprehensive strategic plan and associated implementation plan—which includes all of the above elements—for more broadly transforming DOD’s training. OSD’s Implementation of the Department of Defense Training Range Comprehensive Plan report, which is a consolidation of information provided by the services, does not fully meet other requirements mandated by section 366. Specifically, it does not (1) fully assess current and future training range requirements; (2) fully evaluate the adequacy of current DOD resources, including virtual and constructive assets, to meet current and future training range requirements; (3) identify recommendations for legislative or regulatory changes to address training constraints; or (4) contain plans to improve the readiness reporting system. OSD’s report does not fully assess current and future training range requirements. Instead, the report describes the services’ processes to develop, document, and execute current training and training range requirements. The services’ inputs, as required by OSD’s guidance, vary in their emphasis on individual areas of requested information. 
Only the Air Force’s submission to OSD’s report identifies specific annual training requirements by type of aircraft, mission category, type of training activity, and unit. By identifying its training requirements, the Air Force is in a better position to evaluate the adequacy of resources to meet current and future training requirements. Without a complete assessment, OSD and the services cannot determine whether available training resources are able to meet current and future requirements. OSD’s report does not fully evaluate the adequacy of current DOD resources to meet current and future training range requirements in the United States and overseas. The report does not compare training range requirements to existing resources—a primary method to evaluate the adequacy of current resources—in the United States and does not evaluate overseas training resources. Instead, OSD’s report states that generally the services’ ranges allow military forces to accomplish most of the current training missions. However, this conflicts with later statements in the report noting that encroachment limits the services’ ability to meet current core and joint training requirements. For example, OSD’s report discusses an evaluation of the Air Force’s ranges in the United States, and identifies shortfalls in the Air Force’s range resources and constraints that affect operations. The evaluation shows that the distance between Air Force training ranges and bases exceeded the established flying limitation for 19 percent of the total air-to-ground training requirements for fighter jets. The report also notes that the Army has shortages of modernized or automated ranges and has a significant overage of older ranges that do not fully meet current training requirements, but the report does not identify where these shortages occur or explain how this determination was made. 
In addition, the report states that 28 of 35 Army range categories have some or major deficiencies that do not meet Army standards, or impair or significantly impair mission performance. The report further notes the condition of Marine Corps ranges and provides a general rating of the ranges by installation but does not identify specific shortfalls in resources or evaluate the adequacy of current resources to meet future training range requirements. OSD’s report also notes that simulation plays a role in military training, but does not address the relative impact or adequacy of simulated training to meet current and future training range requirements, or to what extent simulation may help minimize constraints affecting training ranges. While OSD’s report does not include any recommendations for legislative or regulatory changes to address training constraints, DOD submitted proposed legislation to the Congress on April 6, 2004, in an effort to clarify the intent of the Clean Air Act; Comprehensive Environmental Response, Compensation, and Liability Act; and the Resource Conservation and Recovery Act. Without these clarifications, according to DOD officials, the department would continue to potentially face lawsuits that could force the services to curtail training activities. According to DOD, the clarifications are to (1) grant test ranges a 3-year extension from complying with the Clean Air Act requirement when new units or weapons systems are moved to a range and (2) exempt military munitions at training ranges from provisions of the Comprehensive Environmental Response, Compensation, and Liability Act and Resource Conservation and Recovery Act to avoid the classification of munitions as solid waste, which could require expensive cleanup activities. OSD’s report does not address the department’s plans to improve the readiness reporting system, called the Global Status of Resources and Training System, as required by the mandate. 
According to a knowledgeable OSD official, the Global Status of Resources and Training System is not the system to capture encroachment impacts that are long-term in nature; rather, it addresses short-term issues. Instead, the official said, the department is working on a Defense Readiness Reporting System, which is expected to capture range availability as well as other factors that may constrain training. However, OSD did not address either system in its report. While OSD’s Implementation of the Department of Defense Training Range Comprehensive Plan report addresses some of the mandated requirements, it does not fulfill the requirement for an inventory identifying range capacities or training constraints caused by encroachment or other factors, such as a lack of adequate maintenance or modernization; a comprehensive training range plan to address encroachment on military training ranges; an adequate assessment of current and future training range requirements; a sufficient evaluation of the adequacy of current DOD resources, including virtual and constructive assets, to meet current and future training range requirements; recommendations for legislative or regulatory changes to address training constraints; or plans to improve the readiness reporting system. Instead, the report provides the current status of the services’ various sustainable range efforts in the United States. Currently, OSD’s inventory consists of individual services’ inputs as of November 2003, but it is not a tool that commanders could use to identify range availability, regardless of service ownership, and schedule the best available resources to provide required training. In addition, OSD apparently has no planned method to continuously maintain this inventory. Without an integrated training range inventory that could be continuously updated and available at all command levels, the services may not have knowledge of or access to the best available training resources. 
The lack of such an inventory may also significantly limit the services’ ability to support joint training. Also, without such an inventory, it will be difficult for OSD and the services to develop a comprehensive plan to address these issues and ensure range sustainability in support of current and future training range requirements. As a result, even though various services’ initiatives are underway to better address encroachment or other factors and ensure sustainability of military training ranges for future use, OSD’s training range report did not include a comprehensive plan to address training constraints in the United States and overseas—as required by section 366. Without a plan that includes quantifiable goals and milestones for tracking planned actions and measuring progress, and projected funding requirements, OSD and the services may not be able to address the ever-growing issues associated with encroachment and measure the progress in addressing these issues. Similarly, OSD’s training range report did not fully assess current and future training range requirements or fully evaluate the adequacy of current resources to meet these requirements. Without these types of analyses, OSD and the services will not be able to identify shortfalls in training resources and allocate those resources more effectively, and may continue to maintain ranges that are no longer needed to meet current training requirements. Finally, the report did not include any recommendations for legislative or regulatory changes to address training constraints or a plan to improve the readiness reporting system to reflect the impact on readiness caused by training constraints due to limitations on the use of training ranges. 
Without an inventory identifying range capacities or training constraints caused by encroachment or other factors or a comprehensive training range plan to address training constraints caused by limitations on the use of training ranges, OSD and the services will continue to rely on incomplete information to support funding requests and legislative or regulatory changes to address these issues. To serve as the baseline for the comprehensive training range plan required by section 366, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness and the secretaries of the military services to jointly develop an integrated training range database that identifies available training resources, specific capacities and capabilities, and training constraints caused by limitations on the use of training ranges, which could be continuously updated and shared among the services at all command levels, regardless of service ownership. To improve future reports, we also recommend that OSD provide a more complete report to the Congress to fully address the requirements specified in the section 366 mandate by (1) developing a comprehensive plan that includes quantifiable goals and milestones for tracking planned actions and measuring progress, and projected funding requirements to more fully address identified training constraints, (2) assessing current and future training range requirements and evaluating the adequacy of current resources to meet these requirements, and (3) developing a readiness reporting system to reflect the impact on readiness caused by training constraints due to limitations on the use of training ranges. 
In commenting on a draft of this report, the Deputy Under Secretary of Defense for Readiness disagreed with our finding that OSD’s training range report failed to address the congressional reporting requirements mandated in section 366 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 and disagreed with three of our four recommendations. As this report clearly points out, there are numerous instances where OSD’s report did not address congressionally mandated reporting requirements. Our recommendations were intended to help DOD address all requirements specified in section 366. Without their implementation, DOD will continue to rely on incomplete information to support funding requests and legislative or regulatory changes to address encroachment and other factors. DOD disagreed with our first recommendation—to jointly develop an integrated training range database that identified available training resources, specific capacities and capabilities, and training constraints, which could be continuously updated and shared among all the services at all command levels regardless of service ownership. As discussed in our report, OSD’s inventory consists of individual services’ inputs as of November 2003 and is not a tool that commanders could use to identify range availability, regardless of service ownership, and schedule the best available resources to provide required training. Further, as noted in our report, the individual service submissions continue to provide limited information on how training has been constrained by encroachment or other factors. In contrast, section 366 clearly requires the Secretary of Defense to develop and maintain an inventory that identifies all available operational training ranges, all training range capacities and capabilities, and any training constraints at each training range. 
DOD’s suggestion that our draft report recommended that DOD should initiate a “massive new database” effort to allow OSD management of individual range activities is without merit. Our recommendation merely specified section 366 legislative requirements that were not found in OSD’s training range report to the Congress. Also, DOD’s disagreement with our first recommendation seems inconsistent with other comments DOD officials have made, as noted in this and other GAO reports regarding military training range inventories. In commenting on this report, DOD specifically stated that it agreed that, as a long-term goal, the services’ inventory systems should be linked to support joint use. In commenting on a prior report, DOD stated that the services were developing a statement of work in order to contract with a firm capable of delivering an enterprise-level web-enabled system that will allow cross-service, as well as intra-service, training use of inventory data. Further, in a 2003 study, the U.S. Special Operations Command stated that all components needed to create master range plans that addressed their current and future range issues and solutions. The command also recommended that plans identify and validate training requirements and facilities available and define the acceptable limits of workarounds. Without an integrated training range inventory, we continue to believe that it will be difficult for OSD and the services to develop a comprehensive plan and track its progress in addressing training constraints and ensuring range sustainability. DOD generally concurred with our second recommendation—to develop a comprehensive plan that includes quantifiable goals and milestones for tracking planned actions and measuring progress, and projected funding requirements to more fully address identified training constraints. 
However, the department’s comments suggest it plans simply to summarize ongoing efforts of individual services rather than formulate a comprehensive strategy for addressing training constraints. Without a plan that includes quantifiable goals and milestones for tracking planned actions and measuring progress, and projected funding requirements, OSD and the services may not be able to address the ever-growing issues associated with encroachment and other training constraints and measure the progress in addressing these issues. First, a summary of ongoing efforts does not fully address the requirements of section 366, which calls for a comprehensive plan for using existing authorities available to the Secretaries of Defense and the military departments to address training constraints caused by limitations on the use of military lands, marine areas, and airspace that are available in the United States and overseas for training. Second, it directly contradicts DOD’s concurrence with recommendations made in our June 2002 report, in which we specifically recommended that the department develop a plan with the same elements subsequently required by the mandate. Third, it contradicts a January 2003 report of the Southwest Region Range Sustainability Conference sponsored by the Deputy Under Secretary of Defense for Readiness and the Deputy Under Secretary of Defense for Installations and Environment. The conference report recommended a national range sustainability and infrastructure plan—which could also address section 366 requirements—to include range requirements, overall vision, current and future requirements, and encroachment issues. 
Without a comprehensive plan that includes quantifiable goals and milestones for tracking planned actions and measuring progress, and projected funding requirements, we continue to believe that OSD and the services may not be able to address the ever-growing issues associated with encroachment and other training constraints, and measure the progress in addressing these issues. DOD disagreed with our third recommendation—to assess current and future training range requirements and evaluate the adequacy of current resources to meet these requirements. It stated that it is inappropriate and impractical to include this level of detail in an OSD-level report and that the Congress is better served if the department describes, summarizes, and analyzes range requirements. Clearly, these statements are contradictory in that section 366 requires that OSD report on its assessment of current and future training range requirements and an evaluation of the adequacy of current DOD resources to meet current and future training requirements, which could be accomplished by providing the aforementioned description, summary, and analysis of range requirements. While the department’s training range report provided a description of the methodology used by each service to develop its requirements, it did not provide any detail regarding such analyses. Without these types of analyses, we continue to believe that OSD and the services will not be able to identify shortfalls in training resources and allocate those resources more effectively, and may continue to maintain ranges that are no longer needed to meet current training requirements. In addition, the department questioned why we did not examine detailed requirements work being done at each installation. While we agree with DOD that this type of examination could be useful, it is unclear why OSD’s report did not provide a discussion of the work underway at individual installations. 
While we may conduct such an examination in the future, section 366 did not specifically require us to do so, nor did it provide us sufficient time for such an examination. DOD disagreed with our fourth recommendation—to develop a readiness reporting system to reflect the impact on readiness caused by training constraints. DOD further stated that it was inappropriate to modify the Global Status of Resources and Training System report to address encroachment and that it plans to incorporate encroachment impacts on readiness into the Defense Readiness Reporting System. Our draft report recognized that the department does not believe that the Global Status of Resources and Training System is the system to capture encroachment impacts. Given that OSD’s training range reports are required to provide a status of efforts to address training constraints, it is unclear why OSD’s report did not provide an assessment of progress in this area. We continue to believe that future reports should provide the Congress with information on DOD’s progress toward improving readiness reporting—whether it is the Defense Readiness Reporting System as cited in DOD’s comments or another system—to reflect the impact on readiness caused by training constraints due to limitations on the use of training ranges, as required by section 366. We continue to believe our recommendations are valid and that, without their implementation, DOD will continue to rely on incomplete information to support funding requests and legislative or regulatory proposals to address encroachment and other training constraints, and will not be able to fully address the congressionally mandated requirements in section 366. The Deputy Under Secretary’s comments are included in appendix II. 
To determine the extent to which OSD’s training range inventory contains sufficient information to develop a comprehensive training range plan, we reviewed OSD’s inventory of the services’ training ranges to determine whether the inventory identified training capacities and capabilities, and constraints caused by encroachment or other factors for each training range. In addition, we reviewed the services’ inputs to OSD’s inventory and OSD’s report for a comprehensive training range plan. We also discussed OSD’s inventory and the services’ inputs and the need for a comprehensive training range plan with officials from the Office of the Director of Readiness and Training, Office of the Under Secretary of Defense, Personnel and Readiness; and a representative of the contractor, who compiled the report. Also, we reviewed two RAND studies on Air Force ranges and airspace. To determine the extent to which OSD’s Implementation of the Department of Defense Training Range Comprehensive Plan report meets other requirements mandated by section 366, we reviewed the report to determine if it contained an assessment of current and future training range requirements; an evaluation of the adequacy of current DOD resources, including virtual and constructive assets, to meet current and future training range requirements; recommendations for legislative or regulatory changes to address training constraints; and plans to improve the readiness reporting system. To obtain further clarification and information, we reviewed the individual submissions from the Army, Navy, Marine Corps, and Air Force. 
We also discussed OSD’s report and the services’ inputs with officials from the Office of the Director of Readiness and Training, Office of the Under Secretary of Defense, Personnel and Readiness; the Office of the Director, Training Directorate, Training Simulations Division, Office of the Deputy Chief of Staff, Department of the Army; the Navy Ranges and Fleet Training Branch, Fleet Readiness Division, Fleet Readiness and Logistics, Office of the Deputy Chief of Naval Operations; the Range and Training Area Management Division, Training and Education Command, Headquarters, Marine Corps; and the Office of the Director of Ranges and Airspace, Air and Space Operations, Headquarters, Air Force. We also met with a representative of the contractor who compiled the report. To determine what guidance the services were given when preparing their submission to the department’s report, we also reviewed the January 28, 2003, memorandum from the Under Secretary of Defense for Personnel and Readiness to the military services. We also reviewed DOD’s Sustainment of Ranges and Operating Areas directive that establishes policy and assigns responsibilities for the sustainment of test and training ranges and the department’s Strategic Plan for Transforming DOD Training and Training Transformation Implementation Plan. We assessed the reliability of the data in OSD’s report by (1) reviewing existing information about military training ranges, (2) interviewing OSD and service officials knowledgeable about the report and training ranges, and (3) comparing the data elements in the report with known statistics and information. We determined that the data were sufficiently reliable for the purposes of this report. We are sending copies of this report to the appropriate congressional committees, as well as the Secretaries of Defense, the Army, the Navy, and the Air Force, and the Director, Office of Management and Budget. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this letter, please contact me at (202) 512-8412, or my Assistant Director, Mark A. Little, at (202) 512-4673. Patricia J. Nichol, Tommy Baril, Steve Boyles, and Ann DuBois were major contributors to this report. SEC. 366. Training Range Sustainment Plan, Global Status of Resources and Training System, and Training Range Inventory. (a) PLAN REQUIRED—(1) The Secretary of Defense shall develop a comprehensive plan for using existing authorities available to the Secretary of Defense and the Secretaries of the military departments to address training constraints caused by limitations on the use of military lands, marine areas, and airspace that are available in the United States and overseas for training of the Armed Forces. (2) As part of the preparation of the plan, the Secretary of Defense shall conduct the following: (A) An assessment of current and future training range requirements of the Armed Forces. (B) An evaluation of the adequacy of current Department of Defense resources (including virtual and constructive training assets as well as military lands, marine areas, and airspace available in the United States and overseas) to meet those current and future training range requirements. (3) The plan shall include the following: (A) Proposals to enhance training range capabilities and address any shortfalls in current Department of Defense resources identified pursuant to the assessment and evaluation conducted under paragraph (2). (B) Goals and milestones for tracking planned actions and measuring progress. (C) Projected funding requirements for implementing planned actions. (D) Designation of an office in the Office of the Secretary of Defense and in each of the military departments that will have lead responsibility for overseeing implementation of the plan. 
(4) At the same time as the President submits to Congress the budget for fiscal year 2004, the Secretary of Defense shall submit to Congress a report describing the progress made in implementing this subsection, including— (A) the plan developed under paragraph (1); (B) the results of the assessment and evaluation conducted under paragraph (2); and (C) any recommendations that the Secretary may have for legislative or regulatory changes to address training constraints identified pursuant to this section. (5) At the same time as the President submits to Congress the budget for each of fiscal years 2005 through 2008, the Secretary shall submit to Congress a report describing the progress made in implementing the plan and any additional actions taken, or to be taken, to address training constraints caused by limitations on the use of military lands, marine areas, and airspace. (b) READINESS REPORTING IMPROVEMENT—Not later than June 30, 2003, the Secretary of Defense, using existing measures within the authority of the Secretary, shall submit to Congress a report on the plans of the Department of Defense to improve the Global Status of Resources and Training System to reflect the readiness impact that training constraints caused by limitations on the use of military lands, marine areas, and airspace have on specific units of the Armed Forces. (c) TRAINING RANGE INVENTORY—(1) The Secretary of Defense shall develop and maintain a training range inventory for each of the Armed Forces— (A) to identify all available operational training ranges; (B) to identify all training capacities and capabilities available at each training range; and (C) to identify training constraints caused by limitations on the use of military lands, marine areas, and airspace at each training range. 
(2) The Secretary of Defense shall submit an initial inventory to Congress at the same time as the President submits the budget for fiscal year 2004 and shall submit an updated inventory to Congress at the same time as the President submits the budget for fiscal years 2005 through 2008. (d) GAO EVALUATION—The Secretary of Defense shall transmit copies of each report required by subsections (a) and (b) to the Comptroller General. Within 60 days after receiving a report, the Comptroller General shall submit to Congress an evaluation of the report. (e) ARMED FORCES DEFINED—In this section, the term ‘Armed Forces’ means the Army, Navy, Air Force, and Marine Corps.
Section 366 of the National Defense Authorization Act for Fiscal Year 2003 required the Secretary of Defense to develop a report outlining a comprehensive plan to address training constraints caused by limitations on the use of military lands, marine areas, and air space that are available in the United States and overseas for training. The foundation for that plan is an inventory identifying training resources, capacities and capabilities, and limitations. In response to section 366, this report discusses the extent to which (1) the Office of the Secretary of Defense's (OSD) training range inventory is sufficient for developing the comprehensive training range plan and (2) OSD's 2004 training range report meets other requirements mandated by section 366. OSD's training range inventory does not yet contain sufficient information to use as a baseline for developing the comprehensive training range plan required by section 366. As a result, OSD's training range report does not lay out a comprehensive plan to address training constraints caused by limitations on the use of military lands, marine areas, and air space that are available in the United States and overseas for training. First, OSD's training range inventory does not fully identify available training resources, specific capacities and capabilities, and existing training constraints caused by encroachment or other factors to serve as the baseline for the comprehensive training range plan. Second, OSD and the services' inventories are not integrated, readily available, or accessible by potential users so that commanders can schedule the best available resources to provide the required training. Third, OSD's training range report does not include a comprehensive plan with quantifiable goals or milestones for tracking planned actions to measure progress, or projected funding requirements needed to implement the plan. 
Instead, the report provides the current status of the four services' various sustainable range efforts in the United States, which, if successful, should over time provide a more complete picture of the magnitude and impact of constraints on training. OSD's training range report does not fully address other requirements mandated by section 366. For example, the report does not: (1) fully assess current and future training range requirements; (2) fully evaluate the adequacy of current resources to meet current and future training range requirements in the United States and overseas; (3) identify recommendations for legislative or regulatory changes to address training constraints, even though the Department of Defense (DOD) submitted legislative changes for congressional consideration on April 6, 2004; or (4) contain plans to improve readiness reporting.
Four federal banking regulators—FDIC, the Federal Reserve, OCC, and OTS—oversee the nation’s banks and thrifts to ensure they are operating in a safe and sound manner. The failure of more than 2,900 depository institutions during the 1980s and early 1990s led to the passage of FDICIA, which amended FDIA to require regulators to take action against institutions that failed to meet minimum capital levels and granted regulators several authorities to address noncapital deficiencies at the institutions they regulate. FDICIA also required FDIC to establish a system to assess the risk of depository institutions insured by the deposit insurance fund. FDIC insures the deposits of all federally insured depository institutions, generally up to $100,000 per depositor, and monitors their risk to the deposit insurance fund. In addition, FDIC is the primary regulator for state-chartered nonmember banks (that is, state-chartered banks that are not members of the Federal Reserve System), the Federal Reserve is the primary regulator for state-chartered member banks (state-chartered banks that are members of the Federal Reserve System) and bank holding companies, OCC is the primary regulator of federally chartered banks, and OTS is the primary regulator of federally and state-chartered thrifts and thrift holding companies. Federal regulators have defined several categories of risk to which depository institutions are exposed—credit risk, compliance risk, legal risk, liquidity risk, market risk, operational risk, reputational risk, and strategic risk (see table 1). Banks and thrifts, in conjunction with regulators, must continually manage risks to ensure their safe and sound operation and protect the well-being of depositors—those individuals and organizations that act as creditors by “loaning” their funds in the form of deposits to institutions to engage in lending and other activities. 
Regulators are responsible for supervising the activities of banks and thrifts and taking corrective action when these activities and their overall performance present supervisory concerns or have the potential to result in financial losses to the insurance fund or violations of law. Losses to the insurance fund may occur when an institution does not have sufficient assets to reimburse customers’ insured deposits and FDIC’s administrative expenses in the event of closure or merger. Regulators assess the condition of banks and thrifts through off-site monitoring and on-site examinations. Examiners use Reports of Condition and Income (Call Report) and Thrift Financial Report data to remotely assess the financial condition of banks and thrifts, respectively, and to plan the scope of on-site examinations. As part of on-site examinations, regulators more closely assess institutions’ exposure to risk and assign institutions ratings, known as CAMELS ratings, that reflect their condition in six areas: capital, asset quality, management, earnings, liquidity, and sensitivity to market risk. Each component is rated on a scale of 1 to 5, with 1 the best and 5 the worst. The component ratings then are used to develop a composite rating also ranging from 1 to 5. Institutions with composite ratings of 1 or 2 are considered to be in satisfactory condition, while institutions with composite ratings of 3, 4, or 5 exhibit varying levels of safety and soundness problems. Also as part of the examination and general supervision process, regulators may direct an institution to address issues or deficiencies within specified time frames. When regulators determine that a bank or thrift’s condition is unsatisfactory, they may take a variety of supervisory actions, including informal and formal enforcement actions, to address identified deficiencies and have some discretion in deciding which actions to take. 
Regulators typically take progressively stricter actions against more serious weaknesses. Informal actions generally are used to address less severe deficiencies or when the regulator has confidence that the institution is willing and able to implement changes. Informal actions include, for example, commitment letters detailing an institution’s commitment to undertake specific remedial measures, board resolutions adopted by the institution’s board of directors at the request of its regulator, and memorandums of understanding. Informal actions are not public agreements (meaning, regulators do not make them public through their Web sites or other channels) and are not enforceable by the imposition of sanctions. In comparison, formal enforcement actions are publicly disclosed by regulators and enforceable and are used to address more severe deficiencies or when the regulator has limited confidence in an institution’s ability to implement changes. Formal enforcement actions include, for example, PCA directives, cease-and-desist orders under section 8(b) of FDIA, removal and prohibition orders under section 8(e) of FDIA, civil money penalties, and termination of an institution’s deposit insurance. All four regulators have policies and procedures that describe for examiners the circumstances under which they should recommend the use of informal and formal enforcement actions to address identified deficiencies. Each federal banking regulator also has established a means through which senior management of the applicable federal regulator reviews all enforcement recommendations to ensure that the proposed actions are the best and most efficient means to bring an institution back into compliance with applicable laws, regulations, and best practices. Section 38 of FDIA requires regulators to categorize depository institutions into five categories on the basis of their capital levels. 
Regulators use three different capital measures to determine an institution’s capital category: (1) a total risk-based capital measure, (2) a tier 1 risk-based capital measure, and (3) a leverage (or non-risk-based) capital measure (see table 2). To be considered well capitalized or adequately capitalized, an institution must meet or exceed all three ratios for the applicable capital category. Institutions are considered undercapitalized or worse if they fail to meet just one of the ratios necessary to be considered at least adequately capitalized. For example, an institution with 9 percent total risk-based capital and 6 percent tier 1 risk-based capital but only 3.5 percent leverage capital would be undercapitalized for PCA purposes. Under section 38, regulators must take increasingly severe supervisory actions as an institution’s capital level deteriorates. For example, all undercapitalized institutions are required to implement capital restoration plans to restore capital to at least the adequately capitalized level, and regulators are generally required to close critically undercapitalized institutions within a 90-day period. Section 38 allows an exception to the 90-day closure rule if both the primary regulator and FDIC concur and document why some other action would better achieve the purpose of section 38—resolving the problems of institutions at the least possible long-term cost to the deposit insurance fund. Resolving failed or failing institutions is one of FDIC’s primary responsibilities under PCA. In selecting the least costly resolution alternative, FDIC’s process is to compare the estimated cost of liquidation—basically, the amount of insured deposits paid out minus the net realizable value of an institution’s assets—with the amounts that potential acquirers bid for the institution’s assets and deposits. 
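The capital-category decision rule described above (an institution must meet all three ratios for a category, and failing any one of them drops it to a lower category) can be sketched in code. This is a simplified illustration, not regulatory text: the threshold values are the standard section 38 minimums summarized in table 2, the well-capitalized test omits the additional condition that the institution not be subject to a capital directive, and the critically undercapitalized test (which in the regulations keys off a tangible-equity ratio of 2 percent or less) is approximated here with the leverage measure.

```python
def pca_category(total_rbc: float, tier1_rbc: float, leverage: float) -> str:
    """Classify an institution's PCA capital category from its three
    capital ratios, expressed as percentages. Simplified sketch."""
    # An institution must meet ALL three ratios to qualify for a category.
    if total_rbc >= 10.0 and tier1_rbc >= 6.0 and leverage >= 5.0:
        return "well capitalized"
    if total_rbc >= 8.0 and tier1_rbc >= 4.0 and leverage >= 4.0:
        return "adequately capitalized"
    if total_rbc >= 6.0 and tier1_rbc >= 3.0 and leverage >= 3.0:
        return "undercapitalized"
    # Simplified: the actual rule defines critically undercapitalized as
    # tangible equity of 2 percent or less of total assets.
    if leverage > 2.0:
        return "significantly undercapitalized"
    return "critically undercapitalized"


# The report's example: 9 percent total and 6 percent tier 1 risk-based
# capital, but only 3.5 percent leverage capital, fails one of the three
# ratios needed to be at least adequately capitalized.
print(pca_category(9.0, 6.0, 3.5))  # undercapitalized
```

This mirrors the example in the text: the institution clears the total and tier 1 risk-based thresholds for adequately capitalized but misses the 4 percent leverage minimum, so it is undercapitalized for PCA purposes.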
FDIC has resolved failed or failing institutions using three basic methods: (1) directly paying depositors the insured amount of their deposits and disposing of the failed institution’s assets (depositor payoff and asset liquidation); (2) selling only the institution’s insured deposits and certain other liabilities, with some of its assets, to an acquirer (insured deposit transfer); and (3) selling some or all of the failed institution’s deposits, certain other liabilities, and some or all of its assets to an acquirer (purchase and assumption). Within this third category, many variations exist based on specific assets that are offered for sale. For example, some purchase and assumption resolutions also have included loss-sharing agreements—an arrangement whereby FDIC, in order to sell certain assets with the intent of limiting losses to the deposit insurance fund, agrees to share with the acquirer the losses on those assets. Section 38 also authorizes several non-capital-based supervisory actions designed to allow regulators some flexibility in achieving the purpose of section 38. Specifically, under section 38(g) regulators are permitted to reclassify or downgrade an institution’s capital category to apply more stringent operating restrictions or requirements if they determine, after notice and opportunity for a hearing, that an institution is in an unsafe and unsound condition or engaging in an unsafe or unsound practice. Under section 38(f)(2)(F) regulators can require an institution to make improvements in management, for example, by dismissing officers and directors who are not able to materially strengthen an institution’s ability to become adequately capitalized. Section 39 directs regulatory attention to noncapital areas of an institution’s operations and activities in three main safety and soundness areas: operations and management; compensation; and asset quality, earnings, and stock valuation. 
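The least-cost comparison described above can be sketched as follows. This is an illustrative assumption, not FDIC's actual model: the liquidation-cost formula follows the report's description (insured deposits paid out minus the net realizable value of the institution's assets), but the idea that each bid reduces to a single net cost to the insurance fund, and the function and parameter names, are simplifications for illustration.

```python
def liquidation_cost(insured_deposits: float, net_asset_value: float) -> float:
    """Estimated cost of a straight depositor payoff and asset liquidation:
    insured deposits paid out minus the net realizable value of assets."""
    return insured_deposits - net_asset_value


def least_cost_resolution(insured_deposits: float,
                          net_asset_value: float,
                          bids: dict) -> str:
    """Select the resolution alternative with the lowest estimated cost to
    the deposit insurance fund. `bids` maps an acquirer's offer (e.g., a
    purchase and assumption or insured deposit transfer) to its estimated
    net cost to the fund -- an illustrative simplification."""
    options = {"liquidation": liquidation_cost(insured_deposits,
                                               net_asset_value)}
    options.update(options=None) if False else options.update(bids)
    # Return the cheapest alternative.
    return min(options, key=options.get)
```

For example, with $100 million of insured deposits and assets worth $70 million, liquidation would cost an estimated $30 million, so a purchase-and-assumption bid costing the fund $25 million would be selected instead.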
As originally enacted under FDICIA, section 39 required regulators to develop and implement standards in these three areas, as well as develop quantitative standards for asset quality and earnings. However, in response to concerns about the potential regulatory burden of section 39 on banks and thrifts, section 318 of the Riegle Community Development and Regulatory Improvement Act of 1994 amended section 39 to allow the standards to be issued either by regulation (as originally specified in FDICIA) or by guideline and eliminated the requirement to establish quantitative standards for asset quality and earnings. The regulators chose to prescribe the standards through guideline rather than regulation, essentially providing them with flexibility in how and when they would take action against institutions that failed to meet the standards. Under section 39, if a regulator determines that an institution has failed to meet a prescribed standard, the regulator may require that the institution file a safety and soundness plan specifying the steps it will take to correct the deficiency. If the institution fails to submit an acceptable plan or fails to materially implement or adhere to an approved plan, the regulator must require the institution, through the issuance of a public order, to correct identified deficiencies and may take other enforcement actions pending the correction of the deficiency. In addition to adding sections 38 and 39 to FDIA to address capital inadequacy and safety and soundness problems at depository institutions, FDICIA also required FDIC to establish a system—the deposit insurance system—to assess the risk of federally insured depository institutions and charge premiums to finance a deposit insurance fund meant to protect depositors in the event of future bank and thrift failures. 
At the urging of FDIC, in February 2006 Congress enacted legislation granting the regulator authority to make substantive changes to the deposit insurance system, including the way it assesses the risk of institutions and determines their premiums. In July 2006, FDIC issued its proposed rule outlining proposed changes to the deposit insurance system and opened a public comment period. FDIC adopted a final rule in November 2006. Recalculated premiums and other changes reflected in the final rule were effective January 1, 2007. As of September 30, 2006, FDIC insured over 60 percent of all domestic deposits, totaling more than $4 trillion. The nation’s banks and thrifts have benefited from a strong economy since 1992—as demonstrated by steady increases in several of the industry’s primary performance indicators and growing numbers of institutions meeting or exceeding minimum capital levels. For example, in 2005, the industry reported record total assets ($10 trillion in 2005) and net income ($133 billion in 2005) (see fig. 1). Similarly, the industry’s two primary indicators of profitability—returns on assets and equity—have improved since 1992 and remain near record highs. As a result of institutions’ overall strong financial performance, few have failed to meet minimum capital requirements since 1992, the year regulators implemented PCA. The percentage of well-capitalized institutions has increased from 93.99 percent in 1992 to 99.71 percent in 2005, while the percentage of undercapitalized and lower-rated institutions generally has declined (see fig. 2). For example, the percentage of significantly undercapitalized institutions declined from 2.74 percent (394 institutions) to 0.06 percent (5 institutions) in this period, while the percentage of critically undercapitalized institutions fell from 1.64 percent to 0.01 percent (236 to 1). 
Further, the percentage of institutions carrying capital in excess of the well-capitalized leverage capital minimum (that is, 5 percent or more of leverage capital) also increased from 84 percent of all reporting institutions in 1992 to 94 percent in 2005. The percentage of institutions carrying at least two times as much capital (200 percent or more of the well-capitalized leverage capital minimum) increased from 25 percent to 41 percent over the period. According to regulators, the improved financial condition of banks and thrifts may have contributed to the sharp decline in the number of problem institutions (those with composite CAMELS ratings of 4 or 5), from 1,063 in 1992 to 74 in 2005 (see fig. 3). Similarly, regulators said that institutions’ improved financial condition may have also contributed to the significant decline in the number of failures and losses to the insurance fund since 1992 (see fig. 4). From 1992 through 2004, the number of failed banks and thrifts fell from 180 (with estimated losses to the insurance fund of $7.3 billion) to 4 (with no estimated losses). No bank or thrift failed from June 2004 through January 2007. In addition, regulators’ on-site presence at banks and thrifts increased beginning in the early 1990s, in part as a result of reforms enacted as a part of FDICIA that required regulators to conduct full-scope, on-site examinations for most federally insured institutions at least annually to help contain losses to the deposit insurance fund. Historical data show that the average interval between full-scope, on-site examinations for all institutions peaked in 1986 when it reached 609 days. Subsequent to the enactment of FDICIA in December 1991, the average interval between examinations for all institutions declined to 373 days in 1992. 
Based on information we obtained from all four regulators, the average interval between examinations for all institutions generally has remained from 12 to 18 months since 1993 (the year after FDICIA requirements were implemented) and in many instances has been even shorter, especially for problem institutions. For the sample of banks and thrifts we reviewed, we found that regulators generally implemented PCA in accordance with section 38. For example, when institutions failed to meet minimum capital requirements, regulators required them to submit capital restoration plans or imposed restrictions through PCA directives or other enforcement actions. Regulators generally agreed that capital is a lagging indicator of poor performance and therefore other measures are often used to address deficiencies upon recognition of an institution’s troubled status. This contention was supported by the fact that in a majority of the cases we reviewed, institutions had one or more informal or formal enforcement actions in place prior to becoming undercapitalized. Most of the material loss reviews conducted by IGs also found that regulators appropriately used PCA provisions in most cases, although in two reviews they found that regulators could have used PCA sooner. Based on a sample of cases, we found that regulators generally acted appropriately to address problems at institutions that failed to meet minimum capital requirements by taking increasingly severe enforcement actions as these institutions’ capital deteriorated, as required by section 38. Institutions that fail to meet minimum capital levels face several mandatory restrictions or requirements under section 38 (see fig. 5). Specifically, section 38 requires an undercapitalized institution to submit a capital restoration plan detailing how it is going to become adequately capitalized. 
When an institution becomes significantly undercapitalized, regulators are required to take more forceful corrective measures, including requiring the sale of equity or debt, or under certain circumstances requiring an institution to be acquired by or merged with another institution; restricting otherwise allowable transactions with affiliates; and restricting the interest rates paid on deposits. In addition to these actions, regulators also may impose other discretionary restrictions or requirements outlined in section 38 that they deem appropriate. After an institution becomes critically undercapitalized, regulators have 90 days to either place the institution into receivership or conservatorship (that is, close the institution) or to take other actions that would better prevent or reduce long-term losses to the insurance fund. Regulators also have some discretion in how they enforce PCA restrictions and requirements—they may issue a PCA directive (a formal action that requires an institution to take one or more specified actions to return to required minimum capital standards) or delineate the restrictions and requirements in a new or modified enforcement order, such as a section 8(b) cease-and-desist order. For the cases we reviewed, consistent with our 1996 report, we found that regulators generally implemented PCA in accordance with section 38, the implementing regulations, and their policies and procedures. Regulators used PCA to address capital problems at 18 of 24 institutions we sampled from among those that fell below one of the three lowest PCA capital thresholds (that is, undercapitalized, significantly undercapitalized, or critically undercapitalized based on Call or Thrift Financial Report data). (See table 3.) 
In each of the 18 cases in which regulators used PCA to address capital deficiencies, the relevant regulator identified the institution as having fallen below one of the three lowest PCA capital thresholds and in most cases required the institution to address deficiencies through a capital restoration plan or a PCA directive or other enforcement order. Regulators’ use of PCA is illustrated by the following examples: From the end of March 2002 to the end of June 2002, Rock Hill Bank and Trust’s capital level declined from well capitalized to critically undercapitalized. In response, FDIC issued a notice informing the bank of the restrictions applicable to critically undercapitalized institutions under section 38. Within approximately 2 months of first becoming critically undercapitalized, the bank entered into a purchase and assumption agreement with another institution. Federal Reserve examiners required Federal Reserve Open Bank 2 to submit a capital restoration plan more than a year and a half prior to the bank’s failure to meet minimum capital requirements. Federal Reserve examiners, prepared to issue a PCA directive when the bank’s capital fell to significantly undercapitalized in March 2005, noted in a June 2005 report of examination that the bank had taken steps to raise its capital level to undercapitalized, and then issued a PCA directive requiring the bank to submit a capital restoration plan. By September 2005, the bank was well capitalized by PCA standards. OCC examiners notified First National Bank (Lubbock) of its critically undercapitalized status shortly after the closing date of the bank’s June 30, 2003, Call Report filing. In November 2003, the bank was sold to a bank holding company and recapitalized. Concurrent with the bank’s June 30, 2004, Call Report filing date, OCC conducted a full-scope examination and found the bank to be critically undercapitalized and directed it to file a capital restoration plan. 
The bank merged into an affiliate in early 2005, in accordance with its capital restoration plan. After Enterprise FSB’s capital level declined to undercapitalized in September 2001, OTS issued a PCA directive that required the institution to submit a capital restoration plan and make arrangements to sell or merge with another institution. On several occasions, OTS modified its original PCA directive to allow additional time to process the institution’s merger application. With the exception of one quarter in which Enterprise FSB’s capital level increased to well capitalized, the institution remained undercapitalized until the merger was completed in early 2003. Regulators said that PCA was most effective when it was used to close or require the sale or merger of institutions as a means of minimizing or preventing losses to the insurance fund. Fifteen of the 18 institutions we reviewed recapitalized, merged, or closed without losses to the insurance fund. The remaining three institutions failed with losses to the insurance fund: Pulaski Savings Bank ($1 million), New Century Bank ($5 million), and Southern Pacific Bank ($93 million). The failure of Southern Pacific Bank resulted in material losses to the insurance fund. In its material loss review for the bank, the FDIC IG noted that even though FDIC examiners applied PCA in accordance with regulatory guidelines, other factors, including the bank’s failure to abide by FDIC recommendations related to the administration of its loan program, resulted in an overstatement of both net income and capital and limited PCA’s effectiveness in minimizing losses to the insurance fund. In our review of FDIC’s reports of examination and other information for the bank, we found that FDIC examiners continually informed the bank of its capital status and made repeated requests to management to recapitalize. 
However, the bank’s reported capital level never fell to critically undercapitalized—the point at which FDIC has the authority to close an institution under section 38. In 6 of the 24 sampled cases we reviewed, we determined that use of PCA was not required to address declines in capital reported on quarterly Call and Thrift Financial Reports (see table 4). Although PCA requires regulators to take regulatory action when an institution fails to meet established minimum capital requirements, capital is a lagging indicator and thus not necessarily a timely predictor of problems at banks and thrifts. Although capital is an essential and accepted measure of an institution’s financial health, it does not typically begin to decline until an institution has experienced substantial deterioration in other areas, such as asset quality and the quality of bank management. As a result, regulatory actions focused solely on capital may have limited effects because of the extent of deterioration that may have already occurred in other areas. All four regulators generally agreed that by design, PCA is not a tool that can be used upon early recognition of an institution’s troubled status—in all of the cases we examined, regulators took steps, in addition to PCA, to address institutions’ troubled conditions. For example, 12 of the 18 banks and thrifts subject to PCA that we examined experienced a decline in their CAMELS ratings to composite ratings of 4 or 5 prior to or generally concurrent with becoming undercapitalized. CAMELS ratings measure an institution’s performance in six areas—capital, asset quality, management, earnings, liquidity, and sensitivity to market risk. These ratings are a key product of regulators’ on-site monitoring of institutions, providing information on the condition and performance of banks and thrifts, and can be useful in predicting their failure. 
The FDIC IG found a similar trend among the banks it examined as part of an evaluation of FDIC’s implementation of PCA. All of the 18 institutions we examined also appeared on at least one of three regulator watch lists—the FDIC problem institutions list, the FDIC resolution cases list, and the FDIC projected failure list—prior to or concurrent with becoming undercapitalized (see fig. 6). Regulators use these and their own watch lists to monitor the status of troubled institutions and, in some cases, ensure their timely resolution (that is, facilitating the merger or closure of institutions to prevent losses to the insurance fund); the lists were another means through which regulators monitored and addressed problems or potential problems at the 18 institutions prior to declines in PCA capital categories. Consistent with banks and thrifts exhibiting declining CAMELS ratings and appearing on one or more watch lists prior to or concurrent with becoming undercapitalized, at least 15 of the 18 banks and thrifts that we reviewed had informal or formal enforcement actions in place prior to becoming undercapitalized. Although we did not examine the effectiveness of these prior actions in addressing deficiencies, the following examples illustrate the types and numbers of enforcement actions regulators took at some of the institutions in our sample. Although FDIC Open Bank 1 and FDIC examiners disagreed over the bank’s capital status, FDIC required the bank’s board of directors to execute a board resolution to address certain safety and soundness deficiencies identified as part of an examination (see fig. 7). When the bank failed to adequately address the identified deficiencies, FDIC issued a cease-and-desist order. When New Century Bank opened in July 1999, the Federal Reserve, the state regulator, and FDIC all required the bank to maintain capital in excess of the PCA well-capitalized minimums to obtain a state charter and FDIC insurance (see fig. 8). 
Throughout its existence, the bank not only failed to maintain these capital levels, but also failed to remain adequately capitalized by PCA standards. The Federal Reserve attempted to address these capital and other safety and soundness deficiencies through PCA directives and other formal enforcement orders. When the bank proved incapable of maintaining minimum capital levels, the state regulator closed it and appointed FDIC as receiver. OCC examiners identified Compubank as posing serious safety and soundness concerns related to earnings when the bank was well capitalized by PCA standards (see fig. 9). The bank had high operating losses because of high overhead expenses caused by expanding operations in anticipation of high growth. As a result, OCC required the bank to enter into a written agreement, which stipulated that the bank implement a capital restoration plan and develop a contingency plan to sell, merge, or liquidate. Five months later, the bank reported that it was critically undercapitalized by PCA standards. The bank began the self-liquidation process and closed in June 2002. Approximately 5 months before Georgia Community Bank became undercapitalized, OTS and the institution entered into a supervisory agreement in response to regulator concerns about the institution’s asset quality and management (see fig. 10). When the institution reported it was significantly undercapitalized, OTS issued a PCA directive; however, the institution was unable to recapitalize and as a result, it merged into another institution in July 2005. We also reviewed material loss reviews of all institutions that failed with material losses to the insurance fund—losses that exceed $25 million or 2 percent of an institution’s assets, whichever is greater—from 1992 through 2005 and in which regulators used PCA to address capital problems (see table 5). 
In 12 of these 14 cases, the relevant IG found that PCA was applied appropriately—meaning that when institutions failed to meet minimum capital requirements, regulators required that they submit capital restoration plans and adhere to restrictions and requirements in PCA directives or other enforcement orders. Regulators’ appropriate use of PCA at institutions that failed with material losses is demonstrated by the following examples: According to the FDIC IG’s material loss review on Connecticut Bank of Commerce, FDIC used enforcement actions other than PCA directives to address the bank’s capital and other problems. Connecticut Bank of Commerce experienced capital deficiencies from 1991 through 1996 as a result of its poor asset quality. The bank operated under several cease-and-desist orders (1991, 1993, and 2001) and a memorandum of understanding, each of which contained requirements that the bank hold capital in excess of the required PCA minimums. Upon the detection of fraud in April 2002, the bank’s capital was immediately exhausted and it became critically undercapitalized. On June 25, 2002, FDIC issued a PCA directive ordering the dismissal of the bank’s chairman and president. On June 26, 2002, the Banking Commissioner for the State of Connecticut declared Connecticut Bank of Commerce insolvent, ordered it closed, and appointed FDIC as receiver. Prior to the implementation of PCA, Federal Reserve examiners attempted to restore Pioneer Bank to a safe and sound operating condition through written agreements entered into in 1986 and 1991. Despite these enforcement actions, the bank’s condition continued to deteriorate and in June 1994, the Federal Reserve issued a PCA directive requiring Pioneer Bank to become adequately capitalized through the sale of stock or to be acquired by or merge into another institution. 
When the bank was unable to comply with the terms of the PCA directive, the California State Banking Department issued a capital impairment order on July 6, 1994, and closed the bank on July 8, 1994. In its material loss review of Pioneer Bank, the Federal Reserve IG concluded that the level of supervisory actions taken by the Federal Reserve was within the range of acceptable actions for the problems the bank experienced. In October 2001, NextBank’s capital level dropped from well capitalized to significantly undercapitalized based on findings from an examination conducted by OCC’s Special Supervision and Fraud Division. The Department of the Treasury (Treasury) IG noted in its material loss review that the bank was at that point automatically subject to restrictions under PCA. In November 2001, OCC issued a PCA directive requiring the bank, among other things, to develop a capital restoration plan; file amended Call Reports; restrict new credit card account originations to prime lenders; and restrict asset growth, management fees, and brokered deposits. By December 2001, NextBank advised OCC that it would not be able to address its capital deficiency. In January 2002, NextBank and its parent company took steps to liquidate the bank. OCC appointed FDIC as receiver on February 7, 2002. While the Treasury IG did not find fault with OCC’s use of PCA to address NextBank’s capital deficiencies, it found that PCA’s effectiveness in NextBank’s situation was difficult to assess given the short amount of time that passed between when the bank’s capital declined below PCA minimum requirements and when the bank failed. In two cases, the relevant IG determined that the regulator’s use of PCA was not appropriate—First National Bank of Keystone (Keystone) and Superior Bank, regulated by OCC and OTS, respectively. 
In both cases, the Treasury IG found that the regulator failed to identify the institution’s true financial condition in a timely manner and thus could not apply PCA’s capital-based restrictions because the institution’s reported capital levels met or exceeded the minimum required levels. Because PCA was not implemented in a timely manner in these cases, it was not effective in containing losses to the deposit insurance fund. According to the Treasury IG, Keystone’s operating strategy entailed growth into the high-risk areas of subprime lending and selling loans for securitization. The bank’s growth in these areas occurred without adequate management systems and controls, and inaccurate financial records masked the bank’s true financial condition. At the time of the bank’s failure, allegations of fraud were under investigation. In its material loss review of the bank, the IG noted that if OCC had reclassified the bank’s capital category from well capitalized to adequately capitalized following an examination in late 1997, OCC could have restricted the bank’s use of brokered deposits and applied certain interest-rate restrictions in an effort to curb the bank’s growth 6 months before its capital levels showed serious signs of decline. Instead, these restrictions were not put in place until June 1998 when OCC required the bank to adjust its reported capital based on examination findings—this adjustment resulted in a downgrade in the bank’s capital category from well capitalized to undercapitalized and triggered PCA restrictions. Despite this finding, the IG noted that it was unclear whether reclassification would have actually had its desired effect—after the restrictions were triggered in June 1998, the bank continued to intentionally violate them. 
The Treasury IG’s material loss report on Superior Bank notes that while the immediate causes of the bank’s insolvency in 2001 appeared to be improper accounting and inflated valuations of residual assets, the causes could be attributed to a confluence of factors going back as early as 1993, including asset concentration, rapid growth into a new high-risk activity, deficient risk management systems, liberal underwriting of subprime loans, unreliable loan loss provisioning, economic factors affecting asset valuation, and lack of management response to supervisory concerns. Our 2002 testimony on the failure of Superior Bank and the IG’s material loss review suggested that had OTS acknowledged problems at Superior Bank when examiners became aware of them in 1993, PCA would have been triggered sooner and might have slowed the bank’s growth and contained its losses to the deposit insurance fund. The IG further noted that OTS’s delayed detection of so many critical problems suggests that the advantage of PCA as an early intervention tool depends as much on timely supervisory detection of actual, if not developing, problems as it does on capital. Under section 38 regulators have the ability to reclassify an institution’s capital category and dismiss officers and directors from deteriorating banks and thrifts. However, regulators have made limited use of these authorities, preferring instead to use moral suasion (as part of or separate from the examination process) or other enforcement actions to address deficiencies. Under section 39, regulators can require institutions to implement plans to address deficiencies in their compliance with regulatory safety and soundness standards. 
Regulators have used section 39 with varying frequency to address noncapital deficiencies; however, those that use the provision use it to address targeted deficiencies, such as noncompliance with certain laws or requirements, and when an institution’s management generally is willing and able to comply with required corrective actions. In addition to their authority under PCA to reclassify an institution’s PCA capital category or require improvements in management at significantly undercapitalized institutions, regulators also can use other means—such as moral suasion or more formal enforcement actions—to address deficiencies or effect change at an institution. Under section 38(g), regulators have the authority to reclassify or downgrade an institution’s PCA capital category to apply PCA restrictions and requirements in advance of a decline (or further decline) in capital if the regulator determines that the institution is operating in an unsafe or unsound manner or engaging in an unsafe or unsound practice. Regulators also may treat an undercapitalized institution as if it were significantly undercapitalized if they determine that doing so is “necessary to carry out the purpose” of PCA. In practice, this means that regulators may, in certain circumstances, treat a well-capitalized institution as if it were adequately capitalized, an adequately capitalized institution as if it were undercapitalized, and an undercapitalized institution as if it were significantly undercapitalized. Regulators are prohibited from reclassifying or downgrading an institution more than one capital category and cannot downgrade a significantly undercapitalized institution to critically undercapitalized. Regulators also may require improvements in the management of a significantly undercapitalized institution—for example, through the dismissal of officers and directors. This provision can be used alone or in conjunction with the reclassification provision. 
In the latter case, a regulator can require the dismissal of officers and directors from an undercapitalized institution. All four regulators said that they generally prefer other means of addressing problems to PCA. According to the regulators, the authority to reclassify an institution’s capital category is of limited use on its own because regulators’ ability to address both noncapital (such as management) and capital deficiencies through other informal and formal enforcement actions prior to a decline in capital effectively negates the need to reclassify an institution to apply operating restrictions or requirements. Regulators’ use of section 38’s reclassification authority is consistent with their views on it—since 1992, FDIC, the Federal Reserve, and OTS have never reclassified an institution’s capital category. OCC has used the authority twice. All four regulators said that section 38’s dismissal authority under section 38(f)(2)(F) is valuable as a deterrent and a potential tool, despite their infrequent use of it—FDIC has used the authority six times since 1992 and OCC once; the Federal Reserve and OTS have never used the authority. They said that the PCA authority occupies the middle ground between moral suasion and the removal and prohibition authority under section 8(e) of FDIA. According to the regulators, the first step in confronting problem officers and directors is moral suasion—that is, reminding the board of directors that it has an obligation to ensure that the institution is competently managed. In many cases, we were told that this reminder often is enough to force the resignations of problem individuals. Dismissal under section 38 represents a “middle of the road” option—it results in a ban from serving as an officer or director in the institution in question. 
In order to be reinstated, the dismissed individual must demonstrate that he or she has the capacity to materially strengthen the institution’s ability to become adequately capitalized or correct unsafe or unsound conditions or practices. Regulators also have a more severe option—removal under section 8(e), which results in an industrywide prohibition and consequently, requires proof of a high degree of misconduct or malfeasance. Data show that regulators have used section 8(e) with some regularity (see fig. 11). The regulators said that if an individual’s misconduct rises to the level required to support removal and prohibition under section 8(e), use of that authority generally is preferable to dismissal under section 38. The regulators also noted that moral suasion and section 8(e) are not necessarily capital based, meaning that both can be used at times when PCA cannot. The regulators acknowledged that section 38 permits them to reclassify an institution’s capital category to dismiss an officer or director; however, they said that because section 38 only allows them to dismiss individuals from institutions that are undercapitalized or worse by PCA standards, the tool generally is not available to them in these good economic times when all or most of the institutions they regulate are well capitalized. OCC was of the view that section 38’s dismissal authority could be more useful if it were uncoupled from capital and instead triggered by less-than-satisfactory ratings in the management component of the CAMELS rating. In particular, OCC officials said that linking the authority to the CAMELS rating could provide regulators with the authority to dismiss individuals who did not meet the criteria for removal and prohibition under section 8(e) and from institutions with boards that were unresponsive to regulators’ moral suasion. 
Changes to section 39 in 1994 gave regulators considerable flexibility over how and when to use their authority under the section to address safety and soundness deficiencies at the institutions they regulate. Like section 38’s dismissal authority, section 39 represents a “middle of the road” option between informal enforcement actions (such as a commitment letter) and formal enforcement actions (such as a cease-and-desist order). In varying degrees, the regulators have used section 39 to address deficiencies in the three broad categories defined under the section: operations and management; compensation; and asset quality, earnings, and stock valuation (see fig. 12). Regulators said that they prefer to use section 39 when they are certain that management is willing and able to address identified deficiencies, even if management has not been responsive to informal regulatory criticisms in the past. For example, FDIC, OCC, and OTS have all used section 39 to require institutions to achieve compliance with Year 2000 (Y2K) or Bank Secrecy Act (BSA) requirements (both of which relate to institutions’ operations). Officials from the Federal Reserve told us that they use memorandums of understanding in the same way that the other three regulators use section 39—that is, to address targeted deficiencies at institutions that are willing and able to make required changes. According to the regulators, formal enforcement actions, such as section 8(b) cease-and-desist orders or written agreements, are better reserved for institutions that have multiple or complex problems and in cases where management is unable to define what steps must be taken to address problems independent of the regulator or is unwilling to take action. Since 1995 (the year regulators issued the section 39 guidelines), regulators have made frequent use of section 8(b) of FDIA to address problems associated with operations and management; compensation; and asset quality, earnings, and stock valuation. 
From 1995 through 2005, FDIC and the Federal Reserve issued 288 and 98 cease-and-desist orders or written agreements, respectively, to address deficiencies in these three areas. OTS issued 47 cease-and-desist orders related to deficiencies in operations. Under authority provided by the Federal Deposit Insurance Reform Act of 2005, FDIC now prices its deposit insurance more closely to the risk FDIC officials judge an individual bank or thrift presents to the insurance fund. To do this, FDIC has created a system in which it evaluates a number of financial and regulatory factors specific to an individual bank or thrift. This replaces a system that was also risk based, but which differentiated risk less finely. Industry officials and academics to whom we spoke and selected organizations that submitted comment letters to FDIC generally supported the concept of the new system. However, several voiced concern about what they saw as the new system’s subjectivity and complexity and questioned whether the new system might produce unintended consequences, including upsetting relations between bankers and their regulators. FDIC’s recent changes to the deposit insurance system more closely tie an individual bank or thrift’s deposit insurance premium to the risk it presents to the insurance fund. In general, FDIC does this by considering three sets of factors—supervisory (CAMELS) ratings and financial ratios or credit agency ratings—while also distinguishing between large institutions with credit agency ratings and all other institutions. However, the system stops short of completely risk-based pricing. FDIC’s previous method for determining premiums relied on two factors—capital levels and supervisory ratings—to determine institutions’ risk and premiums. FDIC established three capital groups—termed 1, 2, and 3 for well-capitalized, adequately capitalized, and undercapitalized institutions, respectively—based on leverage ratios and risk-based capital ratios. 
Three supervisory groups—termed A, B, and C—reflected, respectively, financially sound institutions with only a few minor weaknesses; institutions with weaknesses, which if not corrected could result in significant deterioration and increased risk of loss to the insurance fund; and institutions that pose a substantial probability of loss to the insurance fund unless effective corrective action is taken. Based on its capital levels and supervisory ratings, an institution fell into one of nine risk categories (see table 6). However, the vast majority of institutions—95 percent at year-end 2005—fell into category 1A, even though, according to FDIC officials, there were significant differences among individual institutions’ risk profiles within the category. Further, according to FDIC, in 2005, 95 percent of institutions did not pay premiums into the insurance fund because the agency was barred from charging premiums to well-managed and well-capitalized institutions when the deposit insurance fund was at or above its designated reserve ratio, and was expected to remain there. Because nearly all institutions paid the same rate under the old system, lower-risk institutions effectively subsidized higher-risk institutions. 
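The nine-cell structure of the old system can be sketched as a simple lookup. This is an illustrative sketch only; the group definitions follow the report, and no premium figures are assumed.

```python
# Illustrative sketch of the pre-2007 nine-cell risk matrix: a capital
# group (1 = well capitalized, 2 = adequately capitalized,
# 3 = undercapitalized) combined with a supervisory group (A = sound,
# B = weaknesses, C = substantial probability of loss) yields one of
# nine categories, e.g., "1A".

CAPITAL_GROUPS = {1, 2, 3}
SUPERVISORY_GROUPS = {"A", "B", "C"}

def risk_category(capital_group, supervisory_group):
    """Return the nine-cell category label, e.g., "1A"."""
    if capital_group not in CAPITAL_GROUPS:
        raise ValueError("capital group must be 1, 2, or 3")
    if supervisory_group not in SUPERVISORY_GROUPS:
        raise ValueError("supervisory group must be A, B, or C")
    return f"{capital_group}{supervisory_group}"

# A well-capitalized, financially sound institution falls in category 1A,
# the cell that held 95 percent of institutions at year-end 2005.
assert risk_category(1, "A") == "1A"
```

Because only the 1A cell distinguished nothing further within it, institutions with quite different risk profiles paid identical (often zero) premiums, which is the cross-subsidy the report describes.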
To tie institutions’ insurance premiums more directly to the risk each presents to the insurance fund, FDIC created a system that generally (1) differentiates between large and small institutions, specifically between institutions with current credit agency ratings and $10 billion or more in assets and all other institutions; (2) for institutions without credit agency ratings, forecasts the likelihood of a decline in financial health (referred to throughout this report as the general method); (3) for institutions with credit agency ratings, uses those ratings, plus potentially other financial market information, to evaluate institutional risk (referred to throughout this report as the large-institution method); and (4) requires all institutions to pay premiums based on their individual risk. Premiums under the general method and the large-institution method are calculated differently, based on the availability of relevant information for institutions in each category. The general method uses two sources of information as inputs to a statistical model designed to predict the probability of a downgrade in an institution’s CAMELS rating: (1) financial ratios (such as an institution’s capital, past-due loans, and income) and (2) CAMELS ratings. According to FDIC officials, little other information is readily available to assess risk for these institutions. However, FDIC data show that the worse (numerically higher) an institution’s CAMELS rating, the higher its rate of failure—the 5-year failure rate is 0.39 percent for CAMELS 1-rated banks, 3.84 percent for 3-rated banks, and 46.92 percent for 5-rated banks—thus making CAMELS ratings and financial ratios a reasonable basis for assessing risk. The large-institution method also uses CAMELS ratings. But rather than employ financial ratios, it incorporates market-based information—credit agency ratings of an institution’s debt offerings. 
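The general method's downgrade-probability approach can be illustrated with a schematic logistic model. Every ratio name, coefficient, and weight below is hypothetical; the report does not reproduce FDIC's actual model, only its two classes of inputs.

```python
import math

# Schematic of the general method: feed financial ratios and the CAMELS
# rating into a logistic model that predicts the probability of a CAMELS
# downgrade. All coefficients and ratio names here are invented for
# illustration and are not FDIC's actual model.

def downgrade_probability(leverage_ratio, past_due_share, return_on_assets, camels):
    """Hypothetical logistic score: weaker capital, more past-due loans,
    weaker earnings, and a worse (numerically higher) CAMELS rating all
    raise the predicted downgrade probability."""
    score = (-2.0
             - 0.30 * leverage_ratio    # more capital lowers the score
             + 0.80 * past_due_share    # past-due loans raise it
             - 0.50 * return_on_assets  # stronger earnings lower it
             + 0.90 * (camels - 1))     # worse CAMELS raises it
    return 1.0 / (1.0 + math.exp(-score))

# A strongly capitalized, 1-rated bank scores far below a thinly
# capitalized, 3-rated one.
p_strong = downgrade_probability(10.0, 0.5, 1.2, 1)
p_weak = downgrade_probability(5.0, 4.0, 0.1, 3)
assert 0.0 < p_strong < p_weak < 1.0
```

The point of the sketch is only the direction of each input's effect, which matches the failure-rate gradient FDIC cites across CAMELS categories.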
FDIC officials told us that incorporating debt ratings provides a fuller, market-based picture of an institution’s condition than do financial ratios. For example, some large institutions concentrate in certain activities, such as transactions processing or credit cards, while others provide more general services. According to FDIC officials, financial ratios may not adequately distinguish among such different activities. Also, credit ratings determine how much institutions must pay to obtain funds in capital markets—well-rated banks and thrifts will pay less, while institutions the market judges as riskier will pay more. Thus, according to FDIC officials, it makes sense to align premiums with these market-based funding costs. In addition to using CAMELS and credit ratings, FDIC has the flexibility to adjust large institutions’ premiums up or down by as much as 0.5 basis points based on other relevant information (such as market analyst reports, rating-agency watch lists, and rates paid on subordinated debt) as well as stress considerations (such as how an institution would be expected to react to a sudden and significant change in interest rates). If a large institution does not have an available credit agency rating, its premium is calculated according to the general method. The new insurance system places banks and thrifts into one of four risk categories, each of which has a corresponding premium or range of premiums. These “base rate” premiums range from 2 to 4 basis points for banks and thrifts in the best-rated category, risk category I, to 40 basis points for institutions in the bottom category, risk category IV (see table 7). Thus, for example, under the base rate schedule the riskiest institutions (risk category IV) pay a premium rate 20 times greater than the best-rated banks and thrifts (minimum rate, risk category I). 
Even within the best category, riskier institutions pay twice the rate paid by the safest banks and thrifts, reducing the tendency for subsidies under the old system. The same premium schedule applies to all institutions, regardless of their premium assessment method. Under the new system, FDIC has limited authority, without resorting to new rule making, to vary premiums from the base rates as necessary and appropriate. For assessments beginning in 2007, FDIC has used this flexibility to increase premiums by 3 basis points over the base rates. Thus, the current rate for risk category I is 5 to 7 basis points, rather than 2 to 4 basis points; for risk category II, the premium is 10 basis points; for risk category III, the premium is 28 basis points; and for risk category IV, the premium is 43 basis points. According to FDIC, the increase in premiums for 2007 was necessary because of strong growth in insured deposits and the availability of premium credits to many institutions under the terms of the Federal Deposit Insurance Reform Act of 2005. In general, to set the premium rates for each of the four risk categories, FDIC officials told us they considered both what the differences should be in premiums among risk categories and, taking those differences into account, the level at which the premiums should be established. Considering the two together, the goal was to create a schedule of rates with the best chance of maintaining the insurance fund with a designated reserve ratio from 1.15 percent to 1.35 percent of insured deposits, with the former representing the required minimum under the Federal Deposit Insurance Reform Act of 2005, and the latter being the level at which mandated rebates of premiums to banks and thrifts must begin. 
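The 2007 rates quoted above are a uniform 3-basis-point add-on to the base schedule, which can be checked directly. The base rates for risk categories II and III (7 and 25 basis points) are not stated in the text and are inferred here by subtracting that add-on from the 2007 rates.

```python
# Base-rate schedule in basis points; risk category I is a range.
# Category I and IV base rates come from the report; categories II and
# III are inferred from the 2007 rates minus the uniform 3 bp increase.
BASE_RATES = {"I": (2, 4), "II": 7, "III": 25, "IV": 40}
ADJUSTMENT_BP = 3  # uniform increase FDIC applied for 2007 assessments

def adjusted(rate):
    """Apply the uniform add-on to a single rate or a (low, high) range."""
    if isinstance(rate, tuple):
        return tuple(r + ADJUSTMENT_BP for r in rate)
    return rate + ADJUSTMENT_BP

current_rates = {cat: adjusted(rate) for cat, rate in BASE_RATES.items()}
# Matches the 2007 rates in the text: I = 5-7, II = 10, III = 28, IV = 43.
assert current_rates == {"I": (5, 7), "II": 10, "III": 28, "IV": 43}

# The spread the report notes: the category IV base rate is 20 times the
# category I minimum (40 / 2).
assert BASE_RATES["IV"] / BASE_RATES["I"][0] == 20
```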
FDIC officials told us they established the level of premiums based on four factors: (1) historical data on insurance losses, (2) FDIC operating expenses, (3) projected interest rates and their effect on FDIC investment portfolio income, and (4) expected growth of insured deposits. Although the new system ties premiums more specifically to the risk an individual institution presents to the insurance fund, it does not represent completely risk-based pricing. As a result, some degree of cross-subsidy still exists in the new system. In particular, as estimated by FDIC, institutions in risk category IV would need to pay premiums of about 100 basis points to cover the expected losses of the group. However, FDIC has chosen to set the base rate premium for these banks and thrifts at 40 basis points, or 60 percent below the indicated premium. In doing so, FDIC officials told us they sought to address long-standing concerns of the industry, regulators, and others that premiums should not be set so high as to prevent an institution that is troubled and seeking to rebuild its health from doing so. In contrast, some have suggested that capping premiums to address such concerns ultimately may cost the insurance fund more in the long run—lower premiums for riskier institutions may allow them to remain open longer, resulting in greater losses if and when they eventually fail. FDIC officials said that the number of institutions in category IV is small and thus the trade-off between lower premiums for troubled institutions and potentially larger losses later is not significant. Further, they said that the 40 basis point base rate applicable to the highest risk institutions represents a sizable increase over the assessment rate for these institutions under the previous system. 
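The 60 percent figure above follows directly from the two rates FDIC cites, as this one-line check shows.

```python
# Cross-subsidy arithmetic from the report: FDIC estimated category IV
# institutions would need roughly 100 basis points to cover the group's
# expected losses but capped the base rate at 40 basis points.
indicated_bp = 100
capped_bp = 40
discount = (indicated_bp - capped_bp) / indicated_bp
assert discount == 0.60  # the cap sits 60 percent below the indicated premium
```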
Another way FDIC’s new premium pricing system stops short of being completely risk based is that it does not take into account “systemic risk.” In a fully risk-based system, premiums would be set to reflect two major components: expected losses plus a premium for systemwide risk of failure or default. According to academics we spoke to, FDIC’s new system reflects the first component, but not the second. Incorporating the notion of systemic risk into the premium calculation would acknowledge that failure of some banks could have repercussions to the financial system as a whole and that such failures are more likely during economic downturns. FDIC officials told us that the new system does not reflect systemic risk for several reasons. First, there is an alternative mechanism for capturing what is effectively a systemic risk premium. Second, FDIC officials said that charging an up-front premium for systemic risk could prevent institutions from getting the best premium rate on the basis of their size, which is not permitted under the 2005 Federal Deposit Insurance Reform Act. And finally, FDIC officials said that FDIC has other sources of financing available to address losses resulting from large-scale failures, including borrowing from the industry, a $30 billion line of credit with Treasury, and the ability to borrow from the Federal Financing Bank and the Federal Home Loan Bank system. In our review of selected comments to FDIC’s proposed rule and interviews with bankers, industry trade groups, and academics, we found that the industry generally supported the concept of a more risk-based insurance premium system. However, several of those to whom we spoke and many organizations that submitted comments to FDIC raised several concerns about the new system. First, many said that the new system places too much weight on subjective factors, which could result in incorrect assessments of institutions’ actual risk. 
Specifically, officials from two trade associations and one small bank whom we interviewed questioned the inclusion of, or the weight given to, the management component of the CAMELS ratings. One considered this component to be the most subjective of the CAMELS component areas. Six additional organizations noted in comment letters their concern with FDIC’s plan to assign different weights to the CAMELS components, noting in at least one case that FDIC had provided no evidence to support using a weighted rating in place of the composite rating. FDIC officials said that the weights were set in consultation with the other federal banking regulators and represent the relative importance of each component as it pertains to the risk an institution presents to the insurance fund. Specifically, FDIC officials said that asset quality, management, and capital are often key factors in an institution’s failure and any subsequent losses to the insurance fund, and thus warrant more consideration than other factors in the calculation of risk. Similarly, in comment letters to FDIC, five large banks, three trade groups, and one financial services company expressed concern with the part of the rule that gives FDIC flexibility to adjust large institutions’ premiums up or down based on other information, including other market information and financial performance and conditions measures (such as market analyst reports, assessments of the severity of potential losses, and stress factors). All of these organizations cautioned that to do so would undermine the assessments of institutions’ primary regulators regarding their performance and health (as expressed in CAMELS ratings, a primary component of FDIC’s system). According to FDIC, this authority to adjust premiums in consultation with other federal regulators is necessary to ensure consistency, fairness, and the consideration of all available information. 
FDIC officials said that the agency plans to clarify its processes for making any adjustments to ensure transparency and plans to propose and seek comments on additional guidelines for evaluating whether premium adjustments are warranted and the size of the adjustments. Related to these concerns, officials from one large bank, one small bank, and one trade association, as well as one of the academics with whom we spoke, said that FDIC’s new system is overly complicated and that it might not be readily apparent to bank or thrift management how activities at their respective institutions could affect the calculation of their insurance premiums. Seven others expressed similar concerns in comment letters to FDIC. In its final rule, FDIC stated that while the pricing method is complex, its application is straightforward. For example, if an institution’s capital declines, its premium will likely increase. Further, FDIC officials said that the FDIC Web site contains a rate calculator that allows an institution to determine its premium and to simulate how a change in the value of debt ratings, supervisory ratings, or financial ratios would affect its premium. Officials we interviewed from all three of the large banks said that the level and range of premiums for top-rated institutions generally were too high, given the actual risk they believe their institutions pose to the insurance fund. Officials from one large bank and one trade association we spoke with said that the best-rated banks and thrifts should pay no premiums, or that the base rate range of premiums should be reduced from 2 to 4 basis points to 1 to 3 basis points. An additional nine organizations supported similar changes in their comment letters. Risk category I, the top-rated premium category, accounts for the majority of total deposits, meaning that even small changes in premium assessment rates could produce a significant difference in revenue to the insurance fund, and hence assessments to the industry. 
FDIC officials said that the 2 to 4 basis point spread is more likely to satisfy the insurance fund’s long-term revenue needs than a 1 to 3 basis point spread. FDIC officials also said that FDIC could, based on authority in the final rule, reduce rates below the current base rate “floor” of 2 to 4 basis points if the agency determined that such a reduction was warranted. Further, one bank official we spoke to said that the new system was incorrectly based on the idea of institutions failing, rather than on the more nuanced notion of actual losses expected to be suffered by the deposit insurance fund if failures occurred. As a result, he said, FDIC failed to give appropriate credit to how large banks handle risk. Three organizations that submitted comments on FDIC’s new system supported this notion, saying that FDIC should not assess premiums on all domestic deposits because losses suffered by uninsured depositors should impose no burden on the insurance fund—the magnitude of any loss would be lessened to the extent that depositors in foreign branches, other uninsured depositors, general creditors, and holders of subordinated debt absorbed such losses. FDIC officials, citing research the agency has done on failures and losses, said that the differences in rates and categories were empirically based, and thus adequately reflected all institutions’ risk. Further, FDIC officials said that loss severity is one of the many factors the agency is permitted to consider as part of its assessment of the risk of large institutions. Officials from the two small banks, one large bank, and both industry trade groups, as well as the academics with whom we spoke, questioned FDIC’s initial placement of institutions into risk categories. Because most institutions are now healthy, FDIC placed them into the best-rated premium category, risk category I, for which base rate premium charges range from 2 to 4 basis points. 
Within this top-rated category, FDIC initially assigned approximately 45 percent of institutions to receive the minimum rate of 2 basis points, and 5 percent of institutions to receive the highest rate of 4 basis points. The remainder fell in the middle of the range. These officials and academics generally agreed that FDIC should establish risk criteria, and then assign institutions to appropriate groups based on those criteria, rather than start with a predetermined distribution in mind. Three additional organizations expressed similar concerns in comment letters to FDIC. Further, officials from the other two large banks with whom we spoke said that given the economic good times and institutional good health, the 45 percent of institutions with the lowest rate was too small a grouping and, as a result, healthy institutions arbitrarily would be bumped into higher premiums. FDIC officials said that based on the agency’s experience, a range of 40 to 50 percent appeared to be a natural breaking point in the distribution of institutions by risk, and that over time, the percentage of institutions assigned the lowest premium in the top-rated category may vary. Some industry officials also thought the new system had the potential to create tension or discourage cooperative relations between bank management and federal examiners. Under the old system, there was no difference in premiums for well-capitalized, 1-rated institutions and well-capitalized, 2-rated institutions. However, under the new system, such a difference could lead to higher premiums because CAMELS ratings are factored into premium calculations. As a result, according to officials we interviewed from one trade group, management might be less willing to discuss with examiners issues or problems that could prompt a lower rating, although raising and resolving such problems ultimately might be good for both the institution and the insurance fund. 
FDIC officials acknowledged the concern, and said that FDIC and the other federal regulators plan to monitor the new system for adverse effects. However, they said that it was important to include CAMELS ratings in the assessment of risk because the ratings provide valuable information about institutions’ financial and operational health. Finally, officials from one trade association and one of the large banks with whom we spoke also expressed concern that regional or smaller institutions could be disadvantaged under the new system. Officials from two credit rating agencies echoed this view, saying that larger, more diverse institutions (by virtue of factors such as revenue, geography, or range of activities) typically have steadier income, which increases security and decreases risk. In contrast, regional or smaller institutions can have geographic or line-of-business concentrations in their lending portfolios that could hurt supervisory or credit ratings, leading to higher deposit insurance premiums. FDIC said that while size or geography could affect an institution’s risk profile, management could offset that risk by maintaining superior earnings or capital reserves, requiring higher collateral requirements on loans, or using hedging vehicles. FDIC officials told us that the agency plans to monitor the new deposit insurance system to ensure its proper functioning and the fair treatment of the institutions that pay premiums into the deposit insurance fund. For example, in addition to assessing whether the new system creates friction between examiners and bank and thrift management, as discussed above, FDIC officials also said that the agency will, among other things, assess over time whether the percentage of institutions paying the lowest rate in risk category I—those receiving the best premium rate—should be increased and whether different financial ratios should be considered in the calculation of premiums. 
We provided FDIC, the Federal Reserve, OCC, and OTS with a draft of this report for their review and comment. In written comments, the Federal Reserve concurred with our findings related to PCA. These comments are reprinted in appendix II. The Federal Reserve noted that PCA has substantively enhanced the agency’s authority to resolve serious problems expeditiously and that PCA has generally worked effectively in the problem situations where its use became applicable. In addition, FDIC, the Federal Reserve, and OCC provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Chairmen of the Federal Deposit Insurance Corporation and the Board of Governors of the Federal Reserve System, the Comptroller of the Currency, the Director of the Office of Thrift Supervision, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
The objectives of this report were to (1) describe trends in the financial condition of banks and thrifts and federal regulators’ oversight of these institutions since the passage of the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA), (2) evaluate how federal regulators used the capital or prompt corrective action (PCA) provisions of FDICIA to resolve capital adequacy issues at the institutions they regulate, (3) evaluate the extent to which federal regulators use the noncapital provisions of FDICIA to identify and address weaknesses at the institutions they regulate, and (4) describe the Federal Deposit Insurance Corporation’s (FDIC) deposit insurance system and how recent changes to the system affect the determination of depository institutions’ risk and insurance premiums. Our review focused on FDIC, the Board of Governors of the Federal Reserve System (Federal Reserve), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS) and was limited to depository institutions. To describe trends in the financial condition of banks and thrifts, we summarized financial data (including total assets, net income, returns on assets, returns on equity, the number of problem institutions, and the number of bank and thrift failures) from 1992, the year FDICIA was implemented, through 2005. We obtained this information from FDIC Quarterly Banking Reports, which publish industry statistics derived from Reports on Condition and Income (Call Report) and Thrift Financial Reports. All banks and thrifts must file Call Reports and Thrift Financial Reports, respectively, with FDIC every quarter. 
We also analyzed Call and Thrift Financial Report data for 1992 through 2005 that FDIC provided to determine (1) the number of well-capitalized, adequately capitalized, undercapitalized, significantly undercapitalized, and critically undercapitalized depository institutions from 1992 through 2005 and (2) the amount of capital well-capitalized banks and thrifts carried in excess of the well-capitalized leverage capital minimum for each year from 1992 through 2005. We chose to use Call and Thrift Financial Report data because the data are designed to provide information on all federally insured depository institutions’ financial condition, and FDIC collects and reports the data in a standardized format. We have tested the reliability of FDIC’s Call and Thrift Financial Report databases as part of previous studies and found the data to be reliable. In addition, we performed various electronic tests of the specific data extraction we obtained from FDIC and interviewed FDIC officials responsible for providing the data to us. Based on the results of these tests and the information we obtained from FDIC officials, we found these data to be sufficiently reliable for purposes of this report. To describe federal regulators’ oversight of banks and thrifts since the passage of FDICIA, we reviewed the provisions of the Federal Deposit Insurance Act (FDIA) requiring regulators to conduct annual, on-site, full-scope examinations of depository institutions as well as several GAO and industry reports discussing the federal regulators’ oversight of depository institutions prior to the failures of the 1980s and early 1990s and after the enactment of FDICIA, including their use of PCA to address capital deficiencies. We also obtained data from each of the four federal regulators on the interval between examinations for each year, from 1992 through 2005. We interviewed officials from FDIC, the Federal Reserve, OCC, and OTS to assess the reliability of these data. 
Based on their responses to our questions, we determined these data to be reliable for purposes of this report. To determine how federal regulators used PCA to address capital adequacy issues at the institutions they regulate, we reviewed section 38 of FDIA, related regulations, regulators’ policies and procedures, and past GAO reports on PCA to determine the actions regulators are required to take when institutions fail to meet minimum capital requirements. We then analyzed Call and Thrift Financial Report data to identify all banks and thrifts that were undercapitalized, significantly undercapitalized, or critically undercapitalized (the three lowest PCA capital categories) during at least one quarter from 2001 through 2005. We chose this period for review based on the availability of examination- and enforcement-related documents and to reflect the regulators’ most current policies and procedures. From the 157 institutions we identified as being undercapitalized or lower from 2001 to 2005, we selected a nonprobability sample of 24 institutions, reflecting a mix of institutions supervised by each of the four regulators and institutions in each of the three lowest PCA capital categories. We reviewed their reports of examination, informal and formal enforcement actions, and institution-regulator correspondence for a period covering four quarters prior to and four quarters following the first and last quarters in which each institution failed to meet minimum capital requirements to determine how regulators used PCA to address their capital deficiencies. As discussed above, we have tested the reliability of Call and Thrift Financial Report data and found the data to be reliable. To supplement our sample, we also reviewed material loss reviews from 14 banks and thrifts that failed with material losses from 1992 through 2005 and in which regulators used PCA to address capital deficiencies. 
Because of the limited nature of our sample, we were unable to generalize our findings to all institutions that were or should have been subject to PCA since 1992. To determine the extent to which federal regulators have used the noncapital supervisory actions of sections 38 and 39 of FDIA to address weaknesses at the institutions they regulate, we reviewed regulators’ policies and procedures related to sections 38(f)(2)(F) and 38(g)—the provisions for dismissal of officers and directors and reclassification of a capital category, respectively—and section 39, which gives regulators authority to address safety and soundness deficiencies. We analyzed regulator data on the number of times and for what purposes the regulators used these noncapital authorities. To provide context on the extent of regulators’ use of these noncapital provisions, we also obtained data on the number of times regulators used their authority under section 8(e) of FDIA to remove officers and directors from office and section 8(b) of FDIA to enforce compliance with safety and soundness standards. Based on regulators’ responses to our questions related to these data, we determined the data to be reliable for purposes of this report. Finally, to describe how changes in FDIC’s deposit insurance system affect the determination of institutions’ risk and insurance premiums, we reviewed FDIC’s notice of proposed rule making on deposit insurance assessments, selected comments to the proposed rule, and FDIC’s final rule on deposit insurance assessments. We also interviewed representatives of three large institutions, two small institutions, and two trade groups representing large and small institutions and two academics to obtain their views on the impact of FDIC’s changes to the system. 
We selected the large institutions based on geographic location and size and the small institutions based on input from the Independent Community Bankers Association on which of its member organizations were familiar with FDIC’s proposed changes to the deposit insurance system. We also interviewed officials from two credit rating agencies on the factors— financial, management, and operational—they consider when rating institutions’ debt offerings. We conducted our work in Washington, D.C., and Chicago from March 2006 through January 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Kay Kuhlman, Assistant Director; Gloria Hernandez-Saunders; Wil Holloway; Tiffani Humble; Bettye Massenburg; Marc Molino; Carl Ramirez; Omyra Ramsingh; Barbara Roesmann; Cory Roman; and Christopher Schmitt made key contributions to this report.
The Federal Deposit Insurance Reform Conforming Amendments Act of 2005 required GAO to report on the federal banking regulators' administration of the prompt corrective action (PCA) program under section 38 of the Federal Deposit Insurance Act (FDIA). Congress created section 38 as well as section 39, which required regulators to prescribe safety and soundness standards related to noncapital criteria, to address weaknesses in regulatory oversight during the bank and thrift crisis of the 1980s that contributed to deposit insurance losses. The 2005 act also required GAO to report on changes to the Federal Deposit Insurance Corporation's (FDIC) deposit insurance system. This report (1) examines how regulators have used PCA to resolve capital adequacy issues at depository institutions, (2) assesses the extent to which regulators have used noncapital supervisory actions under sections 38 and 39, and (3) describes how recent changes to FDIC's deposit insurance system affect the determination of institutions' insurance premiums. GAO reviewed regulators' PCA procedures and actions taken on a sample of undercapitalized institutions. GAO also reviewed the final rule on changes to the insurance system and comments from industry and academic experts. In recent years, the financial condition of depository institutions generally has been strong, which has resulted in the regulators' infrequent use of PCA provisions to resolve capital adequacy issues of troubled institutions. Partly because they benefited from a strong economy in the last decade, banks and thrifts in undercapitalized and lower capital categories decreased from 1,235 in 1992, the year regulators implemented PCA, to 14 in 2005, and none failed from June 2004 through January 2007. For the banks and thrifts GAO reviewed, regulators generally implemented PCA in accordance with section 38. 
For example, regulators identified when institutions failed to meet minimum capital requirements, required them to implement capital restoration plans or corrective actions outlined in enforcement orders, and took steps to close or require the sale or merger of those institutions that were unable to recapitalize. Although regulators generally used PCA appropriately, capital is a lagging indicator and thus not necessarily a timely predictor of problems at banks and thrifts. In most cases GAO reviewed, regulators had responded to safety and soundness problems in advance of a bank or thrift's decline in required PCA capital levels. Under section 38 regulators can take noncapital supervisory actions to reclassify an institution's capital category or dismiss officers and directors from deteriorating institutions, and under section 39 regulators can require institutions to implement plans to address deficiencies in their compliance with regulatory safety and soundness standards. Regulators generally have made limited use of these authorities, in part because they have chosen other informal and formal actions to address problems at troubled institutions. According to the regulators, other tools, such as cease-and-desist orders, may provide more flexibility than those available under sections 38 and 39 because they are not tied to an institution's capital level and may allow them to address more complex or multiple deficiencies with one action. Regulators' discretion to choose how and when to address safety and soundness weaknesses is demonstrated by their limited use of section 38 and 39 provisions and more frequent use of other informal and formal actions. Recent changes to FDIC's deposit insurance system tie the premiums a bank or thrift pays into the insurance fund more directly to the estimated risk the institution poses to the fund. 
In the revised system, FDIC generally (1) differentiates between larger institutions with current credit agency ratings and $10 billion or more in assets and all other, smaller institutions and (2) requires all institutions to pay premiums based on their individual risk. Most bankers, industry groups, and academics GAO interviewed and many of the organizations and individuals that submitted comment letters to FDIC on the new system generally supported making the system more risk based, but also had some concerns about unintended effects. FDIC and the other federal banking regulators intend to monitor the new system for any adverse impacts.
Health care providers submit claims to the Medicare program in order to receive payment for services provided to beneficiaries. Financial limits known as therapy caps are one tool used to better manage spending on outpatient therapy services. Congress directed CMS, beginning in 2006, to establish an exceptions process for beneficiaries in need of services above the therapy caps. Since the program was created in 1965, CMS has administered Medicare through private contractors, currently known as MACs. The MACs are responsible for reviewing and paying claims in accordance with Medicare policy, and conducting provider outreach and education on correct billing practices. The MACs process more than 1.2 billion claims per year (the equivalent of 4.5 million claims per work day). The MACs use electronic payment systems, and they transfer any claims submitted on paper into electronic format for processing. The computer systems that the MACs use for processing and paying claims execute automated prepayment “edits,” which are instructions programmed into the system software to identify errors on individual claims and to prevent payment of incomplete or incorrect claims. The system edits also help ensure that payments are made only for claims submitted by appropriate providers for medically necessary goods or services covered by Medicare for eligible individuals. Edits may result in automatic rejection of claims due to missing information or data errors, or in payment denial for ineligible services. In addition to this automated process, the MACs may conduct MMRs when they are unable to determine whether the services provided were medically necessary on the basis of the information on the claim. The MACs solicit documentation of medical necessity from the provider by issuing an additional documentation request (ADR) for the medical records associated with a service; providers are required to submit the records to the MACs within 45 days. 
Upon receipt, the MMRs are performed within 60 days by licensed health care professionals. Providers and beneficiaries may appeal denials of services that are based on these reviews. Manual reviews can be conducted either before or after a claim is paid and are referred to as prepayment or postpayment reviews, respectively. CMS reports that although the MACs have the authority to review any claim at any time, the volume of claims prohibits manual review of most claims. In general, CMS directs the MACs to focus their MMRs on program integrity efforts targeting payment errors for services and items that pose the greatest financial risk to the Medicare program. We have previously reported that, overall, less than 1 percent of Medicare’s claims are subject to medical record review by trained personnel. Medicare spending for outpatient therapy has increased from $1.3 billion in 1999 to $5.7 billion in 2011. (See fig. 1.) During this 12-year period, mean per user spending on outpatient therapy grew threefold from about $400 to almost $1,200. In 2011, about 80 percent of the 4.9 million Medicare beneficiaries who used OT and PT/SLP did not exceed the annual cap of $1,870. Twenty percent of the Medicare beneficiaries using outpatient therapy (about 980,000 individuals) exceeded the cap that year and spent, on average, $3,000 on outpatient therapy. Therapy provided in nursing homes and private practice offices accounted for over 70 percent of outpatient therapy services in 2011, with the remaining services being provided in hospital outpatient departments and outpatient rehabilitation centers, and by home health agencies. In addition, studies have found that utilization of outpatient therapy services is not evenly distributed across the country. For example, in 2010, the HHS OIG reported on 20 counties with spending per beneficiary 72 percent higher than the national average. 
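The spending growth cited above implies steady annual rates, which can be checked with a compound-growth calculation; the roughly 13 percent and 10 percent annual rates shown here are derived for illustration and are not figures stated in the report.

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Total outpatient therapy spending: $1.3 billion (1999) to $5.7 billion (2011)
total_rate = cagr(1.3, 5.7, 12)      # about 0.131, i.e., ~13 percent per year

# Mean per-user spending: about $400 (1999) to almost $1,200 (2011)
per_user_rate = cagr(400, 1200, 12)  # about 0.096, i.e., ~10 percent per year
```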
MedPAC’s analysis of outpatient therapy claims data from 2011 showed that average per-beneficiary spending varied widely by county, ranging from $406 to $3,582. The therapy caps first imposed in 1999 to control spending growth raised concern that patients with extensive need for outpatient therapy services would be affected adversely; as a result of a series of temporary congressional moratoria, the caps were in effect only in 1999 and for part of 2003. When the moratoria expired in 2006, Congress required CMS to implement a process allowing exceptions to the caps for certain medically necessary services. This exceptions process allowed for two types of exceptions. The first was an automatic exception for certain conditions or complexities, such as hip and knee replacements. The second—called a manual exceptions process by CMS—was a preapproval process whereby a provider could submit a letter and supporting documentation requesting an exception—called a preapproval request—for up to 15 days of treatment above the annual cap, which would be manually reviewed by the MAC. If the services qualified for either an automatic or a manual exception, CMS guidance instructed the provider to include a “KX” modifier on each line of the resulting claim that contained a service above the cap. This modifier represented the provider’s attestation that the services rendered were medically necessary, and it triggered an exception in the Medicare claims processing system, which ensured payment for those outpatient therapy services above the cap. An automatic exceptions process for claims with a KX modifier was extended through 2012 for claims over the annual cap of $1,880, with manual reviews required for claims above the threshold of $3,700. The American Taxpayer Relief Act of 2012 extended the Medicare therapy caps exceptions process, including the requirement for the manual review of claims over $3,700, through December 31, 2013. 
According to CMS, in 2012, claims for services above the $1,880 cap without a KX modifier or above the $3,700 threshold were considered a benefit category denial, making the beneficiary liable for payment. To protect beneficiaries from unexpected liability for payment of denied claims above the threshold, CMS gave providers the option to send beneficiaries an Advance Beneficiary Notice of Noncoverage (ABN) informing them that Medicare might not pay for an item or service and that they might be liable for payment. An ABN enables the beneficiary to make an informed decision about whether to get services and accept financial responsibility for those services if Medicare does not pay. CMS implemented two types of MMRs during the last 3 months of 2012— reviews of preapproval requests and reviews of claims submitted without preapproval. CMS did not issue complete guidance at the start of the MMR process, causing implementation challenges for the MACs, and the MACs were unable to fully automate systems for tracking the reviews of preapproval requests in the time allotted. CMS implemented two types of MMRs during the last 3 months of 2012— reviews of preapproval requests and reviews of claims submitted without preapproval. First, CMS directed the MACs to manually review preapproval requests for outpatient therapy services above the $3,700 threshold—one for OT and one for PT/SLP combined—before the services were provided. Providers were permitted to request up to 20 days of treatment up to 15 days before providing medically necessary outpatient therapy services above $3,700. In contrast to the MMR process as implemented in 2006, CMS guidance did not allow any automatic exceptions for certain conditions; the MACs had to manually review preapproval requests for any services above $3,700. 
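The cap, KX modifier, and $3,700 threshold rules described above amount to a small decision procedure applied to each claim line. The sketch below is a simplification under assumed inputs; actual adjudication involved additional states not modeled here (for example, ABNs and appeal rights).

```python
CAP = 1880.0        # 2012 annual therapy cap
THRESHOLD = 3700.0  # 2012 manual medical review threshold

def route_therapy_claim(year_to_date, line_amount, has_kx):
    """Illustrative routing of a 2012 outpatient therapy claim line.
    Below the cap: pay. Above the cap without a KX modifier: benefit
    category denial (beneficiary may be liable). Above the cap with a
    KX modifier: pay, unless the $3,700 threshold is crossed, which
    triggers manual medical review before payment."""
    total = year_to_date + line_amount
    if total <= CAP:
        return "pay"
    if not has_kx:
        return "deny"
    if total > THRESHOLD:
        return "manual_review"
    return "pay"
```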
CMS officials told us that they included the preapproval request process in 2012 in order to help protect beneficiaries from being held liable for payment of claims not affirmed by the MMR, as the process would give the provider and beneficiary guidance as to whether the MACs would affirm or not affirm payment for the requested outpatient therapy services. In order to manage the expected volume of preapproval requests submitted to MACs at the start of the MMR process, CMS divided all outpatient therapy providers among three phases, based primarily on their past billing practices. Providers were instructed to submit preapproval requests during their assigned phase. CMS assigned providers with the highest average billing per patient for outpatient therapy services in 2011 to the first phase, which began on October 1, 2012. According to CMS, these high billers accounted for approximately 25 percent of all outpatient therapy providers and were subject to MMR for the full 3 months of the MMR process during 2012. The second phase began on November 1 and included providers with the next highest billing (also about 25 percent of the total number of providers). The third phase, which included the remaining 50 percent of outpatient therapy providers, generally the lowest billers, began on December 1. CMS officials explained that providers with historically low billing were less likely to have patients who would reach the threshold. CMS also included providers identified by law enforcement or the HHS OIG as being involved in active fraud investigations in the third phase. CMS officials stated that they did not include these providers until the third phase to avoid conflicts with ongoing investigations. As of December 1, all outpatient therapy providers were included in the MMR process. 
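The phase assignment CMS described (the highest-billing 25 percent of providers first, the next 25 percent second, and the remaining 50 percent last) can be sketched as percentile bucketing on 2011 average billing; how CMS handled ties and exact cut points is an assumption here, not something the report states.

```python
def assign_phases(avg_billing_by_provider):
    """Assign providers to MMR phases by 2011 average billing per
    patient: top 25% -> phase 1 (Oct. 1, 2012), next 25% -> phase 2
    (Nov. 1), remaining 50% -> phase 3 (Dec. 1). Tie-breaking and the
    cut points on uneven counts are implementation assumptions."""
    ranked = sorted(avg_billing_by_provider,
                    key=avg_billing_by_provider.get, reverse=True)
    quarter = len(ranked) // 4
    phases = {}
    for i, provider in enumerate(ranked):
        if i < quarter:
            phases[provider] = 1
        elif i < 2 * quarter:
            phases[provider] = 2
        else:
            phases[provider] = 3
    return phases
```

(Providers under active fraud investigation were an exception to the billing-based tiering: CMS placed them in phase 3 regardless of billing, to avoid conflicts with ongoing investigations.)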
CMS notified providers about the preapproval request process and assignment of phases by letter and provided further information through three conference calls and additional agency communications. CMS instructed providers to submit preapproval requests by mail or fax, including key information such as provider and beneficiary identification numbers as well as supporting documentation including treatment notes and progress reports. CMS also instructed the MACs to post guidelines on their websites to educate providers about these requirements. In addition, CMS sent letters in mid-September 2012 to all Medicare beneficiaries who had received therapy services totaling over $1,700 by that date informing them that they might have to pay for services over the cap should the MACs determine that the services were not medically necessary. To expedite the preapproval process, CMS instructed the MACs to review preapproval requests within 10 business days of receipt of all requested documentation to determine whether the services were medically necessary. After reviewing the requests, the MACs were required to notify providers and beneficiaries of the number of treatment days affirmed or provide detailed reasons for not affirming a request. In addition, CMS instructed the MACs to automatically approve any requests they were unable to review within 10 business days. The MACs had to inform providers of their decisions by telephone, fax, or letter, and postmark all letters by the 10th day after receipt of all requested documentation. Providers were allowed to resubmit nonaffirmed requests with additional documentation for consideration by the MAC, at which point the MAC would have another 10 days within which to review the new request. (See fig. 2.) Second, CMS instructed the MACs to develop a mechanism for tracking preapproval requests in order to match the requests with submitted claims. 
Because preapproval requests were received by fax or mail, not through the automated claims payment systems, the MACs had to manually match the claim with the corresponding preapproval request. If the services included on the claim matched those affirmed during the preapproval process, the MAC would pay the claim; if not, the MAC would issue an ADR for the medical records associated with the services and conduct further manual review, which could extend the review process more than 3 months. The MACs were also required to manually review submitted claims before providing payment for therapy services provided above $3,700 without a preapproval request. Effective for dates of service on or after October 1, 2012, CMS required the MACs to implement an edit in part of the claims processing system to stop claims that reached the $3,700 threshold and to trigger MMRs by the MACs. To manually review claims without preapprovals, the MACs requested and reviewed supporting documentation from providers to determine whether the services were medically necessary. As with typical prepayment manual reviews, providers had 45 days to provide documentation of medical necessity, and the MACs had 60 days to review the supporting materials and notify providers and beneficiaries of their decisions. (See fig. 2.) If a MAC requested additional documentation, the review time frames would begin again. In contrast to preapproval request decisions, the MACs’ claims payment systems automatically send letters notifying providers and beneficiaries of payment determinations. The MACs did not receive complete CMS guidance before the start of the 3-month MMR process regarding how the MACs should manage incomplete preapproval requests, how they should count the 10-day review time frame, and how they should handle preapproval requests received in the wrong phase. 
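The claim-to-preapproval matching the MACs performed by hand can be thought of as a lookup against a log of affirmed treatment days; this sketch uses invented field names and collapses the real paper-based process into a dictionary lookup.

```python
def process_claim(claim, approvals):
    """Match an incoming therapy claim against the preapproval log.
    approvals: dict mapping (beneficiary_id, discipline) -> affirmed
    treatment days. If the claimed days fall within what was affirmed,
    pay the claim; otherwise issue an additional documentation request
    (ADR) for further manual review. Keys and field names are
    illustrative; in practice the MACs did this matching manually."""
    key = (claim["beneficiary_id"], claim["discipline"])
    affirmed_days = approvals.get(key, 0)
    if claim["treatment_days"] <= affirmed_days:
        return "pay"
    return "issue_ADR"
```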
In addition, the MACs did not have enough time to fully automate systems for tracking and processing preapproval requests before the start of the MMR process. CMS did not issue complete guidance at the start of the MMR effort and changed the process throughout the 3-month period, which created implementation challenges. CMS provided instruction to the MACs through various forms of written guidance, as well as twice-weekly conference calls beginning in August 2012. However, CMS did not issue instructions on how the MACs should conduct MMRs of preapproval requests until August 31, 2012. The MACs we interviewed stated that receiving this guidance 1 month before the October 1st start of the MMR process made it difficult for them to adequately prepare and establish systems for reviews of preapproval requests. For example, one MAC said that because of the short turnaround time for implementation, it was not prepared for the high volume of preapproval requests received in the early weeks of the process, which caused it to approve requests without reviewing them. Another stated that it could have better managed the volume of preapproval requests received if it had more time to develop needed support systems. This late guidance also made it difficult for the MACs to train temporary staff assigned to the MMR process in a timely way; two MACs noted that they were still training temporary staff in October, after the start of the process, and one added that this made it difficult to manage the volume of preapproval requests received in October. Further, CMS did not provide guidance on how the MACs should process incomplete preapproval requests, which accounted for approximately 23 percent of the total requests submitted, until November 7, 2012. CMS officials told us they did not initially issue such guidance because they did not anticipate receiving a high volume of incomplete submissions. As a result, the MACs handled incomplete requests in different ways. 
For example, one MAC held incomplete requests—as many as several thousand—as pending without making a determination or providing a response to providers and beneficiaries within 10 business days. Another initially determined that incomplete requests would be rejected and returned to the provider for additional information. In addition, CMS did not initially issue clear instruction about how the MACs were to count the 10-day time frame for provider and beneficiary notification, which may have caused notification delays. CMS initially instructed the MACs to make decisions on preapproval requests and inform providers and beneficiaries of their decisions within 10 business days of receipt of all requested documentation, and to automatically approve requests they were unable to review within 10 days. The MACs stated that they were unclear, however, about how to count the 10-day time frame. On November 7, 2012, CMS clarified that the count was to begin on the day the MAC received the preapproval request in its mailroom, not in its MMR department. The MACs we interviewed stated that they received a large volume of requests per day—at times several hundred. In addition, two noted that providers often sent in additional supporting documentation for prior requests, which added to the volume of paper files the MACs had to manage and may have created a further lag between when the complete requests were received and when the paperwork was given to MMR staff for review. Before CMS issued this clarifying guidance, providers and beneficiaries may have experienced a longer wait time than expected if a MAC counted the 10 days beginning when the MMR department, rather than the mailroom, received the completed requests. Finally, CMS did not initially provide the MACs with instructions about how to handle preapproval requests and claims submitted in the wrong phase. 
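Under CMS's November 7 clarification, the 10-business-day clock starts on the day the mailroom receives a complete request. A minimal business-day calculation (which ignores federal holidays, an assumed simplification) looks like this:

```python
from datetime import date, timedelta

def notification_deadline(received, business_days=10):
    """Count business days (weekdays only; federal holidays ignored
    for simplicity) from the date a complete preapproval request
    reaches the MAC's mailroom, per CMS's November 2012 clarification."""
    d = received
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday through Friday
            remaining -= 1
    return d
```

For a complete request received Monday, October 1, 2012, this yields a notification deadline of Monday, October 15.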
In its written guidance issued on August 31, 2012, CMS instructed the MACs that they should not review preapproval requests any sooner than 15 days before the start of each phase for providers within that phase, but did not clarify whether requests received out of phase should be rejected and returned to providers, not affirmed, or held as pending until the start of the phase. As a result, one of the MACs we spoke with stated that it initially held requests received out of phase to be processed in the correct phase, but later in the process began rejecting such requests. CMS and the three MACs we interviewed reported challenges with processing preapproval requests because they were not able to fully automate systems to receive and track them in the time allotted. MACs typically conduct either prepayment or postpayment reviews after claims have been submitted; they do not typically receive or conduct MMRs of preapproval requests before the provision of services. All three MACs interviewed told us that MMRs of preapproval requests were more time-consuming and cumbersome because they had to process them outside of their claims processing systems. In addition, all three MACs told us they suspended some of their other medical review efforts in order to implement the mandated outpatient therapy MMRs. For example, the MACs we interviewed explained that they typically use automated edits in their claims processing systems to flag claims for prepayment review in areas identified to be at higher risk for improper payments, such as certain billing codes or service areas, but told us they turned off some other outpatient therapy edits while conducting the mandated MMRs. All three MACs interviewed said that it was difficult to develop fully automated systems for processing preapproval requests at the start of the 3-month process. 
Two noted that they would have required several months to develop the type of automated systems that, integrated with their regular claims processing systems, would have enhanced the efficiency and accuracy of their MMR efforts. However, CMS did not issue written guidance until August 31, 2012, instructing the MACs to develop processes for receiving and tracking preapproval requests. The MACs we interviewed adapted their systems to manage the preapproval process in different ways with varying degrees of automation. Two of the MACs received requests by fax, scanned the requests and supporting documents, and saved them electronically by date or other identification numbers for tracking. One of these MACs also developed a database in which it manually entered and tracked its MMR decisions, which MAC staff then manually searched to match with submitted claims. The other, however, stated that it did not have time to establish such a database, and conducted reviews without any automation. A third MAC received all requests by mail and developed a database in which it entered its preapproval decisions. This MAC also developed an electronic edit in its claims processing system that tracked incoming therapy claims so they could be processed according to the preapproval decision. Though this MAC was able to automate this step in the preapproval process, staff explained that they were still in the process of testing the edit after the start of the MMR process, and continued to address system errors until December. CMS officials estimate that preapproval requests and claims for over 115,000 Medicare beneficiaries were subject to approximately 167,000 MMRs conducted by the MACs as of March 1, 2013. Delays in claim submissions and pending appeals create uncertainty about the final outcomes of the 2012 MMR process. 
CMS staff estimated that the MACs manually reviewed more than 167,000 preapproval requests and claims without preapprovals for outpatient therapy from October 1, 2012, through December 31, 2012, affecting more than 115,000 Medicare beneficiaries. Of these MMRs, an estimated 110,000 were for preapproval requests and 57,000 were for claims for services that were not preapproved. Of the estimated 110,000 preapproval requests reviewed, the MACs affirmed 80,500 (73 percent) and did not affirm 29,500 (27 percent). As of March 1, 2013, providers who did not request preapprovals submitted an estimated 57,000 claims for outpatient therapy services provided during the last quarter of 2012. The MMRs of claims without preapprovals resulted in 19,500 claims (34 percent) affirmed for payment and 37,000 claims (66 percent) not affirmed for payment. These estimates indicate that MMRs of both preapproval requests and claims resulted in a number of nonaffirmed outpatient therapy services during the last quarter of 2012 (see fig. 3). Both CMS officials and MAC staff acknowledged that the MACs were not able to process all the preapprovals submitted in a timely manner. The MACs do not usually conduct preapprovals of services, and staff stated that the high volume of preapproval requests outpaced the MACs’ capacity to review them. For example, the MACs we interviewed reported receiving thousands of preapproval requests by mail or fax prior to the start of the MMRs. By mid-October 2012, the MACs estimated they had received 46,000 preapproval requests for outpatient therapy services above the $3,700 threshold. In addition, the MACs rejected about 23 percent of all preapproval requests because they were incomplete. Incomplete requests could be resubmitted. In November 2012, more than 24,000 preapproval requests, on average, remained unreviewed at the end of each of the month's 4 weeks. 
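The affirmation rates reported above follow directly from the estimated counts; a quick arithmetic check using the rounded figures:

```python
def pct(part, whole):
    """Share as a whole-number percent (rounded)."""
    return round(100 * part / whole)

# Estimated MMR counts reported by CMS as of March 1, 2013:
affirmed_requests, total_requests = 80_500, 110_000  # preapproval requests
affirmed_claims, total_claims = 19_500, 57_000       # claims without preapproval

request_affirmation_rate = pct(affirmed_requests, total_requests)  # 73
claim_affirmation_rate = pct(affirmed_claims, total_claims)        # 34
```

Because these figures were assembled manually outside the MACs' computerized systems, they are approximations, and (as discussed below) the totals do not always reconcile exactly.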
Overall, the MACs estimated they completed MMRs for about 52 percent of the total preapproval requests received within the 10 days required by CMS. (See fig. 4.) By the end of December 2012, the MACs had conducted MMRs on about 15,000 claims submitted without preapproval requests. However, the MACs were not under the same time constraints when reviewing claims because, unlike the preapproval requests, CMS guidance permits the MACs 2 months to conduct MMRs after they receive the supporting documentation. In addition, claims for therapy provided during the last quarter of 2012 were submitted incrementally, increasing from about 15,000 at the end of December to almost 57,000 by March 1, 2013. As a result, the MMRs of these claims are staggered over time. CMS officials indicated that the number of claims submitted and beneficiaries affected by these prepayment MMRs would continue to increase in 2013. Although CMS was able to estimate the results of the MMRs conducted, the final outcomes of the 2012 MMRs remain uncertain due to inconsistencies among the MACs in how the data were collected, and errors in the calculation of the number of preapproval requests received and the MMR decisions made. In addition, the time lag for submitting claims and finalizing the appeals process means that the final outcome of the MMR process will not be known for months. CMS officials told us that MACs did the “best they could” and that the final numbers provided in the MMR weekly workload report were obtained outside the MACs’ computerized systems and should be considered approximate or an estimate of the results of the reviews at the time of this report. The manual processes CMS and the MACs used to complete the weekly MMR workload reports resulted in inconsistencies in the data. Both the CMS and MAC officials interviewed acknowledged that human error may have contributed to discrepancies in the reported numbers because the reports were assembled manually. 
In addition, due to the timing of CMS guidance throughout the MMR, the MACs reported collecting key data elements differently. For example, one MAC included the number of requests rejected in the total number of requests completed while two others did not. CMS officials also reported that they identified gaps or errors in MACs’ weekly workload reports, but the agency did not require the MACs to go back to revise prior weeks’ data. As a result, the running totals included errors from prior weeks and the final numbers do not total correctly. For example, the total number of treatment days that CMS estimates were requested (2.4 million) is significantly greater than the estimated total number of treatment days affirmed plus days nonaffirmed (1.9 million). The combination of potential delays in billing, the prepayment review of claims, and the appeals process also creates uncertainty about the final outcomes of the mandated MMRs associated with outpatient therapy services provided in 2012. Because claims for services provided from October 1, 2012, through December 31, 2012, may be submitted to the MACs as late as December 31, 2013, the total number of claims reviewed will not be known until 2014. In addition, CMS officials, some MAC staff, and outpatient therapy provider association representatives reported the filing of appeals for denials of payment for therapy provided during this period. The appeals process—which may involve five levels of review— could take more than 2 years to reach a conclusion, and any reversals of prior therapy coverage denials will affect the final outcomes of the 2012 MMR process. HHS provided written comments on a draft of this report. HHS highlighted CMS’s 2012 efforts to review the medical records associated with requests for exceptions for outpatient therapy services in excess of the annual $3,700 threshold. 
The department noted that CMS managed the new workload without additional funding and within a short time frame, and that the MACs shifted staff from other responsibilities to the MMR process. Outpatient therapy manual reviews were extended for 2013 and, according to HHS, CMS streamlined the MMRs of therapy services by transitioning the responsibility for these reviews from the MACs to the agency’s recovery audit contractors (RAC) as of April 1, 2013. The RACs are conducting prepayment reviews of claims at the $3,700 threshold in California, Florida, Illinois, Louisiana, Michigan, Missouri, New York, North Carolina, Ohio, Pennsylvania, and Texas, and are conducting immediate postpayment reviews in all other states. HHS’s comments are printed in appendix I. We are sending copies of this report to the Secretary of Health and Human Services, interested congressional committees, and others. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Martin T. Gahart, Assistant Director; George Bogart; Anne Hopewell; and Sara Rudow made key contributions to this report.
In 2011, Medicare paid about $5.7 billion to provide outpatient therapy services for 4.9 million beneficiaries. Rising Medicare spending for outpatient therapy services—physical therapy, occupational therapy, and speech-language pathology—has long been of concern. Congress established per person spending limits, or “therapy caps,” for nonhospital outpatient therapy, which took effect in 1999. In response to concerns that some beneficiaries needing extensive services might be affected adversely, Congress imposed temporary moratoria on the caps several times until 2006, when it required CMS to implement an exceptions process. The Middle Class Tax Relief and Job Creation Act of 2012, in addition to extending the exceptions process, required CMS to conduct MMRs of requests for exceptions for outpatient services provided on or after October 1, 2012, over an annual threshold of $3,700. The act also mandated that GAO report on the implementation of the MMR process. This report describes (1) CMS's implementation of the 2012 MMR process, and (2) the number of individuals and claims subject to MMRs and the outcomes of these reviews. GAO reviewed relevant statutes, CMS policies and guidance, and CMS data on these reviews. GAO also interviewed CMS staff and officials from three MACs that accounted for almost 50 percent of the MMR workload and that processed claims for states previously determined to be at a higher risk for outpatient therapy improper payments. The Centers for Medicare & Medicaid Services (CMS) implemented two types of manual medical reviews (MMR)—reviews of preapproval requests and reviews of claims submitted without preapproval—for all outpatient therapy services that were above a $3,700 per-beneficiary threshold provided during the last 3 months of 2012. 
However, CMS did not issue complete guidance on how to process preapproval requests before the implementation of the MMR process in October 2012, and the Medicare Administrative Contractors (MAC) that conducted the MMRs were unable to fully automate systems for tracking preapproval requests in the time allotted. CMS required the MACs to manually review preapproval requests within 10 business days of receipt of all supporting documentation to determine whether the services were medically necessary, and to automatically approve any requests they were unable to review within that time frame. CMS officials told GAO that the purpose of the preapproval process was to protect beneficiaries from being liable for payment for nonaffirmed services by giving the provider and beneficiary guidance as to whether Medicare would pay for the requested services. If a provider delivered services without submitting a preapproval request, the MACs were required to manually review submitted claims above the $3,700 threshold prior to payment within 60 days of receiving the needed documentation. The MACs faced particular challenges with implementing reviews of preapproval requests because CMS continued to issue new guidance on how to manage preapproval requests after the MMR process started. For example, CMS did not inform the MACs how to process incomplete requests or how to count the 10-day preapproval request review time frame until November 7, 2012, and the MACs initially handled requests differently. In addition, all three MACs GAO interviewed said that MMRs of preapproval requests were especially challenging because they did not have time to fully automate systems for tracking and processing the requests before the start of the MMR process, although they adapted their systems to manage the requests in different ways. 
CMS officials estimated that the MACs reviewed a total of about 167,000 preapproval requests and claims for outpatient therapy services above the $3,700 threshold provided from October 1, 2012, through December 31, 2012. Of these reviews, CMS estimated that 110,000 were for preapproval requests and 57,000 were for claims submitted without prior approval. However, due in part to the lack of automation, CMS officials reported that the total number of reviews should be considered an estimate of the results of the MMR process at the time of this report. CMS estimated that the MACs affirmed about two-thirds of the preapproval requests and about one-third of the claims submitted without preapproval. Because providers can appeal denials of payment, the final outcome of the MMRs remains uncertain. CMS also estimated that by December 31, 2012, over 115,000 beneficiaries were affected by the reviews in 2012, a number that will rise as more claims subject to review are submitted throughout 2013. In its comments on a draft of this report, HHS emphasized that CMS managed the 2012 MMR process without additional funding and within a short time frame. HHS noted that the MMR process was extended for 2013 and that CMS transitioned the responsibility for these reviews to other contractors as of April 1, 2013.
The safety and quality of the U.S. food supply are governed by a highly complex system that is based on more than 30 laws and administered by 12 agencies. In addition, there are over 50 interagency agreements to govern the combined food safety oversight responsibilities of the various agencies. The federal system is supplemented by the states, which have their own statutes, regulations, and agencies for regulating and inspecting the safety and quality of food products. The United States Department of Agriculture (USDA) and the Food and Drug Administration (FDA), within the Department of Health and Human Services (HHS), have most of the regulatory responsibilities for ensuring the safety of the nation’s food supply and account for most federal food safety spending. Under the Federal Meat Inspection Act, the Poultry Products Inspection Act, and the Egg Products Inspection Act, USDA is responsible for the safety of meat, poultry, and certain egg products. FDA, under the Federal Food, Drug, and Cosmetic Act and the Public Health Service Act, regulates all other foods, including whole (or shell) eggs, seafood, milk, grain products, and fruits and vegetables. Appendix I summarizes the agencies’ responsibilities. Existing statutes give the agencies different regulatory and enforcement authorities. For example, food products under FDA’s jurisdiction may be marketed without the agency’s prior approval. On the other hand, food products under USDA’s jurisdiction must generally be inspected and approved as meeting federal standards before being sold to the public. Although recent legislative changes have strengthened FDA’s enforcement authorities, the division of inspection authorities and other food safety responsibilities has not changed. 
As we have reported, USDA traditionally had more comprehensive enforcement authority than FDA; however, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 granted FDA additional enforcement authorities that are similar to USDA’s. For example, FDA can now require all food processors to register with the agency so that they can be inspected. FDA can also temporarily detain food products when there is credible evidence that the products present a threat of serious adverse health consequences, and FDA can require that entities such as the manufacturers, processors, and receivers of imported foods keep records to allow FDA to identify the immediate previous source and the immediate subsequent recipients of food, including its packaging. This recordkeeping authority is designed to help FDA track foods in the event of future health emergencies, such as terrorism-related contamination. In addition, FDA now has the authority to require advance notice of imported food shipments under its jurisdiction. Despite the additional enforcement authorities recently granted to FDA, important differences between the agencies’ inspection and enforcement authorities remain. Finally, in addition to their established food safety and quality responsibilities, following the events of September 11, 2001, the federal agencies began to address the potential for deliberate contamination of agriculture and food products. In 2001, by Executive Order, the President added the food industries to the list of critical infrastructure sectors that need protection from possible terrorist attack. As a result of this Executive Order, the Homeland Security Act of 2002, which established the Department of Homeland Security, and subsequent Presidential Directives, the Department of Homeland Security provides overall direction on how to protect the U.S. food supply from deliberate contamination. 
The Public Health Security and Bioterrorism Preparedness and Response Act also included numerous provisions to strengthen and enhance food safety and security. As we have stated in numerous reports and testimonies, the fragmented federal food safety system is not the product of strategic design. Rather, it emerged piecemeal, over many decades, typically in response to particular health threats or economic crises. In short, what authorities agencies have to enforce food safety regulations, which agency has jurisdiction to regulate what food products, and how frequently agencies inspect food facilities are determined by the legislation that governs each agency, or by administrative agreement between the two agencies, without strategic design as to how best to protect public health. It is important to understand that the origin of this problem is historical and, for the most part, grounded in the federal laws governing food safety. We and other organizations, including the National Academies, have issued many reports detailing problems with the federal food safety system and have made numerous recommendations for change. While many of these recommendations have been acted upon, problems in the food safety system persist, largely because food safety responsibilities are still divided among agencies that continue to operate under different laws and regulations. As a result, there is fragmentation, inconsistency, and overlap in the federal food safety system. These problems are manifested in numerous ways, as discussed below. Federal agencies have overlapping oversight responsibilities. Agency jurisdictions, either assigned by law over time or determined by agency agreements, result in overlapping oversight of single food products. For example, which agency is responsible for ensuring the safety of frozen pizzas depends on whether or not pepperoni is used as a topping. Figure 1 shows the agencies involved in regulating the safety of frozen pizza. 
In other instances, such as canned soups, it is the amount of a particular ingredient contained in the food product that governs whether it is subject to FDA or USDA inspection. As a result, canned soup producers are also subject to overlapping jurisdiction by the two food safety agencies. Overlap and duplication result in inefficient use of inspection resources. Food processing establishments may be inspected by more than one federal agency because they process foods that are regulated under different federal laws or because they participate in voluntary inspection programs. As of February 2004, FDA’s records show that there are about 2,000 food processing facilities in the United States that may handle foods regulated by both FDA and USDA because their products include a variety of ingredients. Multi-ingredient products that are regulated by both FDA and USDA include pizza, canned soups, and sandwiches. We found that 514 of the 8,653 FDA inspections conducted in six states between October 1987 and March 1991 duplicated those of other federal agencies. For example, FSIS had five inspectors assigned full time to a plant that processed soups containing meat or poultry, yet FDA inspected the same plant because it also processed soups that did not contain meat or poultry. Thus, rather than having the full-time inspectors assigned to the plant conduct inspections for all the plant’s products, additional inspectors from another agency were required to conduct separate inspections of products as a result of the different ingredients contained in the product. Moreover, there is also inefficient use of federal inspection resources dedicated to overseeing the safety of seafood products. FDA has responsibility for ensuring the safety of domestic and imported seafood products. 
However, as we reported in January 2004, the NOAA Seafood Inspection Program also provides fee-for-service safety, sanitation, and/or product inspections for approximately 2,500 foreign and domestic firms annually. Thus, FDA’s and NOAA’s programs duplicate inspections of seafood firms. To make more efficient use of federal inspection resources, we have recommended that FDA work toward developing a memorandum of understanding that leverages NOAA’s Seafood Inspection Program resources to augment FDA’s inspection capabilities. Federal agencies’ different authorities result in inconsistent inspection and enforcement. Despite the additional enforcement authorities granted to FDA by the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, differences between the agencies’ inspection and enforcement authorities remain. For example, when FSIS inspectors observe serious noncompliance with USDA’s food safety regulations, they have the authority to immediately withdraw their inspection services. This effectively stops plant operations because a USDA inspector must be present and food products under USDA’s jurisdiction generally must be inspected and approved as meeting federal standards before being sold to the public. This ensures more timely correction of problems that could affect the safety of meat and poultry products. In contrast, food products under FDA’s jurisdiction may be marketed without the agency’s prior approval. Thus, while FDA may temporarily detain food products when there is credible evidence that the products present a threat of serious adverse health consequences, FDA currently has no authority comparable to USDA’s that allows it to stop plant operations. As a result, problems identified during FDA inspections may take longer to correct. Federal agencies’ different authorities to oversee imported foods also result in inconsistent efforts to ensure safety. 
A significant amount of the food we consume is imported; yet, as we have testified in the past, the same fragmented structure and inconsistent regulatory approach is being used to ensure the safety of imported foods. For example, more than three-quarters of the seafood Americans consume is imported from an estimated 13,000 foreign suppliers in about 160 different countries. As we have reported, however, FDA’s system for ensuring the safety of imported seafood does not sufficiently protect consumers. For example, the agency inspected about 100 of roughly 13,000 foreign firms in 2002 and tested slightly over 1 percent of imported seafood products. In January 2004, we reported that despite some improvements, FDA is still able to inspect only a small proportion of U.S. seafood importers and visit few seafood firms overseas yearly. As we have previously recommended, a better alternative would be to strengthen FDA’s ability to ensure the safety of imported foods by requiring that all food eligible for importation to the United States be produced under equivalent food safety systems. USDA has such authority. In fact, USDA is legally required to review certifications made by other countries that their meat and poultry food safety systems ensure compliance with U.S. standards and USDA must also conduct on-site inspections before those products can be exported to the United States. At this time, 37 countries are approved to export meat and poultry products to the United States. Frequency of inspections is not based on risk. Under current law, USDA inspectors maintain continuous inspection at slaughter facilities and examine each slaughtered meat and poultry carcass. They also visit each processing plant at least once during each operating day. For foods under FDA jurisdiction, however, federal law does not mandate the frequency of inspections. The differences in inspection frequencies are, at times, quite arbitrary, as in the case of jointly regulated food products. 
For example, as we testified in 2001, federal responsibilities for regulating the production and processing of a packaged ham and cheese sandwich depend on whether the sandwich is made with one or two slices of bread, not on the risk associated with its ingredients. As a result, facilities that produce closed-faced sandwiches are inspected on average once every 5 years by FDA, whereas facilities that produce open-faced sandwiches are inspected daily by FSIS. Federal expenditures are not based on the volume of foods regulated, consumed, or their risk of foodborne illness. FDA and FSIS food safety efforts are based on the respective legislation governing their operation. As a result, expenditures for food safety activities are disproportionate to the amount of food products each agency regulates and to the level of public consumption of those food products. FDA is responsible for ensuring the safety of approximately 79 percent of the foods Americans consume annually, while its budget represented only 40 percent ($508 million) of the approximately $1.3 billion spent on food safety oversight during fiscal year 2003. In contrast, FSIS inspects approximately 21 percent of the foods Americans consume annually, while its food safety budget represented 60 percent ($756 million) of the federal expenditures for food safety in 2003. Figure 2 shows the imbalance between the dollar amounts that the agencies spend on food safety activities and the volume of foods Americans consume annually. Perhaps more importantly, the agencies’ food safety expenditures are disproportionate to the percentage of foodborne illnesses linked to the food products they regulate. For example, according to foodborne illness data compiled by the CDC, USDA-regulated foods account for about 32 percent of reported foodborne outbreaks with known sources. Conversely, FDA-regulated foods account for about 68 percent of these outbreaks. (See fig. 3.) 
Yet, USDA’s food safety expenditures are about 49 percent more than FDA’s. Finally, as figure 4 shows, FSIS has 9,170 employees who are, by law, responsible for daily oversight of approximately 6,464 meat, poultry, and egg product plants. FDA has roughly 1,900 food inspection employees who, among other things, inspect about 57,000 food establishments. Overlaps in egg safety responsibility compromise safety. Overlapping responsibilities have resulted in extensive delays in the development of a comprehensive regulatory strategy to ensure egg safety. As we have reported, no single federal agency has overall responsibility for the policies and activities needed to ensure the safety and quality of eggs and egg products. Figure 5 shows the overlapping responsibilities of the multiple agencies involved in overseeing the production, processing, and transportation of eggs and egg products. As shown in figure 5, FDA has the primary responsibility for the safe production and processing of eggs still in the shell (known by industry as shell eggs), whereas FSIS has the responsibility for food safety at the processing plants where eggs are broken to create egg products. Despite FSIS and FDA attempts to coordinate their efforts on egg safety, more than 10 years have passed since the problem of bacterial contamination of intact shell eggs was first identified, and a comprehensive safety strategy has yet to be implemented. Agency representatives serving on the President’s Council on Food Safety developed an Egg Safety Action Plan in 2000 and identified egg safety as one component of food safety that warranted immediate federal, interagency action. As of March 2004, comprehensive regulations to implement the actions the agencies identified in the Action Plan have not been published. Claims of health benefits for foods may be treated inconsistently by different federal agencies. 
Overlaps also exist in the area of health benefit claims associated with certain foods and dietary supplements. FDA, USDA, and the Federal Trade Commission (FTC) share responsibility for determining what types of health benefit claims are allowed on product labels and in advertisements. The varying statutory requirements among the agencies can lead to inconsistencies in labeling and advertisements. As a result, the use of certain health benefit claims on a product might be denied by one agency but allowed by another. For example, the FTC may allow a health claim in an advertisement as long as it meets the requirements of the Federal Trade Commission Act, even if FDA has not approved it for use on a label. Similarly, USDA reviews requests to use health claims on a case-by-case basis, regardless of whether or not FDA has approved them. Thus, consumers face a confusing array of claims, which may lead them to make inappropriate dietary choices. Multiple agencies must respond when serious food safety challenges emerge. Inconsistent food safety authorities result in the need for multiple agencies to respond to emerging food safety challenges. This was illustrated recently with regard to ensuring that animal feed is free of diseases, such as bovine spongiform encephalopathy (BSE), or mad cow disease. A fatal human variant of the disease is linked to eating beef from cattle infected with BSE. As we reported in 2002, four federal agencies are responsible for overseeing the many imported and domestic products that pose a risk of BSE. One, U.S. Customs and Border Protection, screens all goods entering the United States to enforce its laws and the laws of 40 other agencies. The second, USDA’s Animal and Plant Health Inspection Service (APHIS), protects livestock from animal diseases by monitoring the health of domestic and imported livestock. 
The third, USDA’s FSIS, monitors the safety of imported and domestically produced meat and, at slaughterhouses, tests animals prior to slaughter to determine if they are free of disease and safe for human consumption. Finally, FDA monitors the safety of animal feed—animals contract BSE through feed that contains protein derived from the remains of diseased animals. During the recent discovery of an infected cow in Washington state, FDA investigated facilities that might have handled byproducts from the infected animal to make animal feed. Figure 6 illustrates the fragmentation in the agencies’ authorities. When we issued our report in 2002, BSE had not been found in U.S. cattle. However, we found a number of weaknesses in import controls. Because of those weaknesses and the disease’s long incubation period—up to 8 years—we concluded that BSE might be silently incubating somewhere in the United States. Then, in May 2003, an infected cow was found in Canada, and in December 2003, another was found in the state of Washington. USDA’s Animal and Plant Health Inspection Service operates the surveillance program that found the infected U.S. cow, while FDA must ensure that the disease cannot spread by enforcing an animal feed ban that prohibits the use of cattle brains and spinal tissue, among other things, in cattle feed. With regard to the meat from the BSE-infected animal found in Washington state, FSIS conducted a recall of meat distributed in markets in six states. Both USDA and FDA have reported that meat from the cow was not used in FDA-regulated foods. However, had the meat been used, for example, in canned soups that contained less than 2 percent meat, FDA—not FSIS—would have been responsible for working with companies to recall those foods. (As app. II shows, the agencies’ oversight responsibilities for food products vary depending on the amount of beef or poultry content.) 
Neither FDA nor USDA has authority under existing food safety laws to require a company to recall food products. Both agencies work informally with companies to encourage them to initiate a recall, but our ongoing work shows that each agency has different approaches and procedures. This can be confusing to food processors involved in a recall. Overlapping responsibilities in responding to mad cow disease highlight the challenges that government and industry face when responding to the need to remove contaminated food products from the market. As part of work currently underway, we are looking at USDA and FDA food recalls—including USDA’s oversight of the BSE-related recall and FDA’s oversight of the feed ban. We are also monitoring both USDA’s and FDA’s BSE-response activities. There are undoubtedly other federal food safety activities where overlap and duplication may occur. For example, in the areas of food safety research, public outreach, or both, FDA and USDA’s Economic Research Service, FSIS, and the Cooperative State Research, Education, and Extension Service have all received funding to develop food safety-related educational materials for the public. In addition, responsibility for regulating genetically modified foods is shared among FDA, USDA, and the Environmental Protection Agency (EPA). However, we have not yet examined the extent to which these and other areas of overlap and duplication impact the efficiency of the food safety system. The fragmented legal and organizational structures of the federal food safety system are now further challenged by the realization that American farms and food are vulnerable to potential attack and deliberate contamination. 
As we recently reported in a statement for the record before the Senate Committee on Governmental Affairs, bioterrorist attacks could be directed at many different targets in the farm-to-table continuum, including crops, livestock, food products in the processing and distribution chain, wholesale and retail facilities, storage facilities, transportation, and food and agriculture research laboratories. Experts believe that terrorists would attack livestock and crops if their primary intent were to cause severe economic dislocation. Terrorists could decide to contaminate finished food products if their motive were to harm humans. Both FDA and USDA have taken steps to protect the food supply against a terrorist attack, but it is, for the most part, the current food safety system that the nation must depend on to prevent and respond to bioterrorist acts against our food supply. For example, in February 2003, we reported that FDA and USDA determined that their existing statutes empower them to enforce food safety but do not provide them with clear authority to regulate all aspects of security at food-processing facilities. Neither agency believes that it has the authority to require processors to adopt physical facility security measures such as installing fences, alarms, or outside lighting. Each agency independently developed and published guidelines that food processors may voluntarily adopt to help them identify security measures and mitigate the risk of deliberate contamination at their production facilities. However, while food inspectors were instructed to be vigilant, they have not been asked to enforce, monitor, or document their actions regarding the extent to which security measures are being adopted. As a result, neither FDA nor USDA can fully assess the extent to which food processors are following the security guidelines that the agencies developed. 
Officials note, however, that they have taken many steps to address deliberate food contamination. Both agencies have distributed food security information to food processors under their jurisdictions and are cochairing the Food Emergency Response Network, which integrates the nation’s laboratory infrastructure for the detection of threat agents in food at the local, state, and federal levels. Among other things, USDA established the Office of Food Security and Emergency Preparedness, enhanced security at food safety laboratories, and trained employees in preparedness activities. Similarly, FDA revised emergency response plans and conducted training for all staff, as well as participated in various emergency response exercises at FDA’s Center for Food Safety and Applied Nutrition. Another GAO report documented vulnerabilities in federal efforts to prevent dangerous animal diseases from entering the United States. Our 2002 report on foot-and-mouth disease concluded that because of the sheer magnitude of international passengers and cargo that enters this country daily, completely preventing the entry of foot-and-mouth disease may not be feasible. During the 2001 outbreak of foot-and-mouth disease in Europe, poor communication between USDA and Customs officials caused delays in carrying out inspections of international passengers and cargo arriving from disease-affected countries. To address the problems I have just outlined, a fundamental transformation of the current food safety system is necessary. As the Comptroller General has testified, there are no easy answers to the challenges federal departments and agencies face in transforming themselves. Changes, such as revamping the U.S. food safety system, will require a process that involves key congressional stakeholders and administration officials as well as others, ranging from food processors to consumers. 
There are different opinions about the best organizational model for food safety, but there is widespread national and international recognition of the need for uniform laws and the consolidation of food safety activities. Establishing a single food safety agency responsible for administering a uniform set of laws would offer the most logical approach to resolving long-standing problems with the current system, addressing emerging threats to food safety, and ensuring a safer food supply. This would ensure that food safety issues are addressed comprehensively by better preventing contamination throughout the entire food cycle—from the production and transportation of foods through their processing and sale until their eventual consumption by consumers. In our view, integrating the overlapping and duplicative responsibilities for food safety into a single agency or department can create synergy and economies of scale that would provide for more focused and efficient efforts to protect the nation’s food supply. A second option would be to consolidate all food safety inspection activities, but not other activities, under an existing department, such as USDA or HHS. Other measures have not proven successful. For example, the Farm Security and Rural Investment Act of 2002 mandated the creation of a 15-member Food Safety Commission charged with making specific recommendations to improve the U.S. food safety system and delivering a report to the President and the Congress within a year. The Congress has thus far not provided funding for the commission. Simply choosing an organizational structure will not be sufficient, however. For the nation’s food safety system to be successful, it will also be necessary to reform the current patchwork of food safety legislation and make it uniform, consistent, and risk-based. As table 1 shows, five of eight former senior food safety officials with whom we discussed the matter in preparation for this testimony concur with this view. 
Three officials had different views on the best approach to address problems with the current food safety system. Joseph Levitt, director of the FDA’s Center for Food Safety and Applied Nutrition from 1998 to 2003, recommends that the existing agencies be fully funded. Thomas Billy, administrator of USDA’s FSIS from 1996 to 2001 and director of FDA’s Office of Seafood between 1990 and 1994, believes that no changes should take place until a presidential commission evaluates the problems, identifies the alternatives, and recommends a specific approach and strategy for consolidating food safety programs. However, Mr. Billy supports incremental legislative steps to fix current shortcomings. Finally, Caren Wilcox, USDA’s deputy under secretary for Food Safety from 1997 to 2001, believes that creating a single food safety agency would be advisable, but only under certain circumstances. In 1998, the National Academies similarly recommended modifying the federal statutory framework for food safety to avoid fragmentation and to enable the creation and enforcement of risk-based standards. Moreover, our 1999 report on the experiences of countries that were then consolidating their food safety systems indicated that foreign officials are expecting long-term benefits in terms of savings and food safety. Five countries—Canada, Denmark, Great Britain, Ireland, and New Zealand—have each consolidated their food safety responsibilities under a single agency. For example, New Zealand’s Food Safety Authority was created in July 2002 to reduce inconsistencies and lack of coordination in food safety management by two separate agencies—the Ministry of Health and the Ministry of Agriculture and Forestry. The new authority anticipates an effective use of scarce resources and a reduction in duplication of effort. 
In conclusion, given the risks posed by new threats to the food supply, be they inadvertent or deliberate, we can no longer afford inefficient, inconsistent, and overlapping programs and operations in the food safety system. It is time to ask whether a system that developed in a piecemeal fashion in response to specific problems as they arose over the course of several decades can efficiently and effectively respond to today’s challenges. We believe that creating a single food safety agency to administer a uniform, risk-based inspection system is the most effective way for the federal government to resolve long-standing problems, address emerging food safety issues, and better ensure the safety of the nation’s food supply. This integration can create synergy and economies of scale, and provide more focused and efficient efforts to protect the nation’s food supply. The National Academies and the President’s Council on Food Safety have reported that comprehensive, uniform, and risk-based food safety legislation is needed to provide the foundation for a consolidated food safety system. We recognize that consolidating federal responsibilities for food safety into a single agency or department is a complex process. Numerous details, of course, would have to be worked out. However, it is essential that the fundamental decision to create more uniform standards and a single food safety agency to uphold them is made and the process for resolving outstanding technical issues is initiated. To provide more efficient, consistent, and effective federal oversight of the nation’s food supply, we suggest that the Congress consider enacting comprehensive, uniform, and risk-based food safety legislation establishing a single, independent food safety agency at the Cabinet level. 
If the Congress does not opt for an entire reorganization of the food safety system, we suggest that as an alternative interim option it consider modifying existing laws to designate one current agency as the lead agency for all food safety inspection matters. Madam Chairwoman, this completes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact Lawrence J. Dyckman, Director, Natural Resources and Environment, (202) 512-3841. Maria Cristina Gobin, Katheryn Summers Hubbell, Kelli Ann Walther, Amy Webbink, and John Delicath made key contributions to this statement. Food Safety: FDA’s Imported Seafood Safety Program Shows Some Progress, but Further Improvements Are Needed. GAO-04-246. Washington, D.C.: January 30, 2004. Bioterrorism: A Threat to Agriculture and the Food Supply. GAO-04-259T. Washington, D.C.: November 19, 2003. Combating Bioterrorism: Actions Needed to Improve Security at Plum Island Animal Disease Center. GAO-03-847. Washington, D.C.: September 19, 2003. Results-Oriented Government: Shaping the Government to Meet 21st Century Challenges. GAO-03-1168T. Washington, D.C.: September 17, 2003. School Meal Programs: Few Instances of Foodborne Outbreaks Reported, but Opportunities Exist to Enhance Outbreak Data and Food Safety Practices. GAO-03-530. Washington, D.C.: May 9, 2003. Agricultural Conservation: Survey Results on USDA’s Implementation of Food Security Act Compliance Provisions. GAO-03-492SP. 
Washington, D.C.: April 21, 2003. Food-Processing Security: Voluntary Efforts Are Under Way, but Federal Agencies Cannot Fully Assess Their Implementation. GAO-03-342. Washington, D.C.: February 14, 2003. Meat and Poultry: Better USDA Oversight and Enforcement of Safety Rules Needed to Reduce Risk of Foodborne Illnesses. GAO-02-902. Washington, D.C.: August 30, 2002. Foot and Mouth Disease: To Protect U.S. Livestock, USDA Must Remain Vigilant and Resolve Outstanding Issues. GAO-02-808. Washington, D.C.: July 26, 2002. Genetically Modified Foods: Experts View Regimen of Safety Tests as Adequate, but FDA’s Evaluation Process Could Be Enhanced. GAO-02-566. Washington, D.C.: May 23, 2002. Food Safety: Continued Vigilance Needed to Ensure Safety of School Meals. GAO-02-669T. Washington, D.C.: April 30, 2002. Mad Cow Disease: Improvements in the Animal Feed Ban and Other Regulatory Areas Would Strengthen U.S. Prevention Efforts. GAO-02-183. Washington, D.C.: January 25, 2002. Food Safety: Weaknesses in Meat and Poultry Inspection Pilot Should Be Addressed Before Implementation. GAO-02-59. Washington, D.C.: December 17, 2001. Food Safety and Security: Fundamental Changes Needed to Ensure Safe Food. GAO-02-47T. Washington, D.C.: October 10, 2001. Food Safety: CDC Is Working to Address Limitations in Several of Its Foodborne Disease Surveillance Systems. GAO-01-973. Washington, D.C.: September 7, 2001. Food Safety: Overview of Federal and State Expenditures. GAO-01-177. Washington, D.C.: February 20, 2001. Food Safety: Federal Oversight of Seafood Does Not Sufficiently Protect Consumers. GAO-01-204. Washington, D.C.: January 31, 2001. Food Safety: Actions Needed by USDA and FDA to Ensure That Companies Promptly Carry Out Recalls. GAO/RCED-00-195. Washington, D.C.: August 17, 2000. Food Safety: Improvements Needed in Overseeing the Safety of Dietary Supplements and “Functional Foods.” GAO/RCED-00-156. Washington, D.C.: July 11, 2000. 
School Meal Programs: Few Outbreaks of Foodborne Illness Reported. GAO/RCED-00-53. Washington, D.C.: February 22, 2000. Meat and Poultry: Improved Oversight and Training Will Strengthen New Food Safety System. GAO/RCED-00-16. Washington, D.C.: December 8, 1999. Food Safety: Agencies Should Further Test Plans for Responding to Deliberate Contamination. GAO/RCED-00-3. Washington, D.C.: October 27, 1999. Food Safety: U.S. Needs a Single Agency to Administer a Unified, Risk-Based Inspection System. GAO/T-RCED-99-256. Washington, D.C.: August 4, 1999. Food Safety: U.S. Lacks a Consistent Farm-to-Table Approach to Egg Safety. GAO/RCED-99-184. Washington, D.C.: July 1, 1999. Food Safety: Experiences of Four Countries in Consolidating Their Food Safety Systems. GAO/RCED-99-80. Washington, D.C.: April 20, 1999. Food Safety: Opportunities to Redirect Federal Resources and Funds Can Enhance Effectiveness. GAO/RCED-98-224. Washington, D.C.: August 6, 1998. Food Safety: Federal Efforts to Ensure Imported Food Safety Are Inconsistent and Unreliable. GAO/T-RCED-98-191. Washington, D.C.: May 14, 1998. Food Safety: Federal Efforts to Ensure the Safety of Imported Foods Are Inconsistent and Unreliable. GAO/RCED-98-103. Washington, D.C.: April 30, 1998. Food Safety: Agencies’ Handling of a Dioxin Incident Caused Hardships for Some Producers and Processors. GAO/RCED-98-104. Washington, D.C.: April 10, 1998. Food Safety: Fundamental Changes Needed to Improve Food Safety. GAO/RCED-97-249R. Washington, D.C.: September 9, 1997. Food Safety: Information on Foodborne Illnesses. GAO/RCED-96-96. Washington, D.C.: May 8, 1996. Food Safety: Changes Needed to Minimize Unsafe Chemicals in Food. GAO/RCED-94-192. Washington, D.C.: September 26, 1994. Food Safety: A Unified, Risk-Based Food Safety System Needed. GAO/T-RCED-94-223. Washington, D.C.: May 25, 1994. Food Safety: Risk-Based Inspections and Microbial Monitoring Needed for Meat and Poultry. GAO/RCED-94-110. 
Washington, D.C.: May 19, 1994. Food Safety and Quality: Uniform, Risk-Based Inspection System Needed to Ensure Safe Food Supply. GAO/RCED-92-152. Washington, D.C.: June 26, 1992. Food Safety and Quality: Salmonella Control Efforts Show Need for More Coordination. GAO/RCED-92-69. Washington, D.C.: April 21, 1992. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The safety of the U.S. food supply is governed by a highly complex system of more than 30 laws administered by 12 agencies. In light of the recent focus on government reorganization, it is time to ask whether the current system can effectively and efficiently respond to today's challenges. At the request of the Subcommittee on Civil Service and Agency Organization, we reviewed and summarized our work on the safety and security of the food supply regarding (1) the fragmented legal and organizational structure of the federal food safety system, (2) the consequences of overlapping and inconsistent inspection and enforcement, and (3) options for consolidating food safety functions. As we have stated in numerous reports and testimonies, the federal food safety system is not the product of strategic design. Rather, it emerged piecemeal, over many decades, typically in response to particular health threats or economic crises. The result is a fragmented legal and organizational structure that gives responsibility for specific food commodities to different agencies and provides them with significantly different authorities and responsibilities. The existing food safety statutes create fragmented jurisdictions between the two principal food safety agencies, the Food and Drug Administration (FDA) and the U.S. Department of Agriculture (USDA). As a result, there are inconsistencies in the frequency of the agencies' inspections of food facilities and the enforcement authorities available to these agencies. In short, which agency has jurisdiction to regulate various food products, the regulatory authorities they have available to them, and how frequently they inspect food facilities is determined by disparate statutes or by administrative agreement between the two agencies, without strategic design as to how to best protect public health. In many instances, food processing facilities are inspected by both FDA and USDA. 
Furthermore, federal food safety efforts are based on statutory requirements, not risk. For example, funding for USDA and FDA is not proportionate to the amount of food products each agency regulates, to the level of public consumption of those foods, or to the frequency of foodborne illnesses associated with food products. A federal food safety system with diffused and overlapping lines of authority and responsibility cannot effectively and efficiently accomplish its mission and meet new food safety challenges. These challenges are more pressing today as we face emerging threats such as mad cow disease and the potential for deliberate contamination of our food supply through bioterrorism. Therefore, fundamental changes are needed. First, there is a need to overhaul existing food safety legislation to make it uniform, consistent, and risk-based. Second, consolidation of food safety agencies under a single independent agency or a single department is needed to improve the effectiveness and efficiency of the current federal food safety system. Integrating the overlapping responsibilities for food safety into a single agency or department can create synergy and economies of scale, as well as provide more focused and efficient efforts to protect the nation's food supply.
NRC is an independent federal agency that (1) establishes standards and regulations for commercial nuclear power plants and non-power research, test, and training reactors; fuel cycle facilities; medical, academic, and industrial uses of nuclear materials; and the transport, storage, and disposal of nuclear materials and wastes, (2) issues licenses for nuclear facilities and uses of nuclear materials, such as industrial applications, nuclear medicine, academic activities, and research work, and (3) inspects facilities and the uses of nuclear materials to ensure compliance with regulatory requirements. While safety is a paramount goal, a reassessment in 2001 added three subordinate performance goals to NRC’s strategic plan: (1) to make NRC activities and decisions more effective, efficient, and realistic, (2) to reduce unnecessary regulatory burden on industry without affecting safety, and (3) to increase public confidence in NRC actions. Figure 1 shows NRC’s organization. NRC is governed by a five-member commission with one member designated by the President to serve as Chairman. The Chairman serves as the principal executive officer and official spokesperson of the commission. Reporting to the Commission Chairman is the Executive Director for Operations (EDO). The EDO is the chief operational and administrative officer of NRC, and is generally responsible for executing the program policies and decisions made by the NRC. Also reporting to the Commission Chairman is the Chief Financial Officer (CFO), who is responsible for the agency’s planning, budgeting, and performance management (PBPM) process and all of NRC’s financial management activities. NRC is organized into seven program offices under the EDO. The Office of Nuclear Reactor Regulation (NRR), the Office of Nuclear Material Safety and Safeguards (NMSS), the Office of Nuclear Regulatory Research (RES), and the newly created Office of Nuclear Security and Incident Response (NSIR) are NRC’s four largest offices. 
It also has three smaller program offices, various other management and mission support offices, and four regional offices. While strategic planning, budgeting, and program implementation involve headquarters offices and regional operations, we focused our work on those offices that NRC officials said had more experience in PBPM implementation. The Office of the CFO (OCFO), which includes the Division of Planning, Budget, and Analysis, is responsible for NRC’s financial management and reporting under GPRA. NRR licenses and inspects nuclear power reactors and non-power reactors. NMSS directs and oversees licensing, inspection, and environmental activities for nuclear fuel cycle facilities and safeguards nuclear materials, including the management and disposal of high- and low-level radioactive wastes. RES provides technical support to frontline regulatory activities, including licensing and inspection, oversight, and the development of regulatory products. NSIR combines NMSS responsibilities for protection of fuel cycle facilities and materials with NRR responsibilities for physical security at nuclear power plants and other facilities. The four regions execute NRC policies and various programs relating to inspection, licensing, enforcement, investigation, governmental liaison, and emergency response within their regional boundaries. NRC employed approximately 2,900 people and had a total budget of approximately $559 million in fiscal year 2002. Of that amount, the Congress transferred about $23.7 million from the Nuclear Waste Fund. The remainder was to be financed by a mix of revenues from licensing, inspection services, and other services and collections, and amounts from the general fund of the Treasury. These amounts were made available in NRC’s annual appropriations and in an emergency supplemental appropriation to support homeland-security-related activities. Over half of NRC’s annual budget is used to pay staff salaries and benefits. 
The remaining funds are used to support other operating expenses, purchase technical assistance for regulatory programs, and conduct safety research. During the 1990s, various concerns were raised about NRC’s performance, particularly the way NRC conducted inspections and promulgated regulations. Agency officials told us that NRC’s former Commission Chairman, Shirley Jackson, was concerned that NRC’s practices were narrowly focused on ensuring that its activities and processes were consistent with regulatory law without adequate attention to the results of its activities. Both the nuclear industry and public interest groups criticized NRC’s plant assessment and enforcement processes as lacking objectivity, consistency, and predictability. An NRC report also described its former regulatory approach as punitive and reactive. According to a senior agency official, the agency was concerned that the Congress would cut about one-third of the agency’s staff from the NRC budget for fiscal year 1999 unless the agency changed the way it conducted business. NRC took various steps to improve regulatory oversight and agency management. These changes included a comprehensive strategic planning effort from 1995 to 1997 to reassess and establish new baselines for its programs, led by then-Chairman Jackson. NRC also charged the OCFO and the former Executive Council with developing a new planning, budgeting, and performance management process. NRC staff said that PBPM changes also supported the agency’s efforts to implement GPRA. NRC established PBPM in the fall of 1997 and implemented a pilot project in NRR. In 1999, NRC extended PBPM to NMSS and RES for the fiscal year 2000 budget. NRC plans to further develop PBPM to include more detailed procedures, the products involved, and the roles of various management levels. 
To achieve our objectives, we interviewed selected NRC staff members from the offices of the EDO, the CFO, and the Chief Information Officer; from three headquarters offices in Rockville, Maryland (NRR, NMSS, and RES); and from the Region II (Atlanta) office for their perspectives on PBPM and how it supports resource decisions. The Region II office was selected because, according to NRC officials, this region had been instrumental in developing a cohesive operating plan—one of the PBPM techniques used by NRC to enhance coordination among program offices and regions. Within these organizations, we interviewed officials at various levels of management involved in the budget decision-making process, including office directors, division directors, and unit managers. In total, we interviewed more than 30 NRC officials on the various aspects of planning and budgeting practices. We reviewed NRC’s planning, budget, and program documents that support PBPM, including strategic plans, annual performance plans, budget requests, operating plans, and performance reports. This report presents NRC’s budget and planning practices as described by the NRC officials we interviewed and described in the NRC documents we reviewed. The views of those individuals and the information in these documents, which we have summarized for reporting purposes, may not necessarily be generalized across NRC. We also did not observe or evaluate the processes in operation, nor did we assess the program or financial information contained in documents provided by NRC. We also did not evaluate the completeness or accuracy of NRC performance goals and measures or the effectiveness of NRC rule making, licensing, inspection, and oversight programs. Our work was conducted from February through May of 2002 in accordance with generally accepted government auditing standards. Implementation of PBPM is a work in progress. 
PBPM was created by NRC to improve program and service performance by integrating NRC’s strategic planning and budgeting processes. This section describes how components of the process were designed to operate, while the next section (“Planning and Performance Information Influences Resource Allocation Decisions in Various Ways”) explains how performance information informs resource decisions in those offices that have implemented PBPM and its techniques. NRC has gradually introduced PBPM techniques across the agency and has allowed offices some flexibility during implementation of the process. NRC began implementation in its larger program and mission support offices. As NRC has gained experience, it is examining ways to extend the process to the smaller program and mission support offices and to more fully standardize PBPM techniques across the agency. NRC designed PBPM as an integrated process that functions most effectively when information from one component is used to inform decisions in other components. Figure 2 shows how the four components interact over a budget cycle. For example, the strategic direction setting in Component 1 relies in part on the assessment elements in Component 4. The effectiveness review element in Component 2 relies on performance goals developed during strategic direction setting. Finally, the assessment elements in Component 4 incorporate information gathered from Component 3, performance monitoring, to identify topics for program evaluations and self-assessments. In Component 1, NRC establishes agencywide strategic direction by formulating the strategic plan and by issuing Commission guidance throughout the year. The plan includes NRC’s strategic and performance goals and corresponding measures and identifies general strategies on how best to achieve the agency’s mission. 
The plan is developed with Commission and stakeholder involvement by a senior management group with a broad perspective of the agency, and is approved by the Commission. Although the plan covers 5 years and is reexamined every 3 years as required by GPRA, if circumstances warrant, the plan can be changed more often. The plan also establishes a framework called “strategic arenas,” each of which is composed of related programs with a common purpose. NRC’s strategic arenas correspond to program activities in the President’s budget. In addition, the Commission provides direction to its managers on programs and operations through various written directives. In Component 2, managers in offices using PBPM employ a set of interrelated tools to translate agency goals and strategies into individual office work activities, performance targets, and resource needs. To determine how work activities contribute to achieving NRC’s four performance goals, individual offices conduct what are called effectiveness reviews. These reviews are not comprehensive assessments of programs but rather a structured way for managers to evaluate the contribution of work activities to achieving performance goals prior to budget formulation. For example, an office will examine each of its work activities and ask how a given activity achieves each of the performance goals. Effectiveness reviews also assist offices in identifying where there are gaps in activities or where new initiatives are needed. Agency officials said that offices that conduct these reviews have used various methodologies to rank office activities relative to agency performance goals. According to agency officials, if an office determines through an effectiveness review that activities are not critical to achieving NRC performance goals, the office will likely propose reducing or eliminating resources for the activity in the upcoming budget year. 
Effectiveness review discussions may begin prior to the start of the annual budget process, concurrent with Component 1 activities establishing strategic direction. These discussions enable senior management to provide guidance on expectations for work priorities (targets). The budget assumptions document is a tool used to plan work activities based on workload and set performance targets. This document identifies external and internal factors, such as the anticipated number of license reviews, that will affect the agency’s workload over the next 2 fiscal years. These assumptions are developed by the offices and approved by NRC executive-level managers. These assumptions then become key inputs for offices when formulating their resource needs for the upcoming budget year. Each budget assumption is supported by a summary of the factors that were evaluated to produce the assumption and to indicate the likelihood that this assumption will materialize. For example, the fiscal year 2003-2004 budget assumptions document estimates approximately 1,500 enforcement actions for each year. This estimate is based on historical trends and anticipated results from implementation of the revised reactor oversight process. In addition, the budget assumptions document includes related information that may affect the assumptions. In the above example, NRC is attempting to integrate Alternative Dispute Resolution techniques into the enforcement program, a decision that may require additional resources to implement. Finally, through its annual budget call NRC provides instructions to individual offices for developing office budget priorities. Individual offices submit budgets to the NRC executive level by program. These submissions address resources needed by each office to accomplish NRC strategic and performance goals. A group of senior managers then reviews office budget submissions by strategic arena and submits the proposed office budget to the CFO and EDO. 
The CFO and EDO then submit their proposed budget to the Chairman for Commission approval. After Commission approval, NRC submits a combined annual budget and performance plan to OMB for inclusion in the President’s budget. The combined budget and performance plan also serves as the agency’s budget justification to the Congress. Figure 3 shows how NRC’s performance plan links program activities and funding allocations by goal. In Component 3, NRC executes the approved budget through office operating plans based on appropriations, congressional guidance, and Commission priorities. Each office prepares operating plans to reflect the allocation of staff years and funds available following appropriations action and OMB apportionment. The operating plans, tailored by each office implementing PBPM, tie allocated staff and other resources to each work activity and to performance goals and define how success is measured for each activity. As the budget is executed, operating plans also are used to compare actual office resources to budget estimates and actual performance to targeted performance, and to identify necessary programmatic and fiscal actions. Based on targets established in the operating plans, individual offices develop quarterly reports on the status of resources and performance. Any performance issues identified in the quarterly reports are discussed with the deputy executive director responsible for that particular office. Generally, when an office meets with its cognizant deputy executive director, it has prepared a course of corrective action it intends to take. However, if an issue is significant, senior staff members will meet with their deputy when they become aware of the issue rather than wait for the quarterly operating plan update. Follow-up actions are incorporated into the next scheduled operating plan meeting as appropriate. 
The Office of the EDO does not prepare quarterly reports summarizing its review of office operating plans for the Commission. Instead, the Commission is kept informed of operating plan issues throughout the year by various means including Commission meetings, staff papers, the Budget Execution Report, and individual briefings. Finally, performance results are reported annually through a publicly available agency performance report. In Component 4, NRC assesses agency performance. This component is designed to use information from and feed information to other components. Although this component is the least developed of the four components, products are intended to both inform future planning and budget deliberations and further improve performance. (A later section of this report, “Challenges to Improving the NRC Budget and Planning Process,” more fully discusses challenges to improving the assessment component). When fully operational, this component should help NRC to determine whether a program should be continued, restructured, or curtailed and, as designed, may influence planning and budget decisions in Components 1 and 2. In July 2002, NRC proposed that this component include performance reviews conducted for the four major strategic arenas as well as selected management and support offices. However, no decision has been made on who in NRC will conduct these reviews. In addition, individual offices can identify issues during the performance monitoring component that they may select for internal self-assessments during Component 4. PBPM provides NRC with a framework through which it can use performance information to influence planning and resource allocation decisions and is consistent in key respects with our framework for budget practices. NRC informs its resource allocation decisions by providing strategic direction to operating units prior to budget formulation and by monitoring actual performance against performance targets during budget execution. 
PBPM also promotes agencywide coordination of budget formulation and execution decisions by providing a common language and common goals. A key principle driving PBPM is that the agency’s strategic direction influences internal policy and resource decisions. NRC seeks to use PBPM to identify general strategies to achieve goals, identify programs to implement these strategies, and determine resources to fund and staff programs. NRC practices are similar to those proposed in our framework for budget practices. Under the framework for budget practices, agency management should provide context during budget formulation in the form of general guidance to program managers on proposed agency goals, existing performance issues, and resource constraints—consistent with Components 1 and 2 of PBPM. The following are examples of operation and program decisions that link NRC’s strategic direction with corresponding resource decisions made through PBPM. One of the strategies used to implement the four performance goals in the strategic plan is risk-informed regulation and oversight. This strategy uses risk assessment findings, engineering analysis, and performance history to focus attention on the most important safety-related activities; establishes objective criteria to evaluate performance; develops measures to assess licensee performance; and uses performance results as the primary basis for making regulatory decisions. As part of its risk-informed regulation and oversight strategy, NRC modified its reactor oversight program to help achieve its three subordinate performance goals—developed through Component 1—while maintaining its primary safety goal. The Commission provided guidance throughout the development and implementation of the revised reactor oversight program. This guidance included requirements for staff reporting to the Commission, approval of a pilot program, and instructions for future program development. 
In one modification to the inspection process, NRC stopped inspecting some elements affecting the plant operators’ work environments (e.g., how well lights in the plant illuminate the operating panel). NRC determined that these factors did not critically contribute to safety and created unnecessary regulatory burdens on industry. Regional officials told us that NRC could now focus on the significant work activities that maintain safety. The reactor oversight program’s procedure for assessing nuclear plants was also changed to increase public confidence in NRC operations by increasing the predictability, consistency, objectivity, and transparency of the oversight process. Each quarter, NRC posts the performance of each nuclear plant on its Web site to provide more information to the public. Regional officials told us that the overall level of resources required to implement the revised reactor oversight program is similar to that of the prior oversight program but that significant changes have occurred in how they manage their inspection program. Specifically, the new inspection procedure includes baseline inspections of all plants but focuses more of the agency’s resources on plants that demonstrate performance problems. Whether the revised reactor oversight program will reduce costs is unknown, but regional officials said that fewer resources may be needed in the future under this approach. NRC established a focus group to identify where or how possible resource savings could occur. As part of its risk-informed regulation and oversight strategy, NRC developed the Risk-Informed Regulation Implementation Plan (RIRIP), which is updated periodically. The first RIRIP, issued in October 2000, examined a range of staff activities including rule making to achieve NRC performance goals. The Commission provided guidance throughout the development and implementation of the new plan, including instructions for future program development as NRC updates the plan.
To facilitate its use, the plan is organized around the strategic arenas. Organizing the plan around arenas helps offices to establish priorities and identify resources as part of PBPM. For example, the plan describes activities designed to improve fire protection for nuclear power plants. In this area, NRC plans to develop less prescriptive, more performance-based risk-informed regulations to support its primary goal of safety. NRC is working with industry to study alternatives to existing fire protection standards and emergency postfire shutdown procedures. A senior NRC official gave additional examples of changes NRC has made to its regulations to reduce unnecessary regulatory burden on licensees without compromising safety. He cited the decision to have NRC oversee, but no longer perform, examinations to qualify power plant operators since the industry conducts its own examinations. In addition, this official said NRC eliminated its regulation requiring all nuclear power plants to install state-of-the-art equipment (for example, plants could continue to use analog rather than digital equipment), focusing instead on whether use of the current equipment adversely affected safety. NRC also changed its licensing regulations to support its performance goals of reducing unnecessary regulatory burden on licensees and becoming more effective and efficient. One official said NRC changed its regulation governing the length of a power plant license from 40 years to 60 years in some circumstances. Before this change, NRC would only license a power plant for 40 years. At the end of the 40-year license period, the licensee would be required to shut down and decommission the plant. The change in regulation means that NRC will extend the term of a license from 40 to 60 years if it determines through licensing review that existing plant design will support a longer term.
According to NRC officials, these license extensions can eliminate extremely large costs to licensees while reducing NRC costs because it is less costly to renew a plant operating license than to review a request for a license for a new power plant. The Commission directed the reorganization of NRC’s three major program offices so that they could become more effective and efficient. For example, in NRR the reorganization established reporting lines consistent with major NRR program functions—inspection, performance assessment, license renewal, and licensing. An NRR official said the previous organizational structure in NRC had contributed to inconsistent processes for inspecting power plants and duplication of work. To address the overall safety goal, NRC developed a program to measure trends in industry nuclear power reactor performance. One part of the safety goal is that there should be no statistically significant adverse industry trends in safety performance. Performance indicators are included in the NRC performance plan and are reported to the Congress through the NRC annual performance report. Resources for this new program are determined through PBPM. NRC uses performance information to inform resource allocation decisions during budget execution by monitoring current year work performance and by adjusting resource allocations as necessary. This practice is consistent with our proposed framework for budget practices. As noted previously, office operating plans track performance against established targets for each planned work activity to call attention to significant performance issues needing corrective action. For example, shortly after September 11, 2001, NRC conducted a comprehensive review of its security program. As part of this review, NRC examined lists of prioritized work activities prepared during the effectiveness review process in Component 2.
These lists helped NRC determine which activities to delete or modify as it prepared to use existing resources to respond to security threats in the post-September 11 environment. For example, NRC staffed around-the-clock emergency response centers for significantly longer than originally anticipated. As part of this comprehensive review of its security program, NRC began research on the structural integrity of power plants if they were attacked by large aircraft. NRC also delayed routine inspections at non-power reactors for 3 months to help fund these new activities. In addition, in April 2002, NRC established NSIR to streamline selected NRC security, safeguards, and incident response responsibilities and related resources. Operating plans are also used to monitor performance and make necessary adjustments. For example, NRR discovered that the May 2000 operating plan report showed plant license renewal applications and associated staff years well below annual expected target levels that year. NRR was thus able to shift resources to other priorities. An NRR official said this example showed NRR the importance of monthly monitoring of the budget assumptions made prior to the beginning of the fiscal year. In another example, NRR management officials reviewed the fiscal year 2002 first quarter operating plan report and found that the workload impact from the September 11 attacks would prevent NRR from achieving annual licensing action targets. These officials redirected additional staff resources to complete these licensing actions. As a result, the third quarter projection is that NRR will slightly exceed its annual target for these actions. PBPM is designed to enhance cooperation and coordination among offices. This practice matches our proposed framework for budget practices, which states that agency managers should share information on policy and programs among offices during budget decision making.
Sharing information during budgeting is important because many offices share responsibilities for achieving NRC goals. NRC office managers said they coordinate their work with others to determine if necessary skills are already available elsewhere in the agency. For example, one official said he relies on another unit’s expertise in conducting environmental studies. In another example, regional officials reported that they occasionally share specialized staff with other regions to perform nonroutine inspections. PBPM provides NRC with reference points such as common goals, performance measures, and strategies that help offices communicate and reach agreement on budget priorities. For example, NRR, which depends upon research studies conducted by RES, meets regularly with that office to discuss program and budget priorities for risk analysis, structural integrity, and new reactor designs. NRR also meets with other offices as it develops its budget proposal to coordinate its resource requests for mutually agreed-upon priorities. For instance, NRR shares information with NMSS to ensure that crosscutting activities, such as rule making, have adequate resources. In addition, the NRC crosswalk of all program activities into strategic arenas allows NRC to clarify the relationship between budget requests and agency goals. Our report on federal agency efforts in linking performance plans with budgets found that NRC’s budget presentation linked its program activities to performance goals, which showed funding needed to achieve goals. NRC uses the arena reporting structure to communicate its budget needs to audiences outside the agency, including OMB and the Congress. When it introduced PBPM, NRC recognized that continued development of the process would be necessary. After gaining experience for several years, NRC is now in the process of addressing several challenges to PBPM implementation. 
Agency officials noted challenges in (1) creating performance measures that balance competing goals and keep performance measures current, (2) associating resource requests with outcomes, (3) standardizing PBPM practices and techniques but still allowing individual offices to tailor the process to their needs, (4) developing the assessment component, and (5) committing significant effort to maintaining PBPM. In addition, NRC must continue developing a cost accounting system to support PBPM. As NRC officials create new performance measures or redesign existing measures, they find it a challenge to refine performance measures so that they balance performance goals. While safety is a paramount goal, NRC also seeks to progress in reducing unnecessary regulatory burden on the industry and improving public confidence in NRC’s operations. One official said it is a balancing act to minimize the time and steps it takes to license a facility while at the same time being sure that the agency is licensing a safe operation. Several NRC officials also said current performance measures track office efficiency well but capture the quality of license review poorly. NRC officials said they are beginning to develop performance measures that better capture quality. For example, NRR is now using a template to assess the quality of its evaluation of safety issues during review of licensing actions. Officials believe that when measures of quality are in place, they can be used to determine whether adjusting budget resources will have an effect on the quality of their activities. New strategies, such as risk-based regulation and oversight programs, can dictate changes in performance measures. NRC must also keep its performance measures relevant as the industry changes. Several examples illustrate these points. 
NRC plans to develop new performance measures for reviewing applications to upgrade power output from existing plants because of concern that existing measures did not accurately measure NRC performance in this area. In another example, NRC is studying new performance measures to determine if it can predict, and thus avoid, emergent problems in the Reactor Oversight Program. NRC and industry representatives jointly developed a new set of performance indicators to measure availability of nuclear plant safety systems. NRC believes the new performance indicators will provide more accurate risk assessments. NRC officials said that linking outcomes to resources is challenging for several reasons. First, the budget process focuses on performance targets and budget decisions for the short term while achieving some outcomes may take many years. Therefore, it is difficult to know the incremental effect of adjusting resources annually for longer-term outcomes. For example, one official noted that research leading to safer reactor design takes many years to bear fruit. Agency officials said linking outcomes to resources is also difficult because achieving many agency goals depends on the actions of others not directly under NRC’s control. NRC’s strategic plan states that achieving its strategic goals requires the collective efforts of NRC, licensees, and the agreement states. Yet, as one NRC official noted, neither NRC nor stakeholder representatives could identify how much each contributes to achieving NRC strategic goals. Nonetheless, this official said that both NRC and stakeholders strongly believe in establishing quantifiable outcome measures so that all stakeholders understand NRC’s goals. While the particular links and interdependencies are specific to NRC, many of these challenges permeate federal agencies. Many federal programs depend on other actors. 
For many federal activities ultimate outcomes are years away, but ways must be found to evaluate progress and make resource decisions annually. A continuing challenge during PBPM implementation is to determine which process techniques and information should be standardized across offices. For example, NRC officials said the major program offices use different procedures and methodologies to rank the contribution of their work activities to achieving NRC performance goals. Nonstandard weighting of priorities has made cross-office comparisons of activities and related resource allocation decisions more challenging for NRC officials. NRC officials said they established a task force to develop a common methodology to prioritize the contributions of the major program offices to NRC goals. They said their goal is to have aspects of a common ranking process among the major program offices for the fiscal year 2005 budget. In addition, NRC is in the process of further defining the roles and responsibilities of participants in PBPM through a management directive. In a related example, an NRC official said the agency faces a challenge to improve comparison of performance measures across both major program and mission support offices. Major NRC program offices are required to include agency strategic goals and performance goal measures in their annual operating plans. These measures are reported in the annual performance report by strategic arena. However, mission support offices are not required to report on these strategic performance goals. In addition, each office has been permitted to develop additional, office-specific, detailed performance measures to provide supplemental management information. NRC officials describe NRC’s current assessment process as the weakest component of PBPM. These officials said existing guidance does not adequately describe what an assessment is or how to select programs for evaluation.
Since there is not a clear definition of what qualifies as an assessment within Component 4, NRC performance reports vary and may not capture the full range of assessments that occurred or are planned at NRC. Because information contained in assessments is intended to inform the other PBPM components, NRC officials see the performance assessment component as a critical element of its process. For example, performance assessments can capture key information on how the agency is performing that can be used for setting the agency’s strategic direction. This practice, consistent with our framework for budget practices, can help NRC to seek continual improvement by evaluating current program performance and identifying alternative approaches to better achieve agency goals. NRC is taking steps to improve its assessment process by developing a new procedure for selecting programs and activities for evaluation. In July 2002, NRC established annual performance reviews for the four major strategic arenas and an annual assessment plan that identifies subjects for evaluation during the upcoming fiscal year. Programs will be selected for evaluation where a strong potential exists for performance improvement, cost reduction, or both. Results of the program evaluations will inform the next strategic direction phase of PBPM and may also result in changes during the performance monitoring process. Agency officials describe the introduction of PBPM as a culture shift requiring a commitment of time and effort by NRC employees. NRC officials said the agency sought to facilitate this cultural change by holding staff meetings at all levels and by using task force working groups to introduce PBPM. The introduction and evolution of PBPM also presents a continuing workload challenge to NRC. For example, one official said the detailed work associated with PBPM had been added to reporting requirements already in place. 
Nevertheless, key officials reported that implementing PBPM has been worth the time and effort because it provides a framework for more informed and focused resource allocation decisions. According to one official, PBPM has resulted in agency officials asking the key questions about why and how they conduct an activity. NRC faces the challenge of developing a cost accounting system that can support budget decision making. Developing a cost accounting system is important to budget decision making because it can help managers track direct, indirect, and unit costs of activities and compare the cost of activities to appropriate benchmarks. The October 2001 NRC Managerial Cost Accounting Remediation Plan noted that the prior accounting system supported general financial reporting but did not include a managerial cost accounting system. An example in the remediation plan states that labor hour tracking systems were not integrated with payroll systems. NRC officials said the agency has since developed a cost accounting system to help in resource allocation decisions. They said the new system will integrate payroll and nonpayroll costs at a level that will enable NRC to compare total direct costs of work activities with appropriate benchmarks. However, officials told us that they only started using the cost accounting system in the first two quarters of fiscal year 2002 and plan to refine the information collected based on what is the most useful and relevant. Agency officials estimate that fully implementing the system will take 4 to 5 years. We requested comments on a draft of this report from NRC. NRC expressed appreciation for our recognition of its efforts and progress and the fact that we note consistencies with our framework for budget practices. NRC expressed some concern about our report underrecognizing how far beyond the conceptual stage PBPM is, about our statement that a good cost accounting system was necessary, and about our reference to operating plans.
We modified our language to clarify our views on the implementation of PBPM. The agency’s letter and our response are contained in appendix I. NRC officials also provided clarifying comments, which we have incorporated in the report as appropriate. We are sending copies of this report to the Chairman of the Nuclear Regulatory Commission and will make copies available to other interested parties upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-9573 or Denise Fantone, Assistant Director, on (202) 512-4997 if you or your staff has any questions about this report. Major contributors to this report are Robert Hadley, James Whitcomb, and Robert Yetvin. The following are GAO’s comments on the Nuclear Regulatory Commission’s (NRC) letter dated November 22, 2002. 1. Our point is not that the Planning, Budgeting, and Performance Management Process is still at a conceptual stage but rather that implementation is in various stages throughout NRC, and that refinement of agencywide implementation is still necessary. This is consistent with what we were told and saw at NRC. We modified wording to clarify this point. (See pp. 4 and 9.) 2. We consistently have said that good cost accounting is critical to linking resources to results/outcomes. For example, in our recent testimony on performance budgeting we said that the integration of reliable cost accounting data into budget debates needs to become a key part of the performance budgeting agenda. 3. NRC uses operating plans to set milestones, track progress, and make adjustments to improve program outcomes. This is—and was so described in our interviews at NRC— an important part of PBPM. 4. The footnote was modified to clarify that this report neither observed nor evaluated reported safety problems in the Davis-Besse power plant. (See p. 9.) 
Encouraging a clearer and closer link between budgeting and planning is essential to improving federal management and instilling a greater focus on results. Through work at various levels within the organization, this report on the Nuclear Regulatory Commission (NRC)--and its two companion studies on the Administration for Children and Families (GAO-03-09) and the Veterans Health Administration (GAO-03-10)--documents (1) what managers considered successful efforts at creating linkages between planning and performance information to influence resource choices and (2) the challenges managers face in creating these linkages. NRC designed the Planning, Budgeting, and Performance Management Process (PBPM) to better integrate its strategic planning, budgeting, and performance management processes, although the process is in differing stages of implementation throughout the agency. PBPM links four individual components: (1) setting the agency's strategic direction, (2) determining activities and performance targets of component offices and related resources, (3) executing the budget and monitoring performance targets and taking corrective actions, if needed, to achieve those targets, and (4) assessing agency progress toward achieving its goals. GAO's report provides examples of how the PBPM framework can influence budget formulation and execution decisions. These examples show (1) how NRC informs its resource allocation decisions by providing strategic direction to operating units prior to budget formulation, (2) how operating units that have implemented these processes link strategic direction to budgets through tools that set priorities and assign resources to office activities to accomplish these priorities, and (3) how operating units monitor performance targets and make adjustments as necessary during budget execution. 
In addition, agency managers have told GAO that PBPM also promotes agencywide coordination of budget formulation and execution decisions by providing a common language and common goals. Integrating budget and planning processes and improving performance management in NRC is an ongoing effort that includes addressing a series of challenges. They are (1) creating performance measures that balance competing goals and keep performance measures current, (2) associating resource requests with outcomes, (3) standardizing PBPM practices and techniques but still allowing some flexibility among offices to tailor the process to their needs, (4) developing the assessment component, and (5) committing significant effort to maintain PBPM. In addition, NRC must continue developing a cost accounting system to support PBPM.
The quality data submitted by hospitals are collected from the medical records of patients admitted to the hospital. Hospital patient medical records contain many different types of information, which are organized into different sections. Frequently found examples of these sections include the face sheet, which summarizes basic demographic and billing data, including diagnostic codes; history and physicals (H&P), which record the patient’s medical history and physical examination findings; physician orders, which show what medications, tests, and procedures were ordered by a physician; medication administration records (MAR), which show that a specific medication was given to a patient, when it was given, and the dosage; laboratory reports, radiology reports, and test results, such as an echocardiogram reading; progress notes, in which physicians, nurses, and other clinicians record information chronologically on patient status and response to treatments during the patient’s hospital stay; operative reports for surgery patients; physician and nursing notes for patients treated in the emergency department; and discharge summaries, in which a physician summarizes the patient’s hospital stay and records prescriptions and instructions to be given to the patient at discharge. Hospitals have discretion to determine the structure of their patient medical records, as well as to set general policies stating what, where, and how specific information should be recorded by clinicians. To guide the hospital staff in the abstraction process—that is, in finding and properly assessing the information in the patient’s medical record needed to fill in the values for the data elements—CMS and the Joint Commission have jointly issued a Specifications Manual. It contains detailed specifications that define the data elements for which the hospital staff need to collect information and determine values and the correct interpretation of those data elements. 
The Joint Commission also requires hospitals to submit the same data that they submit to CMS for the APU program (and some additional data) to receive Joint Commission accreditation. In many hospitals, information in a patient’s medical record is recorded and stored in a combination of paper and electronic systems. Patient medical records that clinicians record on paper may be stored in a folder in the hospital’s medical record department and contain all the different forms, reports, and notes prepared by different individuals or by different departments during the patient’s stay. Depending on the length of the patient’s hospital stay and the complexity of the care, an individual patient medical record can amount to hundreds of pages. For information stored electronically, clinicians may enter information directly into the electronic record themselves, as they do for paper records, or they may dictate their notes to be transcribed and added to the electronic record later. Information may also be recorded on paper and then scanned into the patient’s electronic record. For example, if a patient is transferred from another hospital, the paper documents from the transferring hospital may be scanned into the patient’s electronic record. The patient medical information that hospitals store electronically, rather than on paper, typically resides in multiple health IT systems. One set of IT systems usually handles administrative tasks such as patient registration and billing. Hospitals acquire other IT systems to record laboratory test results, to store digital radiological images, to process physician orders for medications, and to record notes written by physicians and nurses. Hospitals frequently build their health IT capabilities incrementally by adding new health IT systems over time. If the systems that hospitals purchase come from different companies, they are likely to be based on varying standards for how the information is stored and exchanged electronically. 
As a result, even in a single hospital, it can be difficult to access from one IT system clinical data stored in a different health IT system. One of the main objectives of ONC is to overcome the problem of multiple health IT systems, within and across health care providers, that store and exchange information according to varying standards. The mission of ONC is to promote the development and nationwide implementation of interoperable health IT in both the public and the private sectors in order to reduce medical errors, improve quality of care, and enhance the efficiency of health care. Health IT is interoperable when systems are able to exchange data accurately, effectively, securely, and consistently with different IT systems, software applications, and networks in such a way that the clinical or operational purposes and meaning of the data are preserved and unaltered. The case study hospitals we visited used six steps to collect and submit quality data, two of which involved complex abstraction—the process of reviewing and assessing all relevant pieces of information in a patient’s medical record to determine the appropriate value for each data element. Factors accounting for the complexity of the abstraction process included the content and organization of the medical record, the scope of information required for the data elements, and frequent changes by CMS in its data specifications. Due in part to these complexities, most of our case study hospitals relied on clinical staff to abstract the quality data. Increases in the number of required quality measures led to increased demands on clinical staff resources. However, all case study hospitals reported finding benefits in the quality data that helped to offset the demands placed on clinical staff. 
We found that whether patient information was recorded electronically, on paper, or as a mix of both, all the case study hospitals collected and submitted their quality data by carrying out six sequential steps (see fig. 1). These steps started with identifying the patients for whom the hospitals needed to provide quality data to CMS and continued through the process of examining each patient’s medical record, one after the other, to find the information needed to determine the appropriate values for each of the required data elements for that patient. Then, for each patient, those values were entered by computer into an electronic form or template listing each of the data elements for that condition. These forms were provided by the data vendor with which the hospital had contracted to transmit its quality data to CMS. The vendors also assisted the hospitals in checking that the data were successfully received by CMS. Finally, the hospitals sent copies of the medical records of a selected sample of patients to a CMS contractor that used those records to validate the accuracy of the quality data submitted by the hospital. Specifically, the six steps, which are summarized for each case study hospital in appendix III, table 2, were as follows: Step 1: Identify patients—The first step was to identify the patients for whom the hospitals needed to submit quality data to CMS. Staff at three case study hospitals identified these patients using information on the patient’s principal diagnosis, or principal procedure in the case of surgery patients, obtained from the hospital’s billing data. Five case study hospitals had their data vendor use the hospital’s billing data to identify the eligible patients for them. Every month, all eight hospitals that we visited identified patients discharged in the prior month for whom quality data should be collected. 
The hospitals identified all patients retrospectively for quality data collection because hospitals have to wait until a patient is discharged to determine the principal diagnosis. CMS permits hospitals to reduce their data collection effort by providing quality data for a representative sample of patients when the total number of patients treated for a particular condition exceeds a certain threshold. Five case study hospitals drew samples for at least one condition. The data vendor performed this task for four of those case study hospitals, and assisted the hospital in performing this task for the fifth hospital. Only one of the case study hospitals reported using nonbilling data sources to check the accuracy of the lists of patients selected for quality data collection that the hospitals drew from their billing data (see app. III, table 3). Several stated that they occasionally noted discrepancies, such as patients selected for heart attack measures who, upon review of their medical record, should not have had that as their principal diagnosis. However, the hospital officials we interviewed told us that discrepancies of this sort were likely to be minor. Officials at three hospitals noted that hospitals generally have periodic routine audits conducted of the coding practices of their medical records departments, which would include the accuracy of the principal diagnoses and procedures. Step 2: Locate information in the medical record—Steps 2 and 3 were in practice closely linked in our case study hospitals. Abstractors at the eight case study hospitals examined each selected patient’s medical record, looking for all of the discrete pieces of information that, taken together, would determine what they would decide—in step 3—was the correct value for each of the data elements. For some data elements, there was a one-to-one correspondence between the piece of information in the medical record and the value to be entered. 
Typical examples included a patient’s date of birth and the name of a medication administered to the patient. For other data elements, the abstractors had to check for the presence or absence of multiple pieces of information in different parts of the medical record to determine the correct value for that data element. For example, to determine if the patient did, or did not, have a contraindication for aspirin, abstractors looked in different parts of the medical record for potential contraindications, such as the presence of internal bleeding, allergies, or prescriptions for certain other medications such as Coumadin. In order for abstractors to find information in the patient’s medical record, it had to be recorded properly by the clinicians providing the patient’s care. Officials at all eight case study hospitals described efforts designed to educate physicians and nurses about the specific data elements for which they needed to provide information in each patient’s medical record. The hospital officials were particularly concerned that the clinicians not undermine the hospital’s performance on the quality measures by inadequately documenting what they had done and the reasons why. For example, one heart failure measure tracks whether a patient received each of six specific instructions at the time of discharge, but unless information was explicitly recorded in a heart failure patient’s medical record for each of the six data elements, that patient was counted by CMS as one who had not received all pertinent discharge instructions and therefore did not meet that quality measure. This particular measure was cited by officials at several hospitals as one that required a higher level of documentation than had previously been the norm at their hospital. 
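The all-or-nothing logic of the discharge-instructions measure described above can be illustrated with a minimal sketch. The instruction topics and function name below are paraphrased for illustration only; they are not CMS's exact data element names or decision rules.

```python
# Illustrative sketch (not CMS's actual specification): unless every
# required discharge-instruction topic is explicitly documented in the
# heart failure patient's medical record, the patient counts as not
# having received all pertinent discharge instructions and the quality
# measure is not met. Topic names are paraphrased for illustration.

REQUIRED_TOPICS = (
    "activity level",
    "diet",
    "discharge medications",
    "follow-up appointment",
    "weight monitoring",
    "worsening symptoms",
)

def meets_discharge_instruction_measure(documented_topics):
    """Return True only if every required topic is documented."""
    documented = {topic.lower() for topic in documented_topics}
    return all(topic in documented for topic in REQUIRED_TOPICS)

# Documenting five of the six topics is not enough: the measure fails.
print(meets_discharge_instruction_measure(REQUIRED_TOPICS))       # True
print(meets_discharge_instruction_measure(REQUIRED_TOPICS[:-1]))  # False
```

The sketch makes the documentation burden concrete: a clinician may have given all six instructions, but a single missing entry in the record changes the measure result.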
Step 3: Determine appropriate data element values—Once abstractors had located all the relevant pieces of information pertaining to a given data element, they had to put those pieces together to arrive at the appropriate value for the data element. The relevance of that information was defined by the detailed instructions provided by the hospitals’ vendors, as well as the Specifications Manual jointly issued by CMS and the Joint Commission that serves as the basis for the vendor instructions. The Specifications Manual sets out the decision rules for choosing among the allowable values for each data element. It also identifies which parts of the patient’s medical record may or may not provide the required information, and often lists specific terms or descriptions that, if recorded in the patient’s medical record, would indicate the appropriate value for a given data element. In addition, the Specifications Manual provides abstractors with guidance on how to interpret conflicting information in the medical record, such as a note from one clinician that the patient is not a smoker and a note elsewhere in the record from another clinician that the patient does smoke. To help keep track of multiple pieces of information, many abstractors reported that they first filled in the data element values on a paper copy of the abstraction form provided by the data vendor. In this way, they could write notes in the margin to document how they came to their conclusions. Step 4: Transmit data to CMS—In order for the quality data to be accepted by the clinical data warehouse, they must pass a battery of edit checks that look for missing, invalid, or improperly formatted data element entries. All the case study hospitals contracted with data vendors to submit their quality data to CMS. 
They did so, in part, because all of the hospitals submitted the same data to the Joint Commission, and it requires hospitals to submit their quality data through data vendors that meet the Joint Commission’s requirements. The additional cost to the hospitals to have the data vendors also submit their quality data to CMS was generally minimal (see app. III, table 3). All of the case study hospitals submitted their data to the data vendor by filling in values for the required data elements on an electronic version of the vendor’s abstraction form. Many abstractors did this for a batch of patient records at a time, working from paper copies of the form that they had filled in previously. Some abstractors entered the data online at the same time that they reviewed the patient’s medical records. In other cases, someone other than the abstractor who filled in the paper form used the completed form to enter the data on a computer. Step 5: Ensure data have been accepted by CMS—The case study hospitals varied in the extent to which they actively monitored the acceptance of their quality data into CMS’s clinical data warehouse. After the data vendors submitted the quality data electronically, they and the hospitals could download reports from the clinical data warehouse indicating whether the submitted data had passed the screening edits for proper formatting and valid entries. The hospitals could use these reports to detect data entry errors and make corrections prior to CMS’s data submission deadline. Three case study hospitals shared this task with their data vendors, three hospitals left it for their data vendors to handle, and two hospitals received and responded to reports on data edit checks produced by their data vendors, rather than reviewing the CMS reports. 
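The screening edits described above—checks for missing, invalid, or improperly formatted data element entries—can be sketched as follows. The element names, allowable values, and formats in this sketch are hypothetical, not CMS's actual data specifications.

```python
# A minimal sketch of the kind of edit checks the clinical data
# warehouse applies before accepting submitted quality data. All
# element names and formats here are hypothetical examples.
import re

ELEMENT_RULES = {
    "birth_date":         re.compile(r"^\d{4}-\d{2}-\d{2}$"),        # YYYY-MM-DD
    "smoking_history":    re.compile(r"^(yes|no)$"),
    "aspirin_on_arrival": re.compile(r"^(yes|no|contraindicated)$"),
}

def edit_check(record):
    """Return a list of error strings; an empty list means the record passes."""
    errors = []
    for element, pattern in ELEMENT_RULES.items():
        value = record.get(element)
        if value is None or value == "":
            errors.append(f"missing: {element}")
        elif not pattern.match(value):
            errors.append(f"invalid or improperly formatted: {element}={value!r}")
    return errors

print(edit_check({"birth_date": "1941-07-04",
                  "smoking_history": "no",
                  "aspirin_on_arrival": "yes"}))   # [] (record passes)
print(edit_check({"birth_date": "07/04/1941",
                  "smoking_history": ""}))          # flags all three elements
```

Reports of this kind of failure are what the hospitals and their data vendors reviewed in step 5 so that data entry errors could be corrected before CMS's submission deadline.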
Approximately 2 months after hospitals submitted their quality data, CMS released reports to the hospitals showing their performance scores on the quality measures before posting the results on its public Web site. Step 6: Supply copies of selected medical records—CMS has put in place a data validation process to ensure the accuracy of hospital quality data submissions. It requires hospitals to supply a CMS contractor with paper copies of the complete medical record for five patients selected by CMS each quarter. Officials at five hospitals noted that they check to make sure that all parts of the medical records that they used to abstract the data originally are included in the package shipped to the CMS contractor. Most of the case study hospitals relied on CMS’s data validation to ensure the accuracy of their abstractions. However, two hospitals reported that they also routinely draw their own sample of cases, which are abstracted a second time by a different abstractor in the hospital, followed by a comparison of the two sets of results (see app. III, table 3). The description by hospital officials of the processes they used to collect and submit quality data indicated that locating the relevant clinical information and determining appropriate values for the data elements (steps 2 and 3) were the most complex steps of the six identified, due to several factors. These included the content and organization of the medical record, the scope of the information encompassed by the data elements, and frequent changes in data specifications. The first complicating factor related to the medical record was that the information abstractors needed to determine the correct data element values for a given patient was generally located in many different sections of the patient’s medical record. 
These included documents completed for admission to the hospital, emergency department documents, laboratory and test results, operating room notes, medication administration records, nursing notes, and physician-generated documents such as history and physicals, progress notes and consults, orders for medications and tests, and discharge summaries. In addition, the abstractors may have had to look at documents that came from other providers if the patient was transferred to the hospital. Much of the clinical information needed was found in the sections of the medical record prepared by clinicians. Often the information in question, such as contraindications for aspirin or beta blockers, could be found in any of a number of places in the medical record where clinicians made entries. As a result, abstractors frequently had to read through multiple parts of the record to find the information needed to determine the correct value for just one data element. At two case study hospitals, abstractors said that they routinely read each patient’s entire medical record. Experienced abstractors often knew where they were most likely to find particular pieces of information. They nevertheless also had to check for potentially contradictory information in different parts of the medical record. For example, as noted, patients may have provided varying responses about their smoking history to different clinicians. If any of these responses indicated that the patient had smoked cigarettes in the last 12 months, the patient was considered to be a smoker according to CMS’s data specifications. Another example concerns the possibility that a heart attack or heart failure patient may have had multiple echocardiogram results recorded in different parts of the medical record. 
Abstractors needed to find all such results in order to apply the rules stated in the Specifications Manual for identifying which result to use in deciding whether the patient had left ventricular systolic dysfunction (LVSD). This data element is used for the quality measure assessing whether an angiotensin-converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB) was prescribed for LVSD at discharge. The second factor was related to the scope of the information required for certain data elements. Some of the data elements that the abstractors had to fill in represented a composite of related data and clinical judgment applied by the abstractor, not just a single discrete piece of information. Such composite data elements typically were governed by complicated rules for determining the clinical appropriateness of a specific treatment for a given patient. For example, the data element for contraindications for both ACEIs and ARBs at discharge requires abstractors to check for the presence and assess the severity of any of a range of clinical conditions that would make the use of either ACEIs or ARBs inappropriate for that patient. (See fig. 2.) These conditions may appear at any time during the patient’s hospital stay and so could appear at any of several places in the medical record. Abstractors must also look for evidence in the record from a physician linking a decision not to prescribe these drugs to one or more of those conditions. The third factor was the need for abstractors at the case study hospitals to adjust to frequent changes in the data specifications set by CMS. Since CMS first released its detailed data specifications jointly with the Joint Commission in September 2004, it has issued seven new versions of the Specifications Manual.
Therefore, from fall 2004 through summer 2006, roughly every 3 months hospital abstractors had to stop, take note of what had changed in the data specifications, and revamp their quality data collection procedures accordingly. Some of these changes reflected modifications in the quality measures themselves, such as the addition of ARBs for treatment of LVSD. Other changes revised or expanded the guidance provided to abstractors, often in response to questions submitted by hospitals to CMS. CMS recently changed its schedule for issuing revisions to its data specifications from every 3 months to every 6 months, but that change had not yet affected the interval between new revisions issued to hospitals at the time of our case study site visits. Case study hospitals typically used registered nurses (RN), often exclusively, to abstract quality data for the CMS quality measures (see app. III, table 3). One hospital relied on a highly experienced licensed practical nurse, and two case study hospitals used a mix of RNs and nonclinical staff. Officials at one hospital noted that RNs were familiar with both the nomenclature and the structure of the hospital’s medical records and that they could more readily interact with the physicians and nurses providing the care about documentation issues. Even when using RNs, all but three of the case study hospitals had each abstractor focus on one or two medical conditions in which they had expertise. Four hospitals had tried using nonclinical staff, most often trained as medical record coders, to abstract the quality data. Officials at one of these hospitals reported that this approach posed challenges. They said that it was difficult for nonclinical staff to learn all that they needed to know to abstract quality data effectively, especially with the constant changes being made to the data specifications.
At the second hospital, officials reported that using nonclinical staff for abstraction did not work at all and they switched to using clinically trained staff. At the third hospital, the chief clinician leading the quality team stated that the hospital’s nonclinical abstractors worked well enough when clinically trained colleagues were available to answer their questions. Officials at the fourth hospital cited no concerns about using staff who were not RNs to abstract quality data, but they subsequently hired an RN to abstract patient records for two of the four conditions. Case study hospitals drew on a mix of existing and new staff resources to handle the collection and submission of quality data to CMS. In two hospitals, new staff had been hired specifically to collect quality data for the Joint Commission and CMS. In other hospitals, quality data collection was assigned to staff already employed in the hospital’s quality management department or performing other functions. All the case study hospitals found that, over time, they had to increase the amount of staff resources devoted to abstracting quality data for the CMS quality measures, most notably as the number of measures on which they were submitting data expanded. Officials at the case study hospitals generally reported that the amount of staff time required for abstraction increased proportionately with the number of conditions for which they reported quality data. The hospitals had all begun to report most recently on the surgical quality measures. They found that the staff hours needed for this new set of quality measures were directly related to the number of patient records to be abstracted and the number of data elements collected. In other words, they found no “economies of scale” as they expanded the scope of quality data abstraction. At the time of our site visits, four hospitals continued to draw on existing staff resources, while others had hired additional staff. 
Hospital officials estimated that the amount of staff resources devoted to abstracting data for the CMS quality measures ranged from 0.7 to 2.5 full-time equivalents (FTE) (app. III, table 3). Hospital officials reported that the demands that quality data collection and submission placed on their clinical staff resources were offset by the benefits that they derived from the resulting information on their clinical performance. Each hospital had a process for tracking changes in its performance over time. Based on those results, the hospitals provided feedback to individual clinicians and reports to hospital administrators and trustees. Because they perceived feedback to clinicians to be much more effective when provided as soon as possible, several of the case study hospitals found ways to calculate their performance on the quality measures themselves, often on a monthly basis, rather than wait for CMS to report their results for the quarter. Officials at all eight case study hospitals pointed to specific changes they had made in their internal procedures designed to improve their performance on one or more quality measures. Most of the case study hospitals developed “standing order sets” for particular diagnoses. Such order sets provide a mechanism for standardizing both the care provided and the documentation of that care, in such areas as prescribing beta blockers and aspirin on arrival and at discharge for heart attack patients. Another common example involved prompting physicians to administer pneumococcal vaccinations to pneumonia patients. However, at most of the case study hospitals, use of many standing order sets was optional for physicians, and hospital officials reported widely varying rates of physician use, from close to 100 percent of physicians at one hospital using its order set for heart attack patients to just a few physicians using any order sets in another hospital.
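The monthly performance calculations that several hospitals performed themselves can be sketched in a simple form. Quality measures of this kind are commonly reported as a rate—patients who received the recommended care divided by eligible patients after exclusions such as documented contraindications—though the field names and structure below are illustrative assumptions, not CMS's actual computation.

```python
# A hedged sketch of a monthly quality-measure rate calculation.
# Assumes each abstracted patient record carries two illustrative
# flags: "excluded" (e.g., a documented contraindication) and
# "met_measure" (the recommended care was given and documented).

def measure_rate(patients):
    """Return (numerator, denominator, rate) for one quality measure."""
    eligible = [p for p in patients if not p.get("excluded", False)]
    met = [p for p in eligible if p.get("met_measure", False)]
    denominator = len(eligible)
    numerator = len(met)
    rate = numerator / denominator if denominator else None
    return numerator, denominator, rate

month = [
    {"met_measure": True},
    {"met_measure": True},
    {"met_measure": False},
    {"excluded": True},   # e.g., documented contraindication
]
# 2 of 3 eligible patients met the measure; the excluded patient
# is removed from the denominator.
print(measure_rate(month))
```

A calculation along these lines let hospitals give clinicians feedback on the prior month's discharges rather than waiting roughly a quarter for CMS's reports.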
Case study hospitals also responded to the information generated from their quality data by adjusting their treatment protocols, especially for patients treated in their emergency departments. For example, five hospitals developed or elaborated on procedural checklists for emergency department nurses treating pneumonia patients. The objective of these changes was to more quickly identify pneumonia patients when they arrived at the emergency department and then expeditiously perform required blood tests so that the patients would score positively for the quality measure on receiving antibiotics within 4 hours of arrival at the hospital. Three hospitals strengthened their procedures to identify smokers and make sure that they received appropriate counseling. Hospital officials noted that they provided quality of care data to entities other than CMS and the Joint Commission, such as state governments and private insurers, but for the most part they reported that the CMS quality measures had two advantages. First, the CMS quality measures enabled hospitals to benchmark their performance against the performances of virtually every other hospital in the country. Second, officials at two hospitals noted that the CMS measures were based on clinical information obtained from patient medical records and therefore had greater validity as measures of quality of care than measures based solely on administrative data. Many hospital officials said that they wished that state governments and other entities collecting quality data would accept the CMS quality measures instead of requiring related quality data based on different definitions and patient populations. Hospital officials in two states reported some movement in that direction. 
In the case studies, existing IT systems helped hospital abstractors to complete their work more quickly, but the limitations of those IT systems meant that trained staff still had to examine the entire patient medical record and manually abstract the quality data submitted to CMS. IT systems helped abstractors obtain information from patients’ medical records, in particular by improving their accessibility and legibility, and by enabling hospitals to incorporate CMS’s required data elements into those medical records. The challenges reported by hospital officials included having a mix of paper and electronic records, which required abstractors to check multiple places to get the needed information; the prevalence of unstructured data, which made locating the information time-consuming because it was not in a prescribed place in the record; and the presence of multiple IT systems that did not share data, which required abstractors to separately access each IT system for related pieces of information that were in different parts of the medical record. While hospital officials expected the scope and functionality of their IT systems to increase over time, they projected that this would occur incrementally over a period of years. Hospitals found that their existing IT systems could facilitate the collection of quality data, but that there were limits on the advantages that the systems could provide. IT systems, and the electronic records they support, offered hospitals two key benefits: (1) improving accessibility to and legibility of the medical record, and (2) facilitating the incorporation of CMS’s required data elements into the medical record. Many hospital abstractors noted that existing electronic records helped quality data collection by improving accessibility and legibility of patient records. 
In general, paper records were less accessible than electronic records because it took time to find them or to have them transported if hospitals had stored them in a remote location after the patients were discharged. Also, paper records were more likely to be missing or in use by someone else. However, in one case study hospital, an abstractor noted difficulties in gaining access to a computer terminal to view electronic medical records. Many abstractors noted improvements in legibility as a fundamental benefit of electronic records. This advantage applied in particular to the many sections of the medical record that consisted of handwritten text, including history and physicals, progress notes, medication administration records, and discharge summaries. Some hospitals have used their existing IT systems to facilitate the abstraction of information by designing a number of discrete data fields that match CMS’s data elements. For example, two hospitals incorporated prompts for pneumococcal vaccination in their electronic medication ordering system. These prompts not only reminded physicians to order the vaccination (if the patient was not already vaccinated) but also helped to ensure documentation of the patient’s vaccination status. One hospital developed a special electronic discharge program for heart attack and heart failure patients that had data elements for the quality measures built into it. Another hospital built a prompt into its electronically generated discharge instructions to instruct patients to measure their weight daily. This enabled the hospital to document more consistently one of the specific instructions that heart failure patients are supposed to receive on discharge but that physicians and nurses tended to overlook in their documentation.
The limitations that hospital officials reported in using existing IT systems to collect quality data stemmed from having a mix of paper and electronic systems; the prevalence of data recorded in IT systems as unstructured paragraphs of narrative or text, as opposed to discrete data fields reserved for specific pieces of information; and the inability of some IT systems to access related data stored on another IT system in the same hospital. Because all but one of the case study hospitals stored clinical records in a mix of paper and electronic systems, abstractors generally had to consult both paper and electronic records to obtain all needed information. What was recorded on paper and what was recorded electronically varied from hospital to hospital (see app. III, table 4). However, admissions and billing data were electronic at all the case study hospitals. Billing data include principal diagnosis and birth date, which are among the CMS-required data elements. With regard to clinical data, all case study hospitals had test results, such as echocardiogram readings, in an electronic form. In contrast, nurse progress notes were least likely to be in electronic form at the case study hospitals. Moreover, it was not uncommon for a hospital to have the same type of clinical documentation stored partly in electronic form and partly on paper. For example, five of the eight case study hospitals had a mix of paper and electronic physician notes, reflecting the differing personal preferences of the physicians. Discharge summaries and medication administration records, on the other hand, tended to be either paper or electronic at a given hospital. Many of the data in existing IT systems were recorded in unstructured formats—that is, as paragraphs of narrative or other text, rather than in data fields designated to contain specific pieces of information—which created problems in locating the needed information. 
For example, physician notes and discharge summaries were often dictated and transcribed. Abstractors typically read through the entire electronic document to make sure that they had found all potentially relevant references, such as for possible contraindications for a beta blocker or an ACEI. By contrast, some of the data in existing IT systems were in structured data fields so that specific information could be found in a prescribed place in the record. One common example was a list of medication allergies, which abstractors used to quickly check for certain drug contraindications. However, officials at several hospitals said that developing and implementing structured data fields were labor intensive, both in terms of programming and in terms of educating clinical staff in their use. That is why many of the data stored in electronic records at the case study hospitals remained in unstructured formats. Another limitation with existing IT systems was the inability of some systems to access related data stored on another IT system in the same hospital. This situation affected six of the eight case study hospitals to some degree. For example, one hospital had an IT system in the emergency department and an IT system on the inpatient floors, but the two systems were independent and the information in one was not linked to the information in the other. Abstractors had to access each IT system separately to obtain related pieces of information, which made abstraction more complicated and time-consuming. Existing IT systems helped hospital abstractors to complete their work more quickly, but the limitations of those IT systems meant that, for the most part, the nature of their work remained the same. Existing IT systems enabled abstractors at several hospitals to more quickly locate the clinical information needed to determine the appropriate values for at least some of the data elements that the hospitals submitted to CMS. 
Where hospitals designed a discrete data field in their IT systems to match a specific CMS data element, abstractors could simply transcribe that value into the data vendor’s abstraction form. However, in all the case study hospitals there remained a large number of data elements for which there was no discrete data field in a patient’s electronic record that could provide the required value for that data element. As a result, trained staff still had to examine the medical record as a whole and manually abstract the quality data submitted to CMS, whether the information in the medical record was recorded electronically or on paper. All the case study hospitals were working to expand the scope and functionality of their IT systems, but this expansion was generally projected to occur incrementally over a period of years. Hospital officials noted that with wider use of IT systems, the advantages of these systems—including accessibility, legibility, and the use of discrete data fields—would apply to a larger proportion of the clinical records that abstractors have to search. As the case study hospitals continue to bring more of their clinical documentation into IT systems, and to link separate systems within their hospital so that data in one system can be accessed from another, it should reduce the time required to collect quality data. However, most officials at the case study hospitals viewed full-scale automation of quality data collection and submission through implementation of IT systems as, at best, a long-term prospect. They pointed to a number of challenges that hospitals would have to overcome before they could use IT systems to achieve full-scale automation of quality data collection and submission. Primary among these were overcoming physician reluctance to use IT systems to record clinical information and the intrinsic complexity of the quality data required by CMS. 
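The split described above—direct transcription where a discrete data field matches a CMS data element, manual abstraction everywhere else—can be sketched as follows. The mapping, field names, and data element names are hypothetical examples, not an actual hospital system or the CMS specifications.

```python
# A hedged sketch of splitting required data elements into those that
# can be filled automatically from discrete fields in a hospital IT
# system and those that still require manual abstraction by trained
# staff. All element and field names below are hypothetical.

STRUCTURED_FIELD_MAP = {
    # CMS data element -> discrete field in the hospital IT system
    "birth_date": "patient.dob",
    "pneumococcal_vaccination": "orders.pneumo_vaccine_status",
}

def fill_data_elements(required_elements, electronic_record):
    """Split elements into auto-filled values and those needing abstraction."""
    auto_filled, needs_abstraction = {}, []
    for element in required_elements:
        field = STRUCTURED_FIELD_MAP.get(element)
        if field is not None and field in electronic_record:
            auto_filled[element] = electronic_record[field]
        else:
            needs_abstraction.append(element)
    return auto_filled, needs_abstraction

record = {"patient.dob": "1936-02-11",
          "orders.pneumo_vaccine_status": "given"}
auto, manual = fill_data_elements(
    ["birth_date", "pneumococcal_vaccination", "aspirin_contraindication"],
    record,
)
print(auto)    # two elements transcribed directly from discrete fields
print(manual)  # ['aspirin_contraindication'] still needs an abstractor
```

As the case study hospitals reported, the second list remained large in practice: composite elements such as contraindications had no single discrete field, so the record as a whole still had to be reviewed manually.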
One hospital with unusually extensive IT systems had initiated a pilot project to see how close it could get to fully automating quality data collection for patients with heart failure. Drawing to the maximum extent on the data that were amenable to programming, which excluded unstructured physician notes, the hospital found that it could complete data collection for approximately 10 percent of cases without additional manual abstraction. Reflecting on this effort, the hospital official leading this project noted that at least some of the data elements required for heart failure patients represented “clinical judgment calls.” An official at another hospital observed that someone had to apply CMS’s complex decision rules to determine the appropriate value for the data elements. If a hospital wanted to eliminate the need for an abstractor, who currently makes those decisions retrospectively after weighing multiple pieces of information in the patient’s medical record, the same complex decisions would have to be made by the patient’s physician at the time of treatment. The official suggested that it was preferable not to ask physicians to take on that additional task when they should be focused on making appropriate treatment decisions. Another barrier to automated quality data collection mentioned by several hospital officials was the frequency of change in the data specifications. As noted above, hospitals had to invest considerable staff resources for programming and staff education to develop structured data fields for the clinical information required for the data elements. Officials at one hospital stated that it would be difficult to justify that investment without knowing how long the data specifications underlying that structured data field would remain valid. 
CMS has sponsored studies and joined HHS initiatives to examine and promote the current and potential use of hospital IT systems to facilitate the collection and submission of quality data, but HHS lacks detailed plans, including milestones and a time frame against which to track its progress. CMS sponsored two studies that examined the use of hospital IT systems for quality data collection and submission. Promoting the use of health IT for quality data collection is also 1 of 14 objectives that HHS has identified in its broader effort to encourage the development and nationwide implementation of interoperable IT in health care. CMS has joined this broader effort by HHS, as well as the Quality Workgroup that AHIC created in August 2006 to specify how IT could capture, aggregate, and report inpatient and outpatient quality data. Through its representation in AHIC and the Quality Workgroup, CMS has participated in decisions about the specific focus areas to be examined through contracts with nongovernmental entities. These contracts currently address the use of health IT for a range of purposes, which may also include quality data collection and submission in the near future. However, HHS has identified no detailed plans, milestones, or time frames for either its broad effort to encourage IT in health care nationwide or its specific objective to promote the use of health IT for quality data collection. Over the past several years, CMS sponsored two studies to examine the current and potential capacity of hospital IT systems to facilitate quality data collection and submission. These studies identified challenges to using existing hospital IT systems for quality data collection and submission, including gaps and inconsistencies in applicable data standards, as well as in the content of clinical information recorded in existing IT systems. 
Data standards create a uniform vocabulary for electronically recorded information by providing common definitions and coding conventions for a specified set of medical terms. Currently, an array of different standards apply to different aspects of patient care, including drug ordering, digital imaging, clinical laboratory results, and overall clinical terminology relating to anatomy, problems, and procedures. The studies also found that existing IT systems did not record much of the specific clinical information needed to determine the appropriate data element values that hospitals submit to CMS. To achieve CMS’s goal of enabling hospitals to transmit quality data directly from their own IT systems to CMS’s nationwide clinical database, the sets of data in the two systems should conform to a common set of data standards and capture all the data necessary for quality measures. A key element in the effort to create this congruence is the further development and implementation of data standards. In the first study, completed in March 2005, CMS contracted with the Colorado Foundation for Medical Care to test the potential for directly downloading values for data elements for CMS’s hospital quality measures using patient data from electronic medical records in three hospitals and one hospital system. The study found that numerous factors impeded this process under current conditions, including the lack of certain key types of information in the hospitals’ IT systems, such as emergency department data, prearrival data, transfer information, and information on medication contraindications. The study also noted that hospitals differed in how they coded their data, and that even when they had implemented data standards, the hospitals had used different versions of the standards or applied them in different ways. 
For example, the study found wide variation in the way that the hospitals recorded drug names and laboratory results in their IT systems, as none of the hospitals had implemented the existing data standards in those areas. In the second study, which was conducted by the Iowa Foundation for Medical Care and completed in February 2006, CMS examined the potential to expand its current data specifications for heart attack, heart failure, pneumonia, and surgical measures to incorporate the standards adopted by the federal Consolidated Healthcare Informatics (CHI) initiative. Unlike the first study, which focused on actual patient data in existing IT systems, this study focused on the relationship of current data standards to the data specifications for CMS’s quality data. It found that there were inconsistencies in the way that corresponding data elements were defined in the CMS/Joint Commission Specifications Manual and in the CHI standards that precluded applying those standards to all of CMS’s data elements. Moreover, it found that some of the data elements are not addressed in the CHI standards. These results suggested to CMS officials that the data standards needed to undergo further development before they could support greater use of health IT to facilitate quality data collection and submission. CMS has joined efforts by HHS to promote greater use of health IT in general and, more recently, in facilitating the use of health IT for quality data collection and submission. The overall goal of HHS’s efforts in this area, working through AHIC and ONC, is to encourage the development and nationwide implementation of interoperable health IT in both the public and the private sectors. To guide those efforts, ONC has developed a strategic framework that outlines its goals, objectives, and high-level strategies. One of the 14 objectives involves the collection of quality information. 
CMS, through its participation in AHIC, has taken part in the selection of specific focus areas for ONC to pursue in its initial activities to promote health IT. Those activities have largely taken place through a series of contracts with a number of nongovernmental entities. ONC has sought through these contracts to address issues affecting wider use of health IT, including standards harmonization, the certification of IT systems, and the development of a Nationwide Health Information Network. For example, the initial work on standards harmonization, conducted under contract to ONC by the Healthcare Information Technology Standards Panel (HITSP), focused on three targeted areas: biosurveillance, sharing laboratory results across institutions, and patient registration and medication history. Meanwhile, the Certification Commission for Health Information Technology (CCHIT) has worked under a separate contract with ONC to develop and apply certification criteria for electronic health record products used in physician offices, with some initial work on certification of electronic health record products for inpatient care as well. CMS is also represented on the Quality Workgroup that AHIC created in August 2006 as a first step in promoting the use of health IT for quality data collection and submission. One of seven workgroups appointed by AHIC, the Quality Workgroup received a specific charge to specify how health IT should capture, aggregate, and report inpatient as well as outpatient quality data. It plans to address this charge by adding activities related to using IT for quality data collection to the work performed by HITSP and CCHIT addressing other objectives under their ongoing ONC contracts. Members of the Quality Workgroup, along with AHIC itself, have recently begun to consider the specific focus areas to include in the directions given to HITSP and CCHIT for their activities during the coming year. 
Early discussions among AHIC members indicated that they would try to select focus areas that built on the work already completed by ONC’s contractors and that targeted specific improvements in quality data collection that could also support other priorities for IT development that AHIC had identified. The focus areas that AHIC selects will, over time, influence the decisions that HHS makes regarding the resources it will allocate and the specific steps it will take to overcome the limitations of existing IT systems for quality data collection and submission. In a previous report and subsequent testimony, we noted that ONC’s overall approach lacked detailed plans and milestones to ensure that the goals articulated in its strategic framework were met. We pointed out that without setting milestones and tracking progress toward completing them, HHS cannot tell if the necessary steps are in place to provide the building blocks for achieving its overall objectives. HHS concurred with our recommendation that it establish detailed plans and milestones for each phase of its health IT strategic framework, but it has not yet released any such plans, milestones, or a time frame for completion. Moreover, HHS has not announced any detailed plans or milestones or a time frame relating to the efforts of the Quality Workgroup to promote the use of health IT to capture, aggregate, and report inpatient and outpatient quality data. Without such plans, it will be difficult to assess how much the focus areas AHIC selects in the near term for its contracted activities will contribute to enabling the Quality Workgroup to fulfill its charge in a timely way. There is widespread agreement on the importance of hospital quality data. The Congress made the APU program permanent to provide a financial incentive for hospitals to submit quality data to CMS and directed the Secretary of HHS to increase the number of measures for which hospitals would have to provide data.
In addition, the hospitals we visited reported finding value in the quality data they collected and submitted to CMS to improve care. Collecting quality data is a complex and labor-intensive process. Hospital officials told us that as the number of quality measures required by CMS increased, the number of clinically trained staff required to collect and submit quality data increased proportionately. They also told us that increased use of IT facilitates the collection and submission of quality data and thereby lessens the demand for greater staff resources. The degree to which existing IT systems can facilitate data collection is, however, constrained by limitations such as the prevalence of data recorded as unstructured narrative or text. Overcoming these limitations would enhance the potential of IT systems to ease the demand on hospital resources. Promoting the use of health IT for quality data collection is 1 of 14 objectives that HHS has identified in its broader effort to encourage the development and nationwide implementation of interoperable IT in health care. The extent to which HHS can overcome the limitations of existing IT systems and make progress on this objective will depend in part on where this objective falls on the list of priorities for the broader effort. To date, HHS has identified no detailed plans, milestones, or time frames for either the broad effort or the specific objective on promoting the use of health IT for collecting quality data. Without such plans, HHS cannot track its progress in promoting the use of health IT for collecting quality data, making it less likely that HHS will achieve that objective in a timely way. Our analysis indicates that unless activities to facilitate greater use of IT for quality data collection and submission proceed promptly, hospitals may have difficulty collecting and submitting quality data required for an expanded APU program. 
To support the expansion of quality measures for the APU program, we recommend that the Secretary of HHS take the following actions: identify the specific steps that the department plans to take to promote the use of health IT for the collection and submission of data for CMS’s hospital quality measures; and inform interested parties about those steps and the expected time frame, including milestones for completing them. In commenting on a draft of this report on behalf of HHS, CMS expressed its appreciation of our thorough analysis of the processes that hospitals use to report quality data and the role that IT systems can play in that reporting, and it concurred with our two recommendations. (CMS’s comments appear in app. V.) With respect to the recommendations, CMS stated that it will continue to participate in relevant HHS studies and workgroups, and, as appropriate, it will inform interested parties regarding progress in the implementation of health IT for the collection and submission of hospital quality data as specific steps, including time frames and milestones, are identified. In addition, as health IT is implemented, CMS anticipates that a formal plan will be developed that includes training for providers in the use of health IT for reporting quality data. CMS also provided technical comments that we incorporated where appropriate. CMS made two additional comments relating to the information provided on our case study hospitals and our discussion of patients excluded from the hospital performance assessments. CMS suggested that we describe the level of health IT adoption in the case study hospitals in table 1 of appendix III; this information was already provided in table 4 of appendix III. CMS suggested that we highlight the application of patient exclusions in adapting health IT for quality data collection and submission. 
We chose not to because our analysis showed that the degree of challenge depended on the nature of the information required for a given data element. Exclusions based on billing data, such as discharge status, pose much less difficulty than other exclusions, such as checking for contraindications to ACEIs and ARBs for LVSD, which require a wide range of clinical information. CMS noted that the AHIC Quality Workgroup had presented its initial set of recommendations at AHIC’s most recent meeting on March 13, 2007, and provided a copy of those recommendations as an appendix to its comments. The agency characterized these recommendations as first steps, with initial timelines, to address the complex issues that affect implementation of health IT for quality data collection and submission. Specifically with reference to collecting quality data from hospitals as well as physicians, the Quality Workgroup recommended the appointment of an expert panel that would designate a set of quality measures to have priority for standardization of their data elements, which, in turn, would enable automation of their collection and submission using electronic health records and health information exchange. The first recommendations from the expert panel are due June 5, 2007. The work of the expert panel is intended to guide subsequent efforts by HITSP to fill identified gaps in related data standards and by CCHIT to develop criteria for certifying electronic health record products. In addition, the Quality Workgroup recommended that CMS and the Agency for Healthcare Research and Quality (AHRQ) both work to bring together the developers of health quality measures and health IT vendors, so that development of future health IT systems would take greater account of the data requirements of emerging quality measures. AHIC approved these recommendations from the Quality Workgroup at its March 13 meeting. 
We also sent to each of the eight case study hospitals sections from the appendixes pertaining to that hospital. We asked each hospital to check that the section accurately described its processes for collecting and submitting quality data as well as related information on its characteristics and resources. Officials from four of the eight hospitals responded and provided technical comments that we incorporated where appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of HHS, the Administrator of CMS, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Appendix I: Quality Measures

Heart attack: angiotensin-converting enzyme inhibitor or angiotensin receptor blocker for left ventricular systolic dysfunction; beta blocker at hospital arrival; beta blocker prescribed at discharge; thrombolytic agent received within 30 minutes of hospital arrival; percutaneous coronary intervention received within 120 minutes of hospital arrival. Heart failure: angiotensin-converting enzyme inhibitor or angiotensin receptor blocker for left ventricular systolic dysfunction. Pneumonia: initial antibiotic received within 4 hours of hospital arrival; blood culture performed before first antibiotic received in hospital. Surgical infection prevention: prophylactic antibiotic received within 1 hour prior to surgical incision; prophylactic antibiotics discontinued within 24 hours after surgery end time.
Appendix II: Data Elements Used to Calculate Hospital Performance on a Heart Attack Quality Measure

The data elements include: admission source; transfer from another emergency department; computed duration of hospital stay in days, taken from the difference between the “Admission Date” and “Discharge Date” elements; discharge status; contraindication to beta blocker on arrival; beta blocker received within 24 hours after hospital arrival; number of patients who received a beta blocker; and number of patients for whom a beta blocker was appropriate. Included codes consist of eight different values for admission source that represent patients who were admitted from any source other than those listed in footnote b, including physician referral, skilled nursing facility, and the hospital’s emergency room. Excluded codes consist of three different values for admission source that represent patients who were transferred to this hospital from another acute care hospital, from a critical access hospital, or within the same hospital with a separate claim. Included codes consist of different values for discharge status that represent patients who were discharged to any setting other than those listed in footnote e, including home care, skilled nursing facility, and hospice.

The Leapfrog Group is a consortium of large private and public health care purchasers that publicly recognizes hospitals that have implemented certain specific quality and safety practices, such as computerized physician order entry. The projected reduction in fiscal year 2006 and fiscal year 2007 Medicare payments (rounded to the nearest $1,000) represents the amount that the hospital’s revenue from Medicare would have decreased for that fiscal year had the hospital not submitted quality data under the Annual Payment Update program. These estimates are based on information on the number and case mix of Medicare patients served by these hospitals during the previous period. This is the information that was available to hospital administrators from CMS at the beginning of the fiscal year.
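The heart attack measure reduces to a numerator/denominator calculation: of the patients for whom a beta blocker was appropriate (after exclusions such as transfers in and contraindications), what share received one within 24 hours of arrival? A minimal sketch of that logic, using hypothetical field names and made-up patient records rather than the actual CMS specifications:

```python
def performance_rate(patients: list[dict]) -> float:
    """Share of eligible heart attack patients who received a beta
    blocker within 24 hours of hospital arrival.  Illustrative only:
    the real decision rules involve many more exclusions and data
    elements than the two modeled here."""
    eligible = [p for p in patients
                if not p["transfer_in"]            # excluded admission source
                and not p["contraindication"]]     # beta blocker not appropriate
    received = [p for p in eligible if p["beta_blocker_within_24h"]]
    return len(received) / len(eligible)

cases = [
    {"transfer_in": False, "contraindication": False, "beta_blocker_within_24h": True},
    {"transfer_in": False, "contraindication": False, "beta_blocker_within_24h": False},
    {"transfer_in": True,  "contraindication": False, "beta_blocker_within_24h": True},
    {"transfer_in": False, "contraindication": True,  "beta_blocker_within_24h": False},
]
print(performance_rate(cases))  # 0.5 (1 of 2 eligible patients)
```

The arithmetic itself is the simple part; as the report describes, an abstractor currently derives each of these field values by hand from the medical record.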
The actual reduction would ultimately depend on the number and case mix of the Medicare patients that the hospital actually treated during the course of that fiscal year. The projected reduction for fiscal year 2007 was substantially larger because that was the first year in which the higher rate of reduction mandated by the Deficit Reduction Act of 2005—from 0.4 percentage points to 2.0 percentage points—took effect.

[Figure: Abstractor starts searching through paper records, then looks for additional information in electronic records (e.g., for echocardiogram results). Hospital copies, checks completeness of, and ships requested patient records.]

Hospitals were permitted to abstract quality data for a sample of eligible patients each quarter if the number of eligible patients met a certain threshold. Otherwise, the hospital was required to abstract quality data for all patients who met the inclusion criteria for any one of the four conditions. Hospitals could also choose not to sample, even if it were permitted under the CMS sampling procedures.

[Table entries: licensed practical nurse (LPN); medical records coder and RN with physician support; 20 minutes (average); none beyond reviews by CMS contractor; quarters of discharges from April 2005 through March 2006; quarter of discharges from January through March 2006.]

To examine how hospitals collect and submit quality data, and to determine the extent to which information technology (IT) facilitates those processes, we conducted case studies of eight individual acute care hospitals that collect and submit quality data to the Centers for Medicare & Medicaid Services (CMS). We chose this approach to obtain an in-depth understanding of these processes as they are currently experienced at the hospital level. For background information on the requirements that the hospitals had to satisfy, we reviewed CMS documents relevant to the Annual Payment Update (APU) program.
In particular, we examined multiple revisions of the Specifications Manual for National Hospital Quality Measures, which is issued jointly by CMS and the Joint Commission (formerly the Joint Commission on Accreditation of Healthcare Organizations). We structured our selection of hospitals for the eight case studies to provide a contrast of hospitals with highly sophisticated IT systems and hospitals with an average level of IT capability. We excluded critical access hospitals from this selection process because they are not included in the APU program. The selected hospitals varied on several hospital characteristics, including urban/rural location, size, teaching status, and membership in a system that linked multiple hospitals through shared ownership or other formal arrangements. (See app. III, table 1.) To select four hospitals with highly sophisticated IT systems, we relied on recommendations from interviews with a number of experts in the field of health IT, as well as on a recent review of the research literature on the costs and benefits of health IT and other published articles. Three of the four hospitals we chose were among those where much of the published research has taken place. They were all early adopters of health IT, and each had implemented internally developed IT systems. The fourth hospital had more recently acquired and adapted a commercially developed system. This hospital was distinguished by the extent to which it had replaced its paper medical records with an integrated system of electronic patient records. Each of these four case study hospitals was located in a different metropolitan area. We selected the four hospitals with less sophisticated IT systems from the geographic vicinity of the four hospitals already chosen, thus providing two case study hospitals from each of four metropolitan areas. 
We decided that one should be a rural hospital, using the Medicare definition of rural: a hospital located outside of a Metropolitan Statistical Area (MSA). To determine from which of the four metropolitan areas we should select a neighboring rural hospital, we analyzed data on Medicare-approved hospitals drawn from CMS’s Provider of Services (POS) file. We identified the rural hospitals located within 150 miles of each of the first four hospitals. From among those four sets of rural hospitals, we chose the set with the largest number of acute care hospitals as the set from which to choose our rural case study hospital. For each of the remaining three metropolitan areas, we used the hospitals listed in the POS file as short-term acute care hospitals located in the same MSAs as the three sets from which to choose our remaining three hospitals. We excluded hospitals located in a different state from the first hospital selected for that metropolitan area, so that all of the hospitals under consideration for that area would come under the jurisdiction of the same Quality Improvement Organization (QIO). To select the second case study hospital from among those available in or near each of the four metropolitan areas, we applied a procedure designed to produce a straightforward and unbiased selection. We began by recording the total number of cases for which each of these hospitals had reported results on CMS’s Web site for heart attack, heart failure, and pneumonia quality measures. We obtained this information from the Web site itself, running reports for each hospital that showed, for each quality measure, the number of cases that the hospital’s quality performance score was based on. Since some quality measures apply only to certain patients, we recorded the largest number of cases listed for any of the quality measures reported for a given condition.
Next we summed the cases for the three conditions and rank ordered the hospitals in each of the three MSAs, and the rural hospitals in the fourth metropolitan area, from most to least total cases submitted. We then made a preliminary selection by taking the hospital with the median value in each of those lists. By selecting the hospital with the median number of cases reported, we attempted to minimize the chances of picking a hospital that would represent an outlier compared to other hospitals in the selection pool. Before selecting the final four case study hospitals, we checked to make sure that the hospitals did not happen to have an unusually high level of IT capabilities with respect to electronic patient records. To do this, we contacted each of the selected hospitals and obtained a description of its current IT systems. We compared this description to the stages of electronic medical record implementation laid out by the Healthcare Information and Management Systems Society (HIMSS). The HIMSS model identifies eight stages based on the scope and sophistication of clinical functions implemented through a hospital’s system of electronic medical records. According to HIMSS, the large majority of hospitals in the United States are at the lower three stages. Based on the descriptions of these stages, we determined that none of the prospectively selected hospitals had IT systems that exceeded the third stage. We collected information about the processes used to collect and submit quality data from each of the eight case study hospitals through on-site interviews with hospital abstractors, quality managers, IT staff, and hospital administrators. We told these officials that neither they personally nor their hospitals would be identified by name in our report. The site visits took place between mid-July and early September 2006 and ranged in duration from 3 to 8 hours. 
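The sum-rank-and-take-the-median selection step can be sketched as follows; the hospital names and case counts here are hypothetical, whereas the report's actual counts came from reports run on CMS's Web site:

```python
def select_median_hospital(case_counts: dict[str, dict[str, int]]) -> str:
    """Pick the hospital with the median total of reported cases.

    `case_counts` maps each hospital to its largest number of cases
    reported for any quality measure, per condition (heart attack,
    heart failure, pneumonia).  Illustrative sketch of the selection
    procedure; for an even-sized pool it takes the upper middle.
    """
    totals = {h: sum(counts.values()) for h, counts in case_counts.items()}
    ranked = sorted(totals, key=totals.get)  # least to most total cases
    return ranked[len(ranked) // 2]          # median by rank position

pool = {
    "Hospital A": {"heart attack": 40, "heart failure": 90, "pneumonia": 70},
    "Hospital B": {"heart attack": 10, "heart failure": 25, "pneumonia": 30},
    "Hospital C": {"heart attack": 150, "heart failure": 200, "pneumonia": 180},
}
print(select_median_hospital(pool))  # Hospital A (total 200, between 65 and 530)
```

Taking the positional median keeps the choice mechanical and, as the report notes, avoids picking an outlier at either end of the list.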
Our data collection at each hospital was guided by a protocol that specified a series of topics to cover in our interviews. These topics included a description of the processes used at each hospital and the financial and staff resources devoted to quality data collection and submission. We pretested the protocol at two hospitals not included in our set of eight case study hospitals. As part of the protocol, we asked abstractors at each hospital to explain in detail how they found the information needed to determine the appropriate values for each of the data elements required for two specific quality measures: (1) angiotensin-converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB) for left ventricular systolic dysfunction (LVSD) for heart failure patients and (2) initial antibiotic received within 4 hours of hospital arrival for pneumonia patients. We selected these measures because they covered a number of different types of data elements, including those involving administration of medications, determining contraindications, date and time variables, and making clinical assessments such as whether a patient had LVSD. To determine the extent to which IT facilitated these processes at the eight case study hospitals, we included several topics on IT systems in our site visit protocol. We asked about any IT systems used by the abstractors in locating relevant clinical information in patient medical records and the specific advantages and limitations they encountered in using those systems. We also asked hospital officials to assess the potential for IT systems to provide higher levels of assistance for quality data collection and submission over time. If separate IT staff were involved in the hospital’s quality data collection and submission process, we included them in the interviews.
Where possible, we supplemented the information provided through interviews with direct observation of the processes used by hospitals to collect and submit quality data. We asked the case study hospitals to show us how they performed these processes, and five of the eight hospitals arranged for us to observe the collection of quality data for all or part of a patient record. We observed abstractors accessing clinical information from both paper and electronic records. We also obtained pertinent information about the case study hospitals from CMS documents and contractors. The estimated amount of dollars that the case study hospitals would have lost had they not submitted quality data to CMS, presented in appendix III, table 1, was calculated from data provided in documents made available to all hospitals at the start of each of the fiscal years. Information on the average number of patient charts abstracted quarterly by each case study hospital, shown in appendix III, table 3, was drawn from a table showing the number of patients for whom quality data were submitted to CMS’s clinical data warehouse. We obtained that table from the Iowa Foundation for Medical Care (IFMC), which is the CMS contractor that operates the clinical data warehouse. The IFMC table provided this information for all hospitals submitting quality data for discharges that occurred from April 2005 through March 2006. These were the most recent data available. The evidence that we obtained from our eight case study hospitals is specific to those hospitals. In particular, it does not offer a basis for relating any differences we observed among these individual hospitals to their differences on specific dimensions, such as size or teaching status. Nor can we generalize from the group of eight as a whole to acute care hospitals across the country. 
Furthermore, although we examined the processes hospitals used to collect and submit quality data and the role that IT plays in that process, we did not examine general IT adoption in the hospital industry. To obtain information on whether CMS has taken steps to promote the development of IT systems to facilitate quality data collection and submission, we interviewed CMS officials as well as CMS contractors and reviewed documents, including reports on related studies funded by CMS. We also interviewed officials at the Office of the National Coordinator for Health Information Technology (ONC) regarding the plans and activities of the American Health Information Community (AHIC) Quality Workgroup. In addition, we downloaded relevant documents from the AHIC Web site, including meeting agendas, prepared presentations, and meeting minutes for both AHIC as a whole and its Quality Workgroup. We conducted our work from February 2006 to April 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Linda T. Kohn, Assistant Director; Mohammad S. Khan; Eric A. Peterson; Roseanne Price; Jessica C. Smith; and Teresa F. Tucker made key contributions to this report.
Hospitals submit data in electronic form on a series of quality measures to the Centers for Medicare & Medicaid Services (CMS) and receive scores on their performance. Increasingly, the clinical information from which hospitals derive the quality data for CMS is stored in information technology (IT) systems. GAO was asked to examine (1) hospital processes to collect and submit quality data, (2) the extent to which IT facilitates hospitals' collection and submission of quality data, and (3) whether CMS has taken steps to promote the use of IT systems to facilitate the collection and submission of hospital quality data. GAO addressed these issues by conducting case studies of eight hospitals with varying levels of IT development and interviewing relevant officials at CMS and the Department of Health and Human Services (HHS). The eight case study hospitals used six steps to collect and submit quality data: (1) identify the patients, (2) locate information in their medical records, (3) determine appropriate values for the data elements, (4) transmit the quality data to CMS, (5) ensure that the quality data have been accepted by CMS, and (6) supply copies of selected medical records to CMS to validate the data. Several factors account for the complexity of abstracting all relevant information in a patient's medical record, including the content and organization of the medical record, the scope of information and the clinical judgment required for the data elements, and frequent changes by CMS in its data specifications. Due in part to these complexities, most of the case study hospitals relied on clinical staff to abstract the quality data. Increases in the number of quality measures required by CMS led to increased demands on clinical staff resources. Offsetting the demands placed on clinical staff were the benefits that case study hospitals reported finding in the quality data, such as providing feedback to clinicians and reports to hospital administrators. 
GAO's case studies showed that existing IT systems can help hospitals gather some quality data but are far from enabling hospitals to automate the abstraction process. IT systems helped hospital staff to abstract information from patients' medical records, in particular by improving accessibility to and legibility of the medical record. The limitations reported by officials in the case study hospitals included having a mix of paper and electronic records, which required staff to check multiple places to get the needed information; the prevalence of data recorded as unstructured narrative or text, which made locating the information time-consuming because it was not in a prescribed place in the record; and the inability of some IT systems to access related data stored in another IT system in the same hospital, which required staff to access each IT system separately to obtain related pieces of information. Hospital officials expected the scope and functionality of their IT systems to increase over time, but this process will occur over a period of years. CMS has sponsored studies and joined HHS initiatives to examine and promote the current and potential use of hospital IT systems to facilitate the collection and submission of quality data, but HHS lacks detailed plans, including milestones and a time frame against which to track its progress. CMS has joined efforts by HHS to promote the use of IT in health care, including a Quality Workgroup charged with specifying how IT could capture, aggregate, and report inpatient and outpatient quality data. HHS plans to expand the use of health IT for quality data collection and submission through contracts with nongovernmental entities that currently address the use of health IT for a range of other purposes. 
However, HHS has identified no detailed plans, milestones, or time frames for either its broad effort to encourage IT in health care nationwide or its specific objective to promote the use of health IT for quality data collection.
In 1998, we reported that difficulties in comparing EPA’s fiscal year 1999 and 1998 budget justifications arose because the 1999 budget justification was organized according to the agency’s strategic goals and objectives, whereas the 1998 justification was organized according to EPA’s program offices and components. Funds for EPA’s Science and Technology account were requested throughout the fiscal year 1999 budget justification for all 10 of the agency’s strategic goals and for 25 of its 45 strategic objectives. As shown in table 1, two strategic goals—Sound Science and Clean Air—accounted for 71 percent of the funds requested for Science and Technology. In its fiscal year 1999 budget justification, EPA did not show how the funds requested for each goal and objective would be allocated among its program offices or components. To be able to compare EPA’s requested fiscal year 1999 funds for Science and Technology to the previous fiscal year’s enacted funds, EPA would have had to maintain financial records in two different formats—by program components and by strategic goals and objectives—and to develop crosswalks to link information between the two. EPA maintained these two formats for some of the Science and Technology funds but not for others. Guidance from the Office of Management and Budget (OMB) does not require agencies to develop or provide crosswalks in their justifications when a budget format changes. However, OMB examiners or congressional committee staff may request crosswalks during their analyses of a budget request. Two of EPA’s program offices—Research and Development and Air and Radiation—accounted for over 97 percent of the Science and Technology funds that were requested for fiscal year 1999. The offices maintained their financial records differently. The Office of Research and Development maintained the enacted budget for fiscal year 1998 by program components (the old format) and also by EPA’s strategic goals and objectives (the new format). 
With these two formats of financial data, the Office of Research and Development could readily crosswalk, or provide links, to help compare the 1998 enacted funds, organized by program components, to the fiscal year 1999 budget justification, organized according to EPA’s strategic goals and objectives. In contrast, the Office of Air and Radiation maintained its financial records for fiscal year 1998 under EPA’s new strategic goals and objectives format but did not also maintain this information under the old format. Therefore, the Office of Air and Radiation could only estimate how the fiscal year 1998 enacted funds would have been allocated under the old format. For example, EPA estimated that the Office of Air and Radiation’s program component for radiation had an enacted fiscal year 1998 budget of $4.6 million. While the activities of this program component continued in fiscal year 1999, they were subsumed in the presentation of the budget for EPA’s strategic goals and objectives. Therefore, because the radiation program could not be readily identified in the fiscal year 1999 budget justification, congressional decisionmakers could not easily compare funds for it with the amount that had been enacted for fiscal year 1998. At our request, the Office of Air and Radiation estimated its enacted budget for fiscal year 1998 by program components and then developed a crosswalk to link those amounts with EPA’s strategic goals and objectives. The remaining 3 percent of the requested funds for Science and Technology is administered by the Office of Water; the Office of Administration and Resources Management; the Office of Prevention, Pesticides, and Toxic Substances; and the Office of Enforcement and Compliance Assurance. Two of these offices—the Office of Prevention, Pesticides, and Toxic Substances and the Office of Enforcement and Compliance Assurance—did not format financial information by program components. 
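The crosswalk the Office of Research and Development could provide amounts to a mapping that allocates each old-format program component’s funds across the new goals and objectives. A minimal Python sketch of the idea follows; the $4.6 million radiation figure echoes the amount discussed above, but the component names, goal and objective names, and allocation shares are hypothetical, chosen only to illustrate the mechanics:

```python
# A crosswalk maps each old-format program component to the share of its
# funds that falls under each new-format (goal, objective) pair.
# Names and shares here are hypothetical, for illustration only.

enacted_by_component = {  # old format; dollars in millions
    "Radiation": 4.6,
    "Air Quality Research": 120.0,
}

crosswalk = {  # component -> {(goal, objective): share of funds}
    "Radiation": {("Clean Air", "Reduce Radiation Exposure"): 1.0},
    "Air Quality Research": {
        ("Clean Air", "Criteria Pollutant Research"): 0.75,
        ("Sound Science", "Research for Human Health Risk"): 0.25,
    },
}

def enacted_by_objective(enacted, xwalk):
    """Restate old-format enacted amounts under the new goal/objective format."""
    result = {}
    for component, amount in enacted.items():
        for goal_objective, share in xwalk[component].items():
            result[goal_objective] = result.get(goal_objective, 0.0) + amount * share
    return result
```

With financial records kept in both formats, enacted amounts can be restated under either one, which is what makes a year-to-year comparison possible when a justification’s organization changes.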
These offices estimated how the 1998 enacted funds would be classified under their various program components. For fiscal year 2000, EPA made several changes to improve the clarity of its budget justification. According to EPA officials, they planned to provide tables for each goal and objective to show the amounts of funds requested for key programs, starting with the agency’s fiscal year 2000 budget justification. The justification for fiscal year 2000 does contain additional information, in the form of tables for each objective, that details some of the requested amounts by key programs. For example, under the objective Research for Human Health Risk, part of the Sound Science goal, the $56 million requested for the objective is divided into two key programs: Human Health Research and Endocrine Disruptor Research. According to EPA officials, they did not plan to identify in the fiscal year 2000 budget justification the program offices that would be administering the requested funds. However, they intended to make available backup information to show the program offices that would be administering the requested funds. Such information is available for the fiscal year 2000 budget request and was provided to this Committee. According to EPA officials and an EPA draft policy on budget execution, the agency’s Planning, Budgeting, Analysis, and Accountability System would record budget data by goals, objectives, subobjectives, program offices, and program components. EPA expected that this system would be fully implemented on October 1, 1998. According to EPA officials, the new Planning, Budgeting, Analysis, and Accountability System was implemented on this date; accordingly, EPA can provide information showing how the agency’s requested funds would be allocated according to any combination of goals, objectives, subobjectives, program offices, and key programs. 
EPA also planned to submit future budget justifications in the format of its strategic goals and objectives, as it had done for fiscal year 1999. That way, the formats for fiscal year 2000 and beyond would have been similar to those for the fiscal year 1999 justification, facilitating comparisons in future years. According to EPA officials, the strategic goals and objectives in EPA’s fiscal year 2000 justification for Science and Technology would be the same as those in its fiscal year 1999 justification. However, in fiscal year 1999 the agency began reassessing its strategic goals and objectives, as required by the Government Performance and Results Act. This assessment was meant to involve EPA’s working with state governments, tribal organizations, and congressional committees to evaluate its goals and objectives to determine if any of them should be modified. Upon completion of this assessment, if any of EPA’s goals or objectives change, the structure of the agency’s budget justification would change correspondingly. Changes to the strategic goals and objectives in the budget justifications could also require crosswalks and additional information to enable consistent year-to-year comparisons. EPA did maintain, as planned, the strategic goals and objectives format for its fiscal year 2000 budget justification. However, for the objectives that rely on Science and Technology funds, EPA made several changes without explanations or documentation to link the changes to the fiscal year 1999 budget justification. EPA (1) acknowledged that funds from one objective were allocated to several other objectives but did not identify the objectives or amounts, (2) did not identify funds in Science and Technology amounts that were transferred from Hazardous Substances Superfund, and (3) made other changes to the number or wording of objectives that rely on Science and Technology funds. 
In the fiscal year 1999 budget justification, under the strategic goal Sound Science, Improved Understanding of Environmental Risk, and Greater Innovation to Address Environmental Problems, EPA requested $86.6 million for the fifth objective: Enable Research on Innovative Approaches to Current and Future Environmental Problems; the fiscal year 1998 enacted amount was listed as $85.0 million. In the fiscal year 2000 budget justification, EPA marked this objective as “Not in Use.” The justification stated that the fiscal year 1999 request included the amounts for operating expenses and working capital for the Office of Research and Development under the same objective in the Sound Science goal. In the fiscal year 2000 budget justification, EPA allocated the amounts requested for this objective among the other goals and objectives to more accurately reflect the costs of the agency’s objectives. However, the fiscal year 2000 justification did not identify the specific objectives for either the $85.0 million enacted for fiscal year 1998 or the $86.6 million requested for fiscal year 1999. The allocation of funds was not specifically identified in the justification because EPA does not prepare crosswalks unless asked to by OMB or congressional committees. Therefore, a clear comparison of the 1999 and 2000 budget justifications cannot be made. Another aspect that made year-to-year comparisons difficult was EPA’s treatment of funds transferred to Science and Technology from the agency’s Superfund account. In the fiscal year 2000 justification, the Science and Technology amounts shown as enacted for fiscal year 1999 include $40 million transferred from the Hazardous Substances Superfund. In contrast, the requested amounts for fiscal year 2000 do not include the transfer from the Superfund. As a result, amounts enacted for fiscal year 1999 cannot be accurately compared to the amounts requested for fiscal year 2000. 
This discrepancy is particularly evident in the objective Reduce or Control Risks to Human Health, under the goal Better Waste Management, Restoration of Contaminated Waste Sites, and Emergency Response. The Science and Technology amounts shown in the budget justification for this objective appear in table 2. The $49.8 million shown as enacted for fiscal year 1999 includes a significant amount of the $40 million transferred from the Superfund account, according to an EPA official. However, because the specific amount is not shown, an objective-by-objective comparison of the Science and Technology budget authority for fiscal years 1999 and 2000 cannot be accurately made, and it appears that EPA is requesting a significant decrease for this objective. An EPA official stated that the $40 million was not separately identified because the congressional guidance on transferring the funds did not specifically state which objectives these funds were to support. In the fiscal year 1999 budget justification, the strategic goal Better Waste Management, Restoration of Contaminated Waste Sites, and Emergency Response had three objectives: (1) Reduce or Control Risks to Human Health, (2) Prevent Releases by Proper Facility Management, and (3) Respond to All Known Emergencies. In the fiscal year 1999 budget request, EPA indicated $6.3 million was enacted for Prevent Releases by Proper Facility Management in fiscal year 1998 and requested $6.6 million for fiscal year 1999. EPA indicated $1.6 million was enacted for Respond to All Known Emergencies in fiscal year 1998 and requested $1.6 million for fiscal year 1999. The fiscal year 2000 budget justification omits the second and third objectives and does not indicate where the funds previously directed to those objectives appear. Therefore, a clear comparison of budget requests year to year cannot be made. 
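The comparison problem the transfer creates can be stated simply: before comparing one year’s enacted amount with the next year’s request, any transfer included in only one of the two figures must be backed out. In the sketch below, the $49.8 million enacted figure and the $40 million total transfer come from the justification, but the portion of the transfer attributed to this objective and the fiscal year 2000 request are hypothetical, since the justification did not report them:

```python
def comparable_amount(reported, transfers_included=0.0):
    """Put a budget figure on a like basis by backing out interfund
    transfers that the figure being compared against does not include."""
    return reported - transfers_included

fy1999_enacted = 49.8    # millions, as shown in the justification
superfund_share = 35.0   # hypothetical portion of the $40 million transfer
fy2000_request = 14.5    # hypothetical

# A raw comparison suggests a steep cut; once the transfer is backed
# out, most of the apparent decrease disappears.
raw_change = fy2000_request - fy1999_enacted
adjusted_change = fy2000_request - comparable_amount(
    fy1999_enacted, superfund_share
)
```

Because the justification did not break the $40 million down by objective, this adjustment cannot be made from the document alone, which is why the objective appears to show a significant decrease.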
In the fiscal year 2000 budget justification, EPA added the second objective—Prevent, Reduce and Respond to Releases, Spills, Accidents, and Emergencies—to the strategic goal Better Waste Management, Restoration of Contaminated Waste Sites, and Emergency Response. EPA indicated that $8.8 million had been enacted for this objective in fiscal year 1999 and requested $9.4 million for this objective for fiscal year 2000. EPA did not identify which objectives in the fiscal year 1999 budget included the enacted $8.8 million and therefore a comparison to the prior budget justification was difficult. The other changes to the objectives were made as a result of the program offices’ reassessment of and modifications to subobjectives, which in turn led to changes in the agency’s objectives. While we do not question EPA’s revisions of its goals or objectives, the absence of a crosswalk or explanation does not enable a clear comparison of budget requests year to year. Mr. Chairman, this concludes my prepared statement. I will be pleased to respond to any questions that you or the Members of the Subcommittee may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. 
A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO discussed the Environmental Protection Agency's (EPA) budget justification for its Science and Technology account, and changes among the justifications for fiscal years (FY) 1998, 1999, and 2000, focusing on: (1) difficulties experienced in comparing EPA's Science and Technology budget justification for FY 1999 with those of previous years; and (2) actions that EPA planned and implemented in order to improve the clarity and comparability of the FY 2000 justification, and items that need further clarification. GAO noted that: (1) EPA's budget justification for FY 1999 could not be readily compared to amounts requested or enacted for FY 1998 and prior years because the justification did not show how the budget would be distributed among program offices or program components--information needed to link to the prior years' justifications; (2) the Office of Management and Budget does not require EPA to provide information to compare the justifications when the format changes; (3) to facilitate such comparisons, agency officials provided supplemental information to congressional committees; (4) because EPA did not maintain financial records by both program components and strategic goals and objectives for all enacted Science and Technology funds for FY 1998, it could not readily provide information for all amounts; (5) at GAO's request, EPA estimated the 1998 enacted amounts so that the 1998 budget could be compared with the FY 1999 request; (6) EPA implemented several changes to its FY 2000 justification to solve problems experienced in comparing the 1998 and 1999 budget justifications; (7) to improve the clarity of its budget justification for FY 2000, EPA included tables that detail, for each objective, how requested amounts are allocated among key programs; (8) backup information is also available that shows the program offices that will be administering the requested funds; (9) the agency also implemented a new accounting 
system that records budget data by goals and objectives, which enhances reporting financial data by goals and objectives; (10) while the budget justification followed the basic format reflecting the agency's strategic goals and objectives, EPA made changes to the objectives without explanations or documentation to link the changes to the FY 1999 budget justification; (11) for example, funds were allocated from one objective to other objectives without identifying the objectives or amounts, funds that included money transferred from another account were shown as Science and Technology funds, and changes were made to the number or wording of objectives without explanations; and (12) as a result, the FY 2000 budget justification cannot be completely compared with the FY 1999 justification without supplemental information.
Since NASA was established in 1958, its civil service workforce has fluctuated widely. In 1967, during the Apollo program, the workforce was at about 35,900. In the 1970s, due to unfunded programs, the workforce shrank, with several thousand employees involuntarily separated during the middle of the decade. By 1980, the workforce had stabilized near 21,000. It remained close to that level until 1986, when the space shuttle Challenger accident forced a reexamination of NASA. In the mid- and late 1980s, NASA began some ambitious new programs and its workforce began to grow again in the latter part of the decade and into the early 1990s—peaking in 1992 at more than 25,000. When the current administration took office in 1993, it initiated steps to reduce the size of the overall federal workforce. An executive order in February 1993 directed that the workforce be reduced by 4 percent (100,000 employees) by the end of fiscal year 1995. Then, in September 1993, the National Performance Review (NPR) recommended a reduction of 252,000 federal employees by 1999. By the time Congress passed the Federal Workforce Restructuring Act in March 1994, which legislated an overall reduction of 272,900 federal employees by 1999, NASA was already cutting its workforce, which was more than 24,000 in fiscal year 1993, in response to the executive order and the NPR recommendation. NASA currently plans to achieve an FTE level of about 17,500 employees by fiscal year 2000, an overall reduction of about 8,000 from its previously planned level for that year. The NASA Administrator explained: “As Administrator, I have decided not to take any precipitous action in FY 1996 to work toward these figures because to do so would involve a major disruption to our employees. 
It would not be fair to put them through this process to reach projections that are not hard and fast.” Through fiscal year 1995, NASA reduced its previously planned fiscal year 2000 FTE goal by over 3,000 FTEs, and it was planning to increase the aggregate reduction to about 4,000 FTEs in 1996. As shown in table 1, NASA had just over 24,700 FTE personnel in fiscal year 1993. This number dropped below 23,100 in fiscal year 1995, and it is expected to decrease to about 21,500 in fiscal year 1996. A key feature of the Federal Workforce Restructuring Act of 1994 was the authorization for agencies to pay up to $25,000 to separating workers—a buyout. Initially, NASA planned to offer this buyout to no more than 825 personnel. However, after nearly 2,000 employees indicated interest, NASA decided to offer 1,252 buyouts in 1994. This buyout was accepted by 1,178 employees. The buyout allocations focused on Headquarters, Marshall Space Flight Center, Lewis Research Center, and Kennedy Space Center—the installations most affected by the space station’s redesign and program management restructuring. No occupational categories were targeted in the 1994 buyout, but members of the Senior Executive Service, attorneys at Kennedy Space Center and Marshall Space Flight Center, and astronauts were not permitted buyouts, in part, because NASA felt that critical skills would be lost if these employees separated. After the 1994 buyout, NASA was confronted with an even larger downsizing challenge when the President’s fiscal year 1996 budget request reduced NASA’s budgets through fiscal year 2000 by $4.6 billion. NASA announced its intention to cover this reduction by cutting its infrastructure, including personnel, rather than canceling or cutting back program initiatives. The NASA Administrator tasked the agency to conduct a zero base review (ZBR), which included examining every civil service and support contractor position in NASA to find and eliminate overlap and overstaffing. 
One of the review’s conclusions was that NASA’s civil service workforce could be reduced to about 17,500 by the end of the decade without eliminating core programs. In anticipation of lower numbers of personnel, NASA offered another buyout in 1995. All employees were eligible and it was accepted by 1,482 employees. The 2,660 buyouts represented about 66 percent of the more than 4,000 employees who left NASA during fiscal years 1994 and 1995, as shown in table 2. NASA’s scientists and engineers had the largest reductions in numbers, but the smallest proportionate reductions, as shown in table 3. Consequently, as of September 30, 1995, scientists and engineers made up almost 58 percent of NASA’s FTP employees—slightly higher than a few years ago when they were about 56 percent of NASA’s workforce. NASA personnel managers consider the two buyouts a success. Given the rate of employee turnover experienced in the 2 years preceding the buyouts, they estimate that as many as 2,000 workers left the agency sooner than they would have without a buyout. As previously noted, buyouts accounted for about two-thirds of the employees leaving NASA in fiscal years 1994 and 1995. However, the buyout authority has expired. Without buyout authority, NASA personnel projections as of March 1996 showed that voluntary retirements and other separations should enable the agency to continue to meet its downsizing goals through fiscal year 1998, but attrition would not be sufficient in fiscal year 1999 to meet the proposed budgets of about half of NASA’s centers or for the agency as a whole. As a result, NASA personnel officials said a reduction-in-force would be required by late fiscal year 1998. One element of the expected difficulty in 1999 is that about 70 percent of NASA’s planned personnel reductions in the 1996-2000 period are scheduled in 1999 and 2000, with most of those—1,730 out of 2,822—scheduled for 1999. 
A NASA personnel official explained that reductions were being scheduled for late in the period, in part, to allow sufficient time to work out the details of the conversion to a space shuttle single prime contract at Kennedy Space Center. With the difficult launch schedule associated with the space station, NASA officials were concerned about mission performance if they lowered personnel levels too quickly at Kennedy. One of NASA’s major concerns is ensuring a proper skill mix throughout the agency. Currently, NASA’s strategy to deal with this concern is to rely on normal attrition, limited hiring focused on the most critical areas, and redeploying employees. NASA officials intend to refine their workforce planning efforts later this year. They stated that these refinements will include developing more detailed demographic information and turnover predictions, identifying specific skill-mix requirements, determining skill excesses and shortages, developing cross-training and relocation opportunities, and implementing specific programs and policies to help achieve an appropriate skill mix for the 17,500 FTE level. NASA’s efforts to meet its planned FTE level while avoiding involuntary separations will be affected by the results of several management and operational changes, including the shifting of program management from headquarters to field centers and the use of a single prime contractor for managing the space shuttle at Kennedy Space Center. NASA is in the process of shifting program management control from its headquarters program offices to the field centers. Prior to the ZBR, the NPR recommended several management changes at NASA, including reducing its headquarters workforce by 50 percent, eliminating duplication of functions at headquarters and the centers, and reducing management layers. The ZBR, which was undertaken to develop strategies to meet funding reductions, proposed giving the centers increased management control. 
The ZBR defined the centers’ missions and designated each as a Center of Excellence; that is, having preeminence within the agency for a recognized area of technical competence. A center’s mission denotes its role or responsibility in supporting NASA’s five major enterprises: Mission to Planet Earth, Aeronautics, Human Exploration and Development of Space, Space Science, and Space Technology. All program implementation responsibilities previously performed by headquarters offices are being reassigned to the field centers. In essence, it is intended that headquarters focus on what the agency does and why, while centers focus on executing programs. Table 4 shows the proposed ZBR reductions for program and staff offices in headquarters, and table 5 shows proposed reductions by NASA installation as of March 1996. In November 1995, NASA selected United Space Alliance—a Rockwell International and Lockheed Martin partnership—as the prime contractor for space flight operations. Although NASA will retain responsibility for launch decisions, NASA personnel will be less involved in day-to-day operations. Thus, fewer civil servants will be required to manage the program. However, conversion efforts are still underway and have not reached the point where NASA officials are able to judge the full extent to which NASA personnel will be involved in overseeing the contractor’s operations. Despite this uncertainty, NASA estimates that it should be able to make personnel reductions in the range of 700 to 1,100 FTEs at the Kennedy Space Center. Because the length of the transition period is uncertain, NASA personnel officials show these reductions occurring in 1999 and 2000. However, NASA officials believe the personnel reductions at this center will not be precipitous, but will occur more gradually over the transition period. 
During the course of the ZBR, the concept of institutes was identified as a potentially beneficial approach to maintain or improve the quality of national science in the face of organizational streamlining. The recommendation was made to reshape NASA’s science program under a reinvention strategy to bind NASA’s science program more closely to the larger community that it serves. The strategy involved “privatization” of a portion of NASA’s science program into a number of science institutes. The purpose for establishing science institutes was to preserve and improve the quality of NASA’s contributions to national science in the face of reductions in the size of the federal workforce. Under its Science Institute Plan, NASA intended to select universities, not-for-profit organizations, or consortia to operate 11 institutes under competitively awarded contracts or cooperative agreements to conduct research supporting the specific missions of selected NASA field centers, among other purposes. NASA was working with OMB to identify ways to make the transition to institutes attractive to NASA personnel. Proposed legislation for the agency’s fiscal year 1997 authorization bill was sent to OMB. The legislation would have facilitated the institutes’ employing of NASA personnel by relaxing current laws that restrict the employment of former federal workers by the private sector and enabling NASA employees to retain the bulk of their federal retirement benefits should they accept an offer of institute employment. Each institute would make its own decisions on hiring NASA employees. This proposal was not favorably reviewed in the executive branch, in part because of concern that covering former NASA personnel with federal benefits after they became private-sector employees would set a precedent to do the same for other federal employees whose jobs are privatized. 
As shown in table 6, the potential loss of civil service work years as a result of creating science institutes would vary greatly from center to center. According to NASA officials, the extent to which NASA personnel would voluntarily leave to accept the institutes’ offers of employment would depend largely on the enactment of the proposed legislation designed to ease such transfers. Without such legislation, NASA officials believe that the number of employees voluntarily leaving NASA would likely be negligible. On June 7, 1996, the NASA Administrator announced that, due to objections to the proposed legislation from the Office of Government Ethics, the Office of Personnel Management, and OMB, efforts to establish new science institutes other than the Biomedical Research Institute at Johnson Space Center would be discontinued. The Administrator stated that NASA did not intend to migrate civil service functions and positions to institutes absent legislative relief. However, NASA will continue to consider alternative options to the proposed institutes. NASA recently requested buyout authority from Congress. We have previously reported that savings from buyouts generally exceed those from reductions-in-force and that savings from downsizing largely depend, among other things, on whether the workforce restructuring has been effectively planned. As previously noted, NASA is currently involved in developing future workforce plans to help ensure a proper skill mix to support its programs and activities. In commenting on a draft of this report, NASA said it had a human resource planning activity underway in support of its fiscal year 1998 budget request. We believe that the results of this effort would provide useful information to Congress in reviewing both NASA’s request for buyout authority and its fiscal year 1998 budget request. Therefore, Congress may wish to consider requiring NASA to submit a workforce restructuring plan for achieving its fiscal year 2000 FTE goal. 
NASA officials concurred with our report and stated that it is a good synopsis of the progress made and the problems remaining. NASA said that civil service staffing at the Kennedy Space Center may not be able to go below 1,360 FTEs. NASA indicated that it would reassess the size of the reduction in preparing its fiscal year 1998 budget request. NASA also summarized its reasons for wanting new buyout authority. NASA’s comments are included in appendix I. We researched NASA’s workforce history, reviewed NASA workforce statistics and centers’ and headquarters’ downsizing plans, examined workforce reviews and studies prepared by NASA discussing its downsizing activities, and discussed with NASA officials how the most recent reductions were achieved. We also examined projected workforce statistics through fiscal year 2000 and obtained information on NASA’s approach to achieving future downsizing goals. We reviewed workforce statistics from three field centers—Goddard Space Flight Center, Marshall Space Flight Center, and Lewis Research Center—and we reviewed the centers’ strategies for meeting future reductions. We relied primarily on information contained in NASA’s Civil Service Workforce Report for most of our statistical data. We did not independently verify NASA’s statistics. The civil service workforce totals discussed in this report reflect NASA’s planning at the time of our review. They are likely to be revised further as NASA’s plans change. We conducted our review principally at NASA headquarters, Washington, D.C., and the Goddard Space Flight Center, Greenbelt, Maryland. We also discussed personnel-related issues with NASA officials at Marshall Space Flight Center, Huntsville, Alabama, and Lewis Research Center, Cleveland, Ohio. We performed our work from June 1995 to June 1996 in accordance with generally accepted government auditing standards. 
Unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from its issue date. At that time, we will send copies of this report to appropriate congressional committees, the NASA Administrator, the Director of OMB, and other interested parties upon request. If you or your staff have any questions concerning this report, please contact me on (202) 512-4841. The major contributors to this report were Frank Degnan, Lawrence Kiser, and Roberta Gaston.
Pursuant to a congressional request, GAO examined the National Aeronautics and Space Administration's (NASA) efforts to downsize its staff. GAO found that: (1) NASA has reduced its fiscal year (FY) 2000 full-time equivalent (FTE) goal by more than 3,000 personnel; (2) NASA has provided eligible employees with voluntary separation incentive payments in exchange for their voluntary retirement or resignation; (3) two-thirds of the employees who left NASA in 1994 and 1995 took buyouts; (4) NASA will not be able to reduce its personnel levels by FY 2000 without invoking involuntary separation measures; (5) NASA is relying on normal attrition, limited hiring, and redeployment to ensure a proper mix of skills throughout the agency; (6) NASA is shifting its program management control from headquarters to field centers and is using a single prime contractor to manage its space shuttle program at Kennedy Space Center; and (7) NASA would like to develop space science institutes to improve the quality of its science programs, but these efforts have been largely abandoned due to concerns regarding the transfer of NASA employees to institute positions.
Most federal civilian employees are covered by the Civil Service Retirement System (CSRS) or the Federal Employees’ Retirement System. Both of these retirement plans include survivor benefit provisions. Three separate retirement plans apply to various groups of judges in the federal judiciary, with JSAS being available to participants in all three retirement plans to provide annuities to their surviving spouses and children. Appendix I provides additional information regarding retirement plans that are available to federal judges. JSAS was created in 1956 to help provide financial security for the families of deceased federal judges. It provides benefits to surviving eligible spouses and dependent children of judges who participate in the plan. Judges may elect coverage within 6 months of taking office, within 6 months of getting married, within 6 months of being elevated to a higher court, or during an open season authorized by statute. Active and senior judges currently contribute 2.2 percent of their salaries to JSAS, and retired judges contribute 3.5 percent of their retirement salaries to JSAS. Upon a judge’s death, the surviving spouse is to receive an annual annuity that equals 1.5 percent of the judge’s average annual salary during the 3 highest consecutive paid years (commonly known as the high-3) times the judge’s years of creditable service. The annuity may not exceed 50 percent of the high-3 and is guaranteed to be no less than 25 percent. Separately, an unmarried dependent child under age 18, or 22 if a full-time student, receives a survivor annuity that is equal to 10 percent of the judge’s high-3 or 20 percent of the judge’s high-3 divided by the number of eligible children, whichever is smaller. JSAS annuitants receive an annual adjustment in their annuities at the same time, and by the same percentage, as any cost-of-living adjustment (COLA) received by CSRS annuitants. Spouses and children are also eligible for Social Security survivor benefits. 
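The spouse and child annuity rules described above amount to a short calculation. The following sketch encodes them; the function names and the sample salary figures are illustrative, not taken from the report.

```python
def spouse_annuity(high3, years_of_service):
    """Spouse annuity: 1.5% of the high-3 per year of creditable service,
    floored at 25% and capped at 50% of the high-3."""
    raw = 0.015 * high3 * years_of_service
    return min(max(raw, 0.25 * high3), 0.50 * high3)

def child_annuity(high3, eligible_children):
    """Each eligible child receives the smaller of 10% of the high-3 or
    20% of the high-3 divided by the number of eligible children."""
    return min(0.10 * high3, 0.20 * high3 / eligible_children)

# Illustrative only: a judge with a $150,000 high-3 and 20 years of service
print(round(spouse_annuity(150_000, 20)))  # 45000 (within the 25%-50% band)
print(round(child_annuity(150_000, 3)))    # 10000 (20%/3 is smaller than 10%)
```

Note that the floor and cap bind at short and long service: with the same high-3, a 5-year judge's survivor still receives 25 percent of the high-3, and a 40-year judge's survivor receives no more than 50 percent.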
Since its inception in 1956, JSAS has changed several times. Because of concern that too few judges were participating in the plan (74 percent of federal judges participated in 1985, which was down from 90 percent in 1976), Congress made broad reforms effective in 1986 with the Judicial Improvements Act of 1985. The 1985 act (1) increased the annuity formula for surviving spouses from 1.25 percent to the current 1.5 percent of the high-3 for each year of creditable service and (2) changed the provisions for surviving children’s benefits to relate benefit amounts to judges’ high-3 rather than the specific dollar amounts provided in 1976 by the Judicial Survivors’ Annuities Reform Act. In recognition of the significant benefit improvements that were made, the 1985 act increased the amounts that judges were required to contribute from 4.5 percent to 5 percent of their salaries, including retirement salaries. The 1985 act also changed the requirements for government contributions to the plan. Under the 1976 Judicial Survivors’ Annuities Reform Act, the government matched the judges’ contributions of 4.5 percent of salaries and retirement salaries. The 1985 act modified this by specifying that the government would contribute the amounts necessary to fund any remaining cost over the future lifetime of current participants. That amount is limited to 9 percent of total covered salary each year. Despite the benefit improvements in the 1985 act, the rate of participation in JSAS continued to decline. In 1991, the rate of participation was about 40 percent overall and 25 percent for newly appointed judges. In response to concerns that required contributions of 5 percent may have created a disincentive to participate, Congress enacted the Federal Courts Administration Act of 1992. Under this act, participants’ contribution requirements were reduced to 2.2 percent of salaries for active and senior judges and 3.5 percent of retirement salaries for retired judges. 
The 1992 act also significantly increased benefits for survivors of retired judges. This increase was accomplished by including years spent in retirement in the calculation of creditable service and the high-3 salary averages. Additionally, the 1992 act allowed judges to stop contributing to the plan if they ceased to be married and granted benefits to survivors of any judge who died in the interim between leaving office and the commencement of a deferred annuity. As of September 30, 2004, there were 1,329 active and senior judges, 207 retired judges, and 304 survivor annuitants covered under JSAS, compared with 1,265 active and senior judges, 193 retired judges, and 283 survivor annuitants as of September 30, 2002. AOUSC is responsible for administering and maintaining reliable information on JSAS. JSAS is financed by judges’ contributions and direct appropriations in an amount estimated to be sufficient to fund future benefits paid to survivors of current and deceased participants. The federal government’s contribution is approved through an annual appropriation and is not based on a rate or percentage of the judges’ salaries. To determine the annual contribution of the federal government, AOUSC engages an enrolled actuary to perform the calculation of funding needed based on the difference between the present value of the expected future benefit payments to participants and the value of net assets in the plan. Appendix II provides more details on the formulas used to determine participants’ and the federal government’s contributions and lump sum payments. The cost of a retirement or survivor benefit plan is typically not measured by annual expenditures for benefits. Such expenditures are not an indicator of the overall long-term cost of a plan. The more complete calculation of a plan’s cost is the present value of projected future outlays to retirees or survivors, based on the current pool of participants, with such costs allocated annually. 
This annual cost allocation is referred to as the normal cost. Normal cost calculations, prepared by an actuary, are estimates and require that many actuarial assumptions be made about the future, including mortality rates, turnover rates, returns on investment, salary increases, and COLA increases over the life spans of current participants and beneficiaries. The plan’s actuary, using the plan’s funding method—in this case, the aggregate cost method—determines the plan’s normal cost. Under the aggregate cost method, the normal cost is the level percentage of future salaries that will be sufficient, along with investment earnings and the plan’s assets, to pay the plan’s benefits for current participants and beneficiaries. There are many acceptable actuarial methods for calculating normal cost. Regardless of which cost method is chosen, the expected total long-term cost of the plan should be the same; however, year-to-year costs may differ, depending on the cost method used. Our objectives were to determine whether participating judges’ contributions for the 3 plan years ending on September 30, 2004, funded at least 50 percent of the JSAS costs and, if not, what adjustments in the contribution rates would be needed to achieve the 50 percent ratio. To satisfy our objectives, we examined the normal costs reported in the JSAS annual report submitted by AOUSC to the Comptroller General for plan years 2002 through 2004. We also examined participants’ contributions, the federal government’s contribution, and other relevant information in each annual report. An independent accounting firm hired by AOUSC audited the JSAS financial and actuarial information included in the JSAS annual reports, with input from an enrolled actuary regarding relevant data, such as actuarial present value of accumulated plan benefits. An enrolled actuary certified those amounts that are included in the JSAS annual reports. 
We discussed the contents of the JSAS reports with officials from AOUSC for the 3 plan years (2002 through 2004). In addition, we discussed with the enrolled actuary the actuarial assumptions made to project future benefits of the plan. We did not independently audit the JSAS annual report or the actuarially calculated cost figures. We performed our review in Washington, D.C., from May 2005 through July 2005, in accordance with U.S. generally accepted government auditing standards. We made a draft of this report available to the Director of AOUSC for review and comment. The Director’s comments are reprinted in appendix III. For each of the JSAS plan years 2002 through 2004, participating judges funded more than 50 percent of the JSAS normal costs. In plan year 2002, participating judges paid approximately 75 percent of JSAS normal costs, and in plan years 2003 and 2004, they paid approximately 64 and 78 percent of JSAS normal costs, respectively. On the basis of data from plan years 2002, 2003, and 2004, participating judges paid, on average, approximately 72 percent of JSAS normal costs while the federal government’s share amounted to approximately 28 percent. Table 1 shows judges’ and the federal government’s contribution rates and shares of JSAS normal costs (using the aggregate cost method, which is discussed in app. II) for the period covered in our review. The judges’ and the federal government’s contribution rates for each of the 3 years, shown in table 1, were based on the actuarial valuation that occurred at the end of the prior year. For example, the judges’ contribution rate of 2.39 percent and the federal government’s contribution rate of 0.80 percent in plan year 2002 were based on the September 30, 2001, valuation contained in the plan year 2002 JSAS report. 
The judges’ contribution of JSAS normal costs shown in table 1 fluctuated from approximately 75 percent in plan year 2002, to approximately 64 percent in plan year 2003, and to 78 percent in plan year 2004. The federal government’s contribution of JSAS normal costs also varied, from approximately 25 percent in plan year 2002, to approximately 36 percent in plan year 2003, and to approximately 22 percent in plan year 2004. During those same years, judges’ contribution rates remained almost constant, while the federal government’s contribution rate increased from 0.80 percent of salaries in plan year 2002 to 1.34 percent of salaries in plan year 2003, and then decreased to 0.65 percent in plan year 2004. The variance in the federal government’s contribution rates was a result of the fluctuation in normal costs resulting from several combined factors, such as changes in assumptions; lower-than-expected return on plan assets; demographic changes—retirement, death, disability, new members, and pay increases; as well as an increase in plan benefit obligations. Specifically, the value of total plan assets increased from $473.8 million in plan year 2002 to $484.0 million in plan year 2003, and then decreased to $479.8 million in plan year 2004. However, accumulated plan benefit obligations increased steadily, from $385.4 million in plan year 2002, to $388.5 million in plan year 2003, and to $393.9 million in plan year 2004. Although the judges’ contribution rate remained fairly constant, their contribution of normal costs rose to approximately 78 percent in plan year 2004 because total normal costs decreased. During the 2004 plan year, contributions from the federal government and judges totaled almost $5.1 million, somewhat less than the actuarial cost of $6.9 million. 
A primary reason for the difference between total contributions and the plan’s actuarial cost was that the approximately 1.3 percent return on the market value of plan assets was lower than the 6.25 percent assumed rate of investment return on plan assets. The resulting actuarial loss increased the required contribution level for the plan by 0.82 percent of total payroll for participating judges. Based on information in JSAS actuarial reports for the 3 years under review, we have determined that participating judges’ future contributions do not have to increase in order to cover the minimum 50 percent of JSAS costs required by the Federal Courts Administration Act. We found that the current contribution rates of 2.2 percent of salaries for active and senior judges and 3.5 percent of retirement salaries for retired judges are sufficient to cover at least 50 percent of JSAS costs. As shown in table 1, the judges’ average contribution for JSAS costs for this review period was approximately 72 percent, which exceeded the 50 percent contribution goal for judges. Because future normal costs are estimates that may change in any given year, adjusting judges’ contribution rates whenever they are found to be generating more or less than 50 percent of JSAS costs is not practical. Future normal costs may change because of certain events that occur during the course of a year, such as the number of survivors or judges who die, the number of new judges electing to participate in JSAS, and the number of judges who retire, and because the values of, and rates of return on, plan assets could create normal statistical variances that would affect the annual normal costs of the plan. Because the plan has only 1,536 participants and 304 survivor annuitants, such variances can have a significant effect on expected normal costs and lead to short-term variability. 
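The statutory test GAO applies here reduces to a simple ratio check. A minimal sketch follows; the function names are ours, and the share figures are the ones reported for plan years 2002 through 2004.

```python
def judges_share(judge_contributions, total_normal_cost):
    """Fraction of the plan's normal cost funded by judges' contributions."""
    return judge_contributions / total_normal_cost

def adjustment_required(share, floor=0.50):
    """The Federal Courts Administration Act calls for a rate adjustment
    only when judges' contributions fund less than 50% of normal costs."""
    return share < floor

# Shares of normal cost funded by judges, plan years 2002-2004
shares = [0.75, 0.64, 0.78]
average = sum(shares) / len(shares)
print(round(average, 2))              # 0.72, i.e., roughly 72% on average
print(adjustment_required(average))   # False: no adjustment needed
```

Because the average share exceeds the 50 percent floor, the check returns no adjustment, which matches the report's conclusion for this review period.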
Therefore, it is important to take a long-term view when evaluating whether contribution rates for judges are appropriate to achieve a 50 percent JSAS contribution share for judges. For example, as shown in table 2, although the judges’ contribution share for plan year 2004 was approximately 78 percent, the judges’ average contribution share for plan years 1996 through 2004 was approximately 55 percent—significantly closer to the 50 percent contribution goal. Another drawback to making frequent changes to the judges’ contribution rate in response to short-term fluctuations in their contribution share could be a decline in JSAS participation. Increasing participation was a major reason for the changes made to JSAS in 1992. From plan years 1998 through 2004, the number of judges participating in JSAS increased 8 percent, from 1,420 to 1,536. We requested comments on a draft of this report from the Director of AOUSC or his designee. In a letter dated August 23, 2005, the Director provided written comments on the report, which we have reprinted in appendix III. AOUSC also provided technical comments, which we have incorporated as appropriate. In its comments, AOUSC stated that our report showed that judges’ contributions to JSAS have become disproportionately high, but that we were not suggesting a change in the contribution rate for judges. Specifically, AOUSC stated that we did not present in our report the adjustment that would be needed to the participating judges’ contribution rates to achieve the 50 percent funding of the program’s costs by the judges. In AOUSC’s view, this omission is not consistent with Congress’s intent in enacting the Federal Courts Administration Act of 1992. We disagree with AOUSC’s view as to the purpose of section 201(i) of the act. 
Since enactment, we have interpreted this section as providing a minimum percentage of the costs of the program to be borne by its participants because the statute requires us to recommend adjustments when the judges’ contributions have not achieved 50 percent of the costs of the fund. We do not view the section as calling for parity between the participants and the federal government with respect to funding the program. Thus, for the 3 years covered by this review, we determined and reported that judges’ contributions funded approximately 72 percent of normal costs of JSAS, and therefore, an adjustment to the judges’ contribution rates was not needed under the existing legislation because the judges’ contributions achieved 50 percent of JSAS costs. We have consistently applied this interpretation of the act’s requirement in all of our previous mandated reviews. However, if one were to interpret the act as calling for an equal sharing of the program’s costs between participants and the government, then, on the basis of the information contained in the JSAS actuarial report as of September 30, 2004, participating judges’ future contributions would have had to decrease a total of 0.86 percentage points below the current 2.2 percent of salaries for active judges and senior judges and 3.5 percent of retirement salaries for retired judges in order to fund 50 percent of JSAS costs over the past 3 years. If the decrease were distributed equally among the judges, those currently contributing 2.2 percent of salaries would have had to contribute 1.34 percent, and those currently contributing 3.5 percent of retirement salaries would have had to contribute 2.64 percent. As we have noted both in this report and prior reports, because of the yearly fluctuations that are experienced by JSAS, short-term trends are not sufficient for use in making informed decisions. 
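Under the alternative equal-sharing reading described above, the adjustment is a uniform 0.86-percentage-point cut to both contribution rates. Checking the arithmetic (the function name is ours):

```python
def reduced_rate(current_rate_pct, cut_pct=0.86):
    """Apply the uniform 0.86-percentage-point reduction that, per the
    September 30, 2004 valuation, would bring the judges' funding
    share down to 50% of JSAS costs."""
    return round(current_rate_pct - cut_pct, 2)

print(reduced_rate(2.2))  # 1.34 for active and senior judges
print(reduced_rate(3.5))  # 2.64 for retired judges
```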
As we stated in our report, future normal costs may change because of certain events that occur during the course of a year, such as the number of survivors or judges who die, the number of new judges electing to participate in JSAS, and the number of judges who retire. Also, the values of, and rates of return on, plan assets could create normal statistical variances that would affect the annual normal costs of the plan. Therefore, it is important to take a long-term view when evaluating whether rates for judges are appropriate to achieve a 50 percent minimum JSAS contribution share for judges. We are sending copies of this report to the Director of AOUSC. Copies of this report will be made available to others upon request. This report is also available at no charge on the GAO Web site at http://www.gao.gov. Please contact Steven J. Sebastian at (202) 512-3406 or [email protected] if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Hodge Herry, Assistant Director; Joseph Applebaum; Jacquelyn Hamilton; Amy Bowser; and Kwabena Ansong.

The Administrative Office of the United States Courts (AOUSC) administers three retirement plans for judges in the federal judiciary. The Judicial Retirement System automatically covers United States Supreme Court justices, federal circuit and district court judges, and territorial district court judges and is available, at their option, to the Administrative Assistant to the Chief Justice, the Director of AOUSC, and the Director of the Federal Judicial Center. The Judicial Officers’ Retirement Fund is available to bankruptcy and full-time magistrate judges. The United States Court of Federal Claims Judges’ Retirement System is available to the United States Court of Federal Claims judges. 
Also, except for judges who are automatically covered under the Judicial Retirement System, judges and judicial officials may opt to participate in the Federal Employees’ Retirement System (FERS) or elect to participate in the Judicial Retirement System for bankruptcy judges, magistrate judges, or United States Court of Federal Claims judges. Judges who retire under the judicial retirement plans generally continue to receive the full salary amounts that were paid immediately before retirement, assuming the judges met the age and service requirements. Retired territorial district court judges generally receive the same cost-of-living adjustment that Civil Service Retirement System retirees receive, except that their annuities cannot exceed 95 percent of an active district court judge’s salary. United States Court of Federal Claims judge retirees continue to receive the same salary payable to active United States Court of Federal Claims judges. Those in the Judicial Retirement System and the United States Court of Federal Claims Judges’ Retirement System are eligible to retire when the number of years of service and the judge’s age total at least 80, with a minimum retirement age of 65, and service ranging from 10 to 15 years. Those in the Judicial Officers’ Retirement Fund are eligible to retire at age 65 with at least 14 years of service or may retire at age 65 with 8 years of service, on a less than full salary retirement. Participants in all three judicial retirement plans are required to contribute to and receive Social Security benefits.

Aggregate funding method. This method, as used by the Judicial Survivors’ Annuities System (JSAS) plan, defines the normal cost as the level percentage of future salaries that will be sufficient, along with investment earnings and the plan’s assets, to pay the plan’s benefits for current participants and beneficiaries. 
The formula is as follows: The present value of future normal costs (PVFNC) equals the present value of future benefits less net asset value. PVFNC is the amount that remains to be financed by judges and the federal government. The normal cost (NC) percentage equals PVFNC divided by the present value of future salaries.

Federal government contribution. The following formula is used to determine the federal government’s contribution amount: The federal government contribution represents the portion of NC not covered by participants’ contributions.

Lump sum payout. Under JSAS, a lump sum payout may occur upon the dissolution of marriage either through divorce or death of spouse. Payroll contributions cease, but previous contributions remain in JSAS. Also, if there is no eligible surviving spouse or child upon the death of a participating judge, the lump sum payout to the judge’s designated beneficiaries is computed as follows: Lump sum payout equals the total amount paid into the plan by the judge, plus 3 percent annual interest accrued, less 2.2 percent of salaries for each participating year (the forfeited amount). In effect, the interest plus any amount contributed in excess of 2.2 percent of judges’ salaries will be refunded.
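These appendix II formulas can be written out directly. The sketch below uses our own function names and illustrative figures, and it takes accrued interest as a given input rather than modeling the 3 percent accrual schedule, which the report does not detail.

```python
def normal_cost_pct(pv_future_benefits, net_assets, pv_future_salaries):
    """Aggregate cost method: PVFNC = PV of future benefits - net assets;
    the normal cost percentage is PVFNC over the PV of future salaries."""
    pvfnc = pv_future_benefits - net_assets
    return pvfnc / pv_future_salaries

def government_contribution(total_normal_cost, judges_contributions):
    """The government funds the portion of normal cost that the judges'
    contributions do not cover."""
    return total_normal_cost - judges_contributions

def lump_sum_payout(total_contributions, accrued_interest, total_salaries):
    """With no eligible survivor: contributions plus accrued 3% interest,
    less the forfeited 2.2% of salary for each participating year
    (total_salaries is the sum of covered salaries over those years)."""
    forfeit = 0.022 * total_salaries
    return total_contributions + accrued_interest - forfeit

# Illustrative figures only: $550M PV of benefits, $480M in assets, and
# $2,000M PV of future salaries yield a normal cost of 3.5% of salaries.
print(normal_cost_pct(550e6, 480e6, 2000e6))

# A judge who contributed $30,000 on $1,000,000 of covered salaries, with
# $4,000 of accrued interest: the refund is the excess over the 2.2%
# forfeit ($22,000) plus the interest.
print(round(lump_sum_payout(30_000, 4_000, 1_000_000)))  # 12000
```

The lump-sum example matches the report's "in effect" summary: the $8,000 contributed above the 2.2 percent level is refunded along with the $4,000 of interest.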
The Judicial Survivors' Annuities System (JSAS) was created in 1956 to provide financial security for the families of deceased federal judges. It provides benefits to eligible spouses and dependent children of judges who elect coverage within 6 months of taking office, 6 months after getting married, or 6 months after being elevated to a higher court, or during an open season authorized by statute. Active and senior judges currently contribute 2.2 percent of their salaries to JSAS, and retired judges contribute 3.5 percent of their retirement salaries to JSAS. Pursuant to the Federal Courts Administration Act of 1992 (Pub. L. No. 102-572), GAO is required to review JSAS costs every 3 years and determine whether the judges' contributions fund 50 percent of the plan's costs. If the contributions fund less than 50 percent of these costs, GAO is to determine what adjustments to the contribution rates would be needed to achieve the 50 percent ratio. GAO is not making any recommendations in this report. The Administrative Office of the United States Courts (AOUSC) believes that GAO should be recommending a reduction in the judges' contribution rate. GAO disagrees with AOUSC's interpretation of the act's requirements. During plan years 2002 through 2004, the participating judges' contributions funded more than 50 percent of the JSAS normal costs. The participating judges funded approximately 75 percent of JSAS normal costs during plan year 2002, 64 percent during plan year 2003, and 78 percent during plan year 2004. On average over the 3-year period, the participating judges funded approximately 72 percent of JSAS normal costs, while the federal government funded approximately 28 percent. 
The variance in the government's contribution rates was a result of the fluctuation in normal costs resulting from several combined factors, such as changes in assumptions; lower-than-expected rates of return on plan assets; demographic changes--retirement, death, disability, new members, and pay increases; as well as an increase in plan benefit obligations. For the 3 years covered by the review, GAO determined that an adjustment to the judges' contribution rate was not needed because their average contribution share for the review period was approximately 72 percent, which exceeded the minimum 50 percent contribution goal specified by law. In addition, GAO examined the annual share of normal costs covered by judges' contributions over a 9-year period and found that on average the participating judges funded approximately 55 percent of JSAS's normal costs.
The Capitol Hill anthrax incident occurred a month after the terrorist attacks on the World Trade Center and the Pentagon, while EPA and other federal agencies were continuing to respond to these attacks. The Capitol Police Board, which governs the U.S. Capitol Police Force, led the anthrax cleanup at the Capitol Hill site. Consisting at the time of our review of the House and Senate Sergeants-at-Arms and the Architect of the Capitol, the Board oversees the security of members of the Congress and the Capitol buildings, such as the congressional office buildings. The federal entities involved in the cleanup—including EPA, the Federal Emergency Management Agency, the Centers for Disease Control and Prevention, the U.S. Coast Guard, and the Department of the Army—reported to an incident commander who was appointed by the Capitol Police Board to make decisions on the day-to-day activities of the cleanup. The period from October 20, 2001, to November 13, 2001, is characterized as the emergency phase, which focused on identifying the extent of anthrax contamination; this was followed by the remedial, or cleanup, phase. Reporting to the Capitol Police Board’s incident commander, EPA managed the decontamination aspects of the cleanup. EPA’s activities at the Capitol Hill site included working with other agencies and entities to evaluate the effectiveness of potential disinfectants and cleanup technologies, isolating areas to prevent the spread of contamination, sampling to determine and confirm the extent of contamination (see fig. 1), removing critical items for special decontamination procedures, and cleaning up the contaminated areas and disposing of decontaminated items. At the Capitol Hill site, EPA sampled both surfaces and air in the buildings for the presence of anthrax, using three types of surface samples (wet swabs and wipes for nonporous surfaces and high efficiency particulate arresting (HEPA) vacuuming for porous materials) and four types of air samples. 
Four methods were used to remove anthrax found in congressional buildings: fumigating with chlorine dioxide gas, an antimicrobial pesticide; disinfecting with a liquid form of chlorine dioxide; disinfecting with Sandia foam; and using HEPA vacuuming (see fig. 2). During the cleanup, chlorine dioxide gas was identified as the best available fumigant for decontaminating parts of the Hart Senate Office Building, as well as for fumigating mail and packages. EPA oversaw the use of chlorine dioxide gas during three fumigation events in the Hart building. In addition, contractors removed items from congressional offices that were critical to congressional operations or personal effects of significance. These items were bagged, tagged, and moved for off-site decontamination. Approximately 3,250 bags of critical items were transported to a company in Richmond, Virginia, for decontamination treatment using ethylene oxide. Approximately 4,000 packages and other mail were collected from the mail rooms in congressional office buildings and also transported off site for decontamination using chlorine dioxide gas. In addition, drums of mail were sent to a facility in Lima, Ohio, for irradiation treatment. The Capitol Hill anthrax cleanup site included 26 buildings, most of them located in or near the Capitol Hill area of Washington, D.C. The buildings that required testing for anthrax contamination included congressional and judicial buildings; mail facilities; and other nearby buildings, such as the Library of Congress. Initial sampling was conducted along the route traveled by the letter opened in the Hart Building by tracing the route back to the Dirksen Senate Office Building (where the mail for the Senate is processed), to the P Street Warehouse (a restricted mail inspection facility overseen by the Capitol Police where congressional mail is inspected), and finally to the Brentwood postal facility (the U.S. 
Postal Service mail processing and distribution center for Washington, D.C.). Samples from 7 of the 26 buildings were found to contain anthrax, which required that these 7 undergo more thorough sampling, followed by decontamination and then by resampling to confirm that the anthrax had been eradicated. In total, approximately 10,000 samples were taken at the Capitol Hill site, about half of them from locations in the Hart Senate Office Building. EPA advised the Capitol Police Board’s incident commander about the extent to which buildings needed to be cleaned to make them safe. EPA, along with the Centers for Disease Control and Prevention, the Agency for Toxic Substances and Disease Registry, the National Institute for Occupational Safety and Health, and other relevant authorities, determined that the cleanup standard that would be fully protective of public health and the environment was “no detectable, viable anthrax spores.” The seven buildings that required decontamination were the Dirksen, Hart, and Russell Senate Office Buildings; the Ford and Longworth House Office Buildings; the U.S. Supreme Court Building; and the P Street Warehouse. Six of the seven buildings were cleared for reentry by the end of January 2002. The P Street Warehouse was cleared for reentry in March 2002. According to the lead EPA on-scene coordinator, no one became sick as a result of exposure to anthrax or chemical agents used during decontamination. EPA performed its work on the Capitol Hill anthrax cleanup under its Superfund program pursuant to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and the National Oil and Hazardous Substance Pollution Contingency Plan (NCP). Provisions of CERCLA, as amended, promote a coordinated federal, state, and local response to mitigate situations at sites that may pose an imminent and substantial threat to public health or the environment. 
The NCP is the federal government’s blueprint for responding to both oil spills and hazardous substance releases. It requires that an on-scene coordinator manage the federal response at the scene of a discharge of oil or a release of a hazardous substance that poses a threat to public health or the environment. The on-scene coordinator coordinates all federal efforts with, and provides support and information to, local, state, and regional response communities. Depending on where an incident occurs, the on-scene coordinator may be either an EPA or U.S. Coast Guard employee. EPA’s Superfund work typically involves using agency personnel and contractors from 1 of 10 EPA regions located throughout the country that have experience with the hazardous substances involved in the incident and the methods required to remove them. Removal actions are generally short-term, relatively inexpensive responses to releases or threats of releases of hazardous substances, pollutants, or contaminants that pose a danger to human health, welfare, or the environment. CERCLA generally limits the cost of a removal action to $2 million and the duration to 1 year. However, CERCLA exempts certain removal actions from these limitations, such as when continued response is required immediately to prevent, limit, or mitigate an emergency. EPA approved an emergency exemption to the $2-million statutory limit for the Capitol Hill anthrax cleanup on November 5, 2001. Typically, EPA provides one on-scene coordinator for a removal site to perform an initial assessment of the cleanup work needed, monitor the more detailed technical assessment and cleanup work being performed by EPA personnel and one or two contractors, and evaluate the results. However, the Capitol Hill site response was different from most hazardous materials emergency responses in its size and complexity, the nature of the contamination, and the requirement that the closed congressional buildings be reopened as soon as possible. 
As a result, EPA had to use a large number of on-scene coordinators, major contracts, and other federal agencies for assistance. In this case, EPA’s Mid-Atlantic Regional Office (Region III) provided the lead on-scene coordinator, who led the agency’s cleanup efforts. Region III, along with eight other regions, also provided about 50 other on-scene coordinators. Further, unlike most EPA cleanups, the lead on-scene coordinator was not in charge of the overall operations but instead reported to the incident commander, who in turn reported to the Capitol Police Board and House and Senate leaders. A substantial portion of the cleanup work at the Capitol Hill site was performed from October 2001 through January 2002, with most of the remaining work finished by April 2002. However, some additional costs have been incurred, and EPA personnel continued to work on activities related to the cleanup after April 2002. For example, the final disposal of items used at the cleanup continued after the buildings had been reopened. In addition, EPA conducted several internal reviews to identify lessons learned from this experience to help the agency prepare for responses to other potential biological or chemical weapons attacks. According to EPA, the agency expended about $27 million on the Capitol Hill anthrax cleanup, using Superfund program funding. Through fiscal year 2002 supplemental appropriations acts, the Congress provided EPA with additional funding for activities related to terrorism, and EPA allocated about $23 million of these funds to reimburse the Superfund program for expenditures associated with the Capitol Hill anthrax cleanup. Overall, EPA dedicated what it describes as unprecedented resources—contract staff and EPA personnel—to accomplish the cleanup of the anthrax site safely and effectively. 
Ninety-three percent of the $27 million in costs was incurred by EPA contractors who, among other things, conducted technical assessments and performed the decontamination tasks at the various Capitol Hill sites; the remaining 7 percent of costs was incurred by EPA personnel, largely for planning and overseeing the work of the contractors in accordance with the direction provided by the Capitol Police Board. Over the course of the cleanup, EPA revised its cost estimates several times as the nature and extent of the contamination became fully known and the solutions for removing and properly disposing of the anthrax were agreed upon and carried out. EPA’s various cost estimates covered the contracts and government agreements and generally did not include the payroll and travel costs associated with EPA personnel assigned to the Capitol Hill site. In November 2001, EPA increased its initial estimate for the cleanup to $5 million—more than doubling the initial statutory limit of $2 million. EPA revised its estimate for the cleanup five more times to continue work necessary to control and mitigate the threat of release of anthrax to the environment and to properly dispose of pollutants and contaminants from the site. The last revision—an increase from $25 million to $28 million—occurred in June 2002. (See table 1.) EPA adjusted its projections during the course of the cleanup as a result of a number of factors generally related to the uniqueness of the situation—the first use of anthrax as a terrorist weapon in this country. EPA had not addressed anthrax contamination in buildings previously, and protocols for responding to contamination by anthrax or other biological agents did not exist. In addition, some scientific and technical information needed to properly plan and conduct the anthrax cleanup was not readily available, and EPA did not, at that time, have registered antimicrobial agents approved for use against anthrax. 
Also, EPA had not compared the costs of candidate decontamination methods. Further, much was—and still is—unknown about the properties of lab-produced anthrax such as that used in this incident, which led to uncertainties about the health risks posed by the contamination and how it could spread. As a result, EPA and contractors had to develop plans for decontaminating large areas within buildings with limited practical knowledge; search for decontamination methods; assess their likely efficacy; implement them; and, at times, repeat the process if the methods did not work. Finally, EPA was one of a number of participants in the decisions made about the work to be done, the timing of the work, and the resources needed; it was not the primary decision maker as it would be in a typical Superfund cleanup. As EPA and contractor staff were beginning their work at the Capitol Hill anthrax site, the limitations of existing knowledge about the health risks associated with anthrax—such as what amount of exposure could cause illness or death—were becoming clearer. That the Capitol Hill site was potentially riskier than initially believed became evident when workers in the postal facilities where anthrax-laced letters were processed became ill; two of them subsequently died of inhalation anthrax. The scientific and medical information initially available to EPA and other agencies indicated that workers in postal facilities were not at risk of infection. Further, an elderly Connecticut woman—who may have been exposed to mail that had been contaminated with anthrax—died from anthrax inhalation, and a New York woman whose exposure to anthrax could not be linked to any mail or mail facilities also died. 
To accomplish the cleanup safely in the midst of significant scientific and technical uncertainty and changing information about how anthrax spreads, EPA called on about 150 of its staff in headquarters and the regions, incurring agency payroll and travel costs of $1.9 million—payroll costs amounted to $1.3 million and travel costs to about $600,000. According to our analysis of EPA’s Office of the Chief Financial Officer records, the majority of payroll and travel costs were incurred by on-scene coordinators from EPA’s regions who were overseeing and assisting on the cleanup. Further, EPA employed 27 contractors and obtained additional support from three government agencies at a total cost of about $25 million to provide assessment and cleanup services. These costs are discussed in the next section. Because of the magnitude and urgency of the health threat and the high priority placed on reopening the congressional buildings as soon as possible to mitigate disruptions to the functioning of the federal government, the Capitol Hill anthrax cleanup conducted by EPA and other federal agencies was accomplished fairly quickly, with the majority of contaminated buildings opened for business in about 3 months. Without the emphasis on reopening the buildings, for example, the cleanup site likely would not have been operated around the clock for months. In contrast, testing and decontamination of some buildings at other sites have taken much longer. For example, fumigation of the Brentwood postal facility was completed in March 2003, and this facility had not reopened as of May 2003. In addition, a news media building in Boca Raton, Florida, where the first letter containing anthrax was received in September 2001, remained closed as of May 2003. Almost all of the cleanup expenses—81 percent—paid to EPA’s 27 contractors and 3 government agencies were incurred under competitively awarded contracts. 
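The cost figures reported above lend themselves to a quick arithmetic cross-check. The sketch below is purely illustrative: the dollar amounts are the rounded totals quoted in this section, and the variable names are our own, not EPA's.

```python
# Illustrative cross-check of the rounded Capitol Hill cleanup cost figures
# quoted in this report. All amounts are in millions of dollars.
total_cost = 27.0        # total EPA expenditure on the cleanup
contract_share = 0.93    # share incurred under contracts and agreements
personnel_share = 0.07   # share incurred by EPA personnel

payroll = 1.3            # EPA payroll costs
travel = 0.6             # EPA travel costs (about $600,000)

# The two shares should account for the full total.
assert abs(contract_share + personnel_share - 1.0) < 1e-9

# Payroll plus travel should match the $1.9 million personnel figure.
assert abs(payroll + travel - 1.9) < 1e-9

# About 93 percent of $27 million is roughly the $25 million contract total.
contract_cost = round(total_cost * contract_share, 1)
print(contract_cost)  # 25.1, consistent with the roughly $25 million reported
```

The small difference between $25.1 million and the "about $25 million" contract total reflects rounding in the reported figures.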
For example, $20.3 million of the approximately $25 million total expenditures under contracts and government agreements were incurred under 10 existing, competitively awarded contracts that EPA routinely uses under the Superfund program to respond to releases or the threat of releases of hazardous substances, pollutants, or contaminants that may present imminent and substantial danger to the public health or welfare. Most of the contracts that were not competitively awarded cost less than $200,000 and provided supplies and technical services. For additional assistance, EPA also entered into agreements with two federal agencies and one state agency. (See fig. 3.) When responding to a release of hazardous substances, EPA first relies on its existing Superfund contracts. The Competition in Contracting Act of 1984 generally requires contracting agencies to obtain full and open competition through the use of competitive procedures, the dual purposes of which are to ensure that procurements are open to all responsible sources and to provide the government with the opportunity to receive fair and reasonable prices. In order to respond to emergencies involving releases of hazardous substances quickly, EPA issues competitively awarded multiyear Superfund contracts so that contractors with the necessary expertise are available on short notice when needed. The 10 EPA regions each negotiate and manage these Superfund contracts for work in their geographic area. EPA generally uses two types of contracts in an emergency response: technical contracts provide technical assistance for EPA’s site assessment and removal activities, and removal contracts provide emergency, time-critical removal services. 
EPA used 10 existing, competitively awarded Superfund contracts for most of the technical assessment and anthrax removal at the Capitol Hill site: 4 technical contracts, 4 removal contracts, and 2 other contracts that provided specific technical services and support; EPA also issued 2 additional competitively awarded contracts for security services and supplies. (See table 2.) The 10 existing contracts had been in place for up to 4 years when the anthrax incident occurred. While EPA’s Region III issued the Superfund contracts that incurred the most costs for the Capitol Hill anthrax cleanup, contracts from other regions were also used to augment Region III contracting resources. The 10 existing Superfund contracts accounted for $20.3 million—or about 80 percent—of the total contract and government agreement costs for the Capitol Hill cleanup. The four EPA technical contracts for the Capitol Hill anthrax cleanup, among other things, provided decontamination plans and sampled for anthrax in buildings. According to an EPA contracting official in Region III, technical contracts typically account for about 10 percent of total contract costs at a cleanup site. However, technical contract costs for the Capitol Hill site totaled about $7 million—or about 28 percent of the total contract costs. The four EPA removal contracts for the Capitol Hill anthrax cleanup provided personnel, equipment, and materials to remove items from the site for safekeeping, decontaminate areas where anthrax was found, and dispose of contaminated items. These removal contracts also provided equipment and personnel to conduct sampling because of the large number of samples that were required and the short time frames involved. The four EPA removal contract costs totaled about $10 million. The other existing EPA contracts provided either specific technical services or support. 
One contract, which provides engineering and analytical services to EPA, monitored the air to ensure that potentially harmful decontamination chemicals were not released outside the area in which they were being used. Another contract, typically used for long-term Superfund cleanups known as remedial cleanups, provided additional technical support, including sampling analysis and data evaluation at the site. These two contracts totaled $3 million. Federal contracting laws that generally require EPA to use a competitive bidding process permit some exceptions to this requirement, including emergency situations where there is an unusual or compelling urgency for obtaining the necessary supplies or services. On this basis, in November 2001, EPA’s Office of Acquisition Management gave the EPA contracting officers the authority to enter into contracts for the Capitol Hill anthrax site without using the normal competitive bidding process. Overall, EPA used 15 noncompetitively awarded contracts—that is, sole-source contracts—for supplies and for technical, removal, and laboratory services to support the cleanup of the Capitol Hill anthrax site. As shown in table 3, costs for three of the sole-source contracts exceeded $200,000, and most of the others were for considerably less. The largest noncompetitive contract used for the cleanup was with Kemron Environmental Services, Inc. Kemron provided EPA with HEPA vacuuming services, one of the four methods used to remove anthrax at the Capitol Hill site. EPA obtained the services of Kemron under the GSA federal supply schedule, relying on GSA’s determination that the prices offered under the GSA contract were fair and reasonable. The second largest noncompetitive contract was with the removal contractor HMHTTC Response Team, which provided additional workers in December 2001 to relieve the removal contractors who had worked at the site since October. 
The other sole-source contract over $200,000 was with Southwest Research Institute, a laboratory that analyzed spore strips used to test for anthrax after the decontamination efforts. This particular laboratory was selected because it was familiar with the protocol developed by the technical consultant who developed the spore strips. In addition, according to EPA officials, the lab could handle the quantity of spore strips the cleanup generated, it promised a quick turnaround time, and the fee was reasonable. The other noncompetitively awarded contracts used at the Capitol Hill site were for supplies needed for the contractors working at the site, such as respirators, air quality meters, and sampling kits, and for technical, removal, and laboratory services. For example, one technical contractor, U.S. Art Company, Inc., provided advice regarding the removal and decontamination of art objects in the Capitol Hill buildings. Appendix I provides details on the tasks performed under the competitively and noncompetitively awarded contracts. EPA obtained further support through two federal interagency agreements and one state agreement. EPA amended an existing interagency agreement with the U.S. Coast Guard to respond quickly to the Capitol Hill anthrax contamination. The U.S. Coast Guard National Strike Force provided tactical entry teams, specialized equipment, management support, and a deputy to the incident commander during the emergency phase of the cleanup. EPA also entered into a new interagency agreement with the U.S. Department of the Army for waste incineration services at Fort Detrick, Maryland. In addition, EPA used the State of Maryland Department of the Environment to review work plans and help coordinate EPA’s removal and disposal of anthrax. (See table 4.) EPA dedicated significant staff resources to overseeing the many contractors working on the Capitol Hill anthrax cleanup. 
Specifically, about 50 EPA staff ensured the contractors were on site and performing assigned tasks appropriately. In addition, EPA assigned an administrative specialist to ensure contract charges were accurate and reasonable. After the cleanup, EPA assessed its response to the Capitol Hill anthrax incident and concluded that, overall, it had effectively used its contracting resources. However, EPA also identified ways it could improve contract support for potential future emergency responses. Moreover, our review of the Capitol Hill anthrax incident revealed inconsistencies in oversight practices that could affect the quality of EPA’s contract cost oversight, such as the extent to which regions use the computerized cost-tracking system, the extent to which they assign dedicated administrative specialists to cleanup sites to oversee costs, and regions’ varying approaches to reviewing cost reports for technical contracts. EPA used emergency technical assessment and hazardous substance removal contractors to conduct the cleanup and dedicated significant staff resources to overseeing their work. Reporting to the Capitol Police Board, EPA staff provided extensive technical expertise in anthrax detection and removal to ensure that the Capitol Hill cleanup protected public health and the environment. In all, according to EPA’s Office of the Chief Financial Officer’s payroll list, about 150 EPA staff participated in the anthrax cleanup, including about 50 staff from nine regional offices who are experienced in leading and overseeing emergency environmental cleanup operations—the on-scene coordinators—and several staff from EPA’s Environmental Response Team who also have experience in emergency cleanup operations. The on-scene coordinators oversaw, and sometimes assisted with, the work of the contractors during shifts that ran 24 hours a day, 7 days a week, for about 3 months. 
Fifty-six EPA staff whose responsibilities at the Capitol Hill site included overseeing contractors responded to our survey about the oversight activities they performed. They reported that their tasks varied but that the task they most frequently carried out was overseeing contractors. Specifically, the EPA respondents to our survey spent, on average, 53 percent of their time overseeing contractors; 18 percent researching and developing technical plans; 13 percent coordinating with other federal agencies on the administration of the cleanup; and 14 percent on “other activities,” such as conducting pilot studies for the decontamination effort, sampling for anthrax, and organizing and administering cleanup activities. The EPA staff who reported overseeing contractors spent, on average, 54 percent of their time observing contractors to ensure they were on site and working on assigned tasks efficiently. These staff also spent, on average, 17 percent of their time reviewing the results of contractors’ work, and 8 percent of their time preparing daily or weekly work plans. Less frequently, staff who reported oversight activities also monitored delivery and quality of supplies, reviewed cost documents, and approved hours worked by contract personnel. While EPA staff who reviewed cost documents spent, on average, 3 percent of their time reviewing cost documents, one person—a site administrative officer—spent 100 percent of his time reviewing cost documents. As discussed in the following section, Region III generally uses site administrative officers to review both technical and removal contract costs in detail and to document these reviews before the on-scene coordinator reviews and approves them, thereby easing the cost-review workload of on-scene coordinators and allowing them to focus more on other cleanup management tasks and issues. 
At the Capitol Hill anthrax site, the site administrative officer reviewed the daily charges for four of the six removal contracts, which represented about 41 percent of the total contract costs. These reviews involved verifying the hours the contractor staff worked by comparing the hours billed with the hours recorded in sign-in sheets; reviewing travel costs to ensure they were within federal guidelines; and reviewing other expenditures of contractor staff, such as telephone charges, to ensure they were allowable. The review work papers provide documentation of the cost reviews performed. According to EPA officials, the technical contractors did not have sufficient staff on site to provide daily cost reports, and the site administrative officer, therefore, did not review the daily costs of the technical contracts at the Capitol Hill site. EPA requires reviews of the monthly cost reports from technical contractors before they are approved for payment by project officers in the regions; the reviews are generally performed by the on-scene coordinator at the site. However, we could not determine the extent to which the costs of the largest technical contract, which was managed by Region III, were reviewed by on-scene coordinators at the Capitol Hill site because the project officer responsible had retired, and EPA staff could not locate any documentation of reviews that had been requested or performed. As discussed further below, Region III implemented a new review process in 2002 that requires such documentation. 
EPA conducted four assessments that either focused on or included the Capitol Hill anthrax cleanup; the resulting reports are Regional Lessons Learned from the Capitol Hill Anthrax Response; 60-Day Counter-Terrorism Contracting Assessment Final Report; Federal On-Scene Coordinator’s After Action Report for the Capitol Hill Site, August 2002; and Challenges Faced During the Environmental Protection Agency’s Response to Anthrax and Recommendations for Enhancing Response Capabilities: A Lessons Learned Report, September 2002. One of these reviews, the 60-day counter-terrorism contracting assessment report, focused exclusively on the capability of EPA’s existing emergency response contracting network to respond to terrorist incidents, while the other three addressed a range of issues, such as operations and management, communications and coordination, health and safety, and the resources available to EPA. The overarching purpose of the four reviews was to derive lessons learned from EPA’s responses to the anthrax incidents in order to improve the agency’s ability to handle the kind of threats associated with large terrorist incidents. In this regard, while EPA concluded the cleanup was a success because the anthrax on Capitol Hill was removed efficiently and safely in the face of numerous and unprecedented challenges, the reports include a wide range of recommendations aimed at improving EPA’s response capabilities. Regarding contracting, the four reviews found that the agency’s emergency response contracting network met the response and procurement needs at the Capitol Hill site, but they also identified suggestions or recommendations for EPA to improve contract support for potential future responses. The lessons learned and recommendations included in the counter-terrorism contracting assessment report generally address the contracting issues that were identified in the broader reviews as well. 
The counter-terrorism contracting assessment report developed 13 recommendations, 9 of which it identified as the most urgent. These high-priority recommendations include the following: Facilitate counter-terrorism equipment acquisition and maintenance by compiling a national vendor database of sources of counter-terrorism equipment, supplies, and services. Create a strike team of headquarters and regional contracting officers and project officers that will be available for deployment 24/7 in the event of an emergency to assist with emergency procurement needs. Increase the administrative support provided to on-scene coordinators during a major terrorism-related response by, for example, providing staff to review daily cost reports, review invoices, and process on-site paperwork. According to its April 21, 2003, status report of emergency response contracting activities, EPA has completed or is currently taking steps to address the contracting recommendations in the counter-terrorism contracting report. Regarding the three recommendations discussed above, EPA has done the following: EPA has developed counter-terrorism equipment warehouse contracts for most of its regions. EPA developed a final draft document on establishing a national contract support team and released it within EPA for review on April 18, 2003. The workgroup addressing the need for administrative support for on-scene coordinators is working on a list of specific administrative support tasks that are required. The next section of this report discusses some other areas in which EPA’s contracting oversight might be improved that we identified during our review of the Capitol Hill anthrax cleanup. As a result of the convergence of EPA staff from nine of its regions at the Capitol Hill site, regional differences in contractor oversight were highlighted. Three oversight differences concern contract cost data and the review of these costs. 
First, regions vary in the way they use a computerized contract cost-tracking system called the Removal Cost Management System. All regions use the system for removal contracts; however, some regions also use it for technical contracts used at cleanup sites. Second, some regions require that invoice reviews be documented before payments are made; other regions have no such requirement. Third, regarding cost reviews, some regions hire administrative specialists to conduct detailed daily on-site reviews of contract costs in support of the on-scene coordinator, while others rely only on the on-scene coordinator to both manage cleanups and review and approve the contract costs. In 1988, to better support Superfund program management, EPA developed a computerized cost-tracking system for cleanups so the agency could obtain consistent documentation from contractors at all sites in a timely and efficient manner. Specific anticipated benefits included timely tracking of total costs to ensure that cleanup projects would not exceed authorized amounts, more efficient invoice verification, and the ability to develop more accurate cost estimates for cleanups. The tracking system provides up-to-date cost information organized under the main categories of “personnel,” “equipment,” and “other field costs”; the system further breaks “other field costs” into such subcategories as materials and supplies, travel, lodging, per diem, and subcontracts. Thus, to the extent that regions require contractors to input daily contract costs into the system, EPA can readily monitor total costs as well as individual cost categories on a daily basis. Daily cost information supports oversight better than monthly information because it allows timely, on-site reviews of costs that can uncover inefficient or excessive use of labor and equipment. 
While a 1989 memorandum requiring the use of the tracking system indicated that all site costs were to be input into the system, generally only the costs associated with removal contracts are entered daily into the system. For example, on the Capitol Hill anthrax cleanup, the expenditures ($10.2 million) for the four multi-year removal contracts were input into the system, but the expenditures ($7 million) for the four multi-year technical contracts were not. According to EPA officials, part of the rationale for inputting removal contract costs into the system is that the type of contract used—“time and materials” contracts—requires more oversight than some other contract types, such as fixed-price contracts. That is, the removal contracts provide for specific labor rates but do not specify the number of hours that may be applied under the contracts. Most of the technical contracts currently used by the regions are cost reimbursement contracts and a few are fixed-price contracts. Further, the fixed-price contracts used by the regions will include a cost reimbursement portion that may cover activities such as contractor travel and subcontracts, according to a Region III contract official. For example, the cost reimbursement portion of one of the fixed-price technical contracts used for the Capitol Hill anthrax cleanup was substantial—about half of the contract cost of $4.4 million was invoiced under the cost reimbursement portion, according to a Region III contract official. As with work performed on a time-and-materials basis, cost-reimbursement work requires appropriate surveillance during performance to provide reasonable assurance that efficient methods and effective cost controls are used. In addition, the technical contracts support work at numerous cleanup sites, and EPA also needs to track site-specific costs as well as total contract costs. 
However, because EPA does not consistently use the contractor cost-tracking system to track the costs incurred under its technical contracts, complete and consistent cost data on specific cleanup sites are not readily available. Although EPA generally does not use the tracking system for technical contract costs, individual on-scene coordinators in some regions have required that these costs, as well as others, such as those incurred by state and federal agencies, be entered into the system. According to two such on-scene coordinators with whom we spoke, a key benefit of using the tracking system is that it gives them timely information on costs, which helps them oversee and manage the work. According to an environmental engineer with EPA’s Environmental Response Team, the benefits of using the tracking system for all of the contracts would include having consistent cost data about each cleanup site in one place, thereby enabling the agency to quickly respond to the numerous site-specific questions frequently asked by EPA management, the Congress, the Office of Management and Budget, the Federal Emergency Management Agency, and others. For example, using the tracking system, one can quickly break out the expenditures into individual cost categories. The four Capitol Hill contracts entered into the tracking system include, in the aggregate, personnel costs of $2.8 million, lodging costs of $1.6 million, and per diem costs of $0.6 million. Using the tracking system, analyses of contract cost categories can be performed on individual contracts and individual sites. However, because technical contracts generally are not included in the tracking system, information on individual cost categories for the entire cleanup is incomplete. EPA’s Contracts Management Manual describes responsibilities and procedures for processing contractors’ invoices. Contract invoices are to be reviewed thoroughly for cost reasonableness and to be processed in a timely manner. 
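The kind of per-category roll-up described above can be sketched in a few lines. This is not EPA's actual system; the contract names and the per-contract split below are hypothetical, chosen only so that the category totals match the aggregate figures quoted in the text ($2.8 million personnel, $1.6 million lodging, $0.6 million per diem across the four removal contracts entered into the tracking system).

```python
# Hypothetical sketch of a cost-category roll-up across contracts at one
# cleanup site. Contract names and per-contract amounts are illustrative;
# only the aggregate category totals come from the report text.
from collections import defaultdict

# costs[contract][category], in millions of dollars
costs = {
    "removal_contract_1": {"personnel": 0.9, "lodging": 0.5, "per_diem": 0.2},
    "removal_contract_2": {"personnel": 0.7, "lodging": 0.4, "per_diem": 0.1},
    "removal_contract_3": {"personnel": 0.6, "lodging": 0.4, "per_diem": 0.2},
    "removal_contract_4": {"personnel": 0.6, "lodging": 0.3, "per_diem": 0.1},
}

def category_totals(costs):
    """Aggregate each cost category across all contracts at a site."""
    totals = defaultdict(float)
    for contract_costs in costs.values():
        for category, amount in contract_costs.items():
            totals[category] += amount
    return dict(totals)

totals = category_totals(costs)
```

A breakdown in this shape supports exactly the kind of site-specific questions the report mentions: either a single contract's categories or the site-wide totals can be read off directly.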
While the guidance may be tailored to specific contracts and the use of checklists is optional, EPA’s policy requires documentation to show that the appropriate reviews have been performed. The manual defines the roles of the various staff involved in reviewing and approving invoices. Among the key personnel in this process are the EPA staff who oversee the actual contract work—primarily on-scene coordinators in the case of the Capitol Hill anthrax site—and the project officer. In general, the staff who oversee the work are responsible for reviewing individual contract costs for reasonableness and informing the project officers of any problems with the costs, such as excess hours charged. The project officers are responsible for reviewing contract invoices for payment and completing and submitting invoice approval forms to EPA’s financial management center for payment. The contract invoices for the removal and technical contracts are typically highly detailed and presented in varying formats. Invoice reviews for removal contracts are generally more standardized across EPA than the invoice reviews for the technical contracts. Regions use varying invoice review approaches for the technical contracts. For example, beginning in November 2002, EPA Region III established a new process for reviewing invoices of technical contracts: the relevant EPA staff who oversaw or are overseeing the work at the sites receive monthly site-specific invoices from contractors, and the EPA staff are required to provide a written statement to the EPA project officer either indicating agreement with the costs or identifying questions about them. Region III revised its invoice review process after a new project officer with prior auditing experience was hired. 
This individual proposed the change to better ensure that invoices were reviewed by the on-site person familiar with the work that was performed—such as the on-scene coordinator—and that the review was documented before invoices were paid. Similarly, Regions V and IX send forms requiring responses to questions about the invoices, along with the monthly invoices, and require the work assignment managers overseeing the contract work to return the completed forms to the project officers. However, before this change, and during the Capitol Hill anthrax cleanup, Region III did not require written certification of invoice reviews. Region III’s earlier approach is similar to the one currently used in Region IV, where the project officer sends monthly invoices to the EPA work assignment managers for review and asks them to respond if they have concerns. Lacking a response from an EPA work assignment manager, the project officer approves the invoice for payment after a specified date. In these cases, the agency does not have documentation of the appropriate invoice reviews by the EPA staff who oversaw the contract work. Another variation is used in Region X: the project officer approves the monthly invoices without providing the EPA work assignment manager the opportunity to review them for reasonableness. As a result, the review is performed by an individual who did not oversee the work rather than by on-site staff who know the specifics of the work performed. EPA’s on-scene coordinators generally are responsible for managing all aspects of emergency environmental cleanups: organizing, directing, and documenting cleanup actions. Specific tasks include conducting field investigations, monitoring on-scene activities, and overseeing the cleanup actions. The on-scene coordinator is also the individual with primary responsibility for ensuring that cleanup costs are managed and tracked as the cleanup progresses. 
The cost reviews that are required to ensure that EPA approves only reasonable and allowable costs are detailed and time-consuming. An EPA cost management principle for the Superfund program is that costs can be managed and documented most effectively from the cleanup site as they occur. However, EPA's Removal Cost Management Manual recognizes that the demands on the on-scene coordinator's time and attention are great and that, therefore, some cost management responsibilities have to be delegated to other on-site or off-site personnel. To address this workload issue, Region III established an administrative position to provide on-site cost management support to its on-scene coordinators. As discussed earlier, one of Region III's site administrative officers worked on site at the Capitol Hill anthrax cleanup, supporting the lead on-scene coordinator essentially full-time from October 2001 through April 2002 and part-time for several more months. As a result, the daily costs for four removal contracts were examined, contractor hours were traced back to sign-in sheets, and equipment deliveries and uses were confirmed. The lead on-scene coordinator could not have conducted these detailed cost reviews because of other demands, and the other on-scene coordinators on site (many of whom were assigned to the site for only several weeks) were also involved in overseeing the work being performed and would not have been able to conduct timely, detailed cost reviews. Also, as discussed above, one of the lessons EPA learned from its assessments of its responses to the recent terrorist attacks, including the anthrax incidents, is that the agency needs to provide more administrative support to its on-scene coordinators who are responding to threats associated with terrorist incidents.
The 60-Day Counter-Terrorism Contracting Assessment Final Report specifically said that on-scene coordinators need increased support to review daily cost reports and invoices and to process paperwork on site. Although EPA's Region III provides cost management support to its on-scene coordinators on a routine basis, most of the regions do not have positions dedicated to assisting on-scene coordinators with their cost management responsibilities and, therefore, do not have trained support staff readily available to augment large or complex emergency cleanup efforts. Region III, which was responsible for the contracting for the Capitol Hill anthrax cleanup, has three such positions and was able to provide a site administrative officer to perform detailed cost reviews of removal contracts at the Capitol Hill site. Region II also has three similar positions. Five other regions we contacted do not have a similar position. People in or near the contaminated Capitol Hill buildings could have been harmed by anthrax that was not successfully removed or by a release of the chemicals used to decontaminate the buildings. For example, the decontaminant used in the fumigation cleanup method, chlorine dioxide gas, may irritate the respiratory tract at low concentrations and is fatal at high concentrations. In many cases, contractors can obtain pollution liability insurance to cover harm to third parties that may arise from cleanup activities; in other cases, the cost of such insurance may be prohibitive. In the case of the Capitol Hill anthrax cleanup, two contractors with key roles in the fumigation of the Hart Senate Office Building informed EPA that they were not able to obtain such insurance at a reasonable cost, and they requested indemnification.
As discussed below, EPA agreed to provide the indemnification authorized by CERCLA to the two contractors, protecting them from the financial liability that could result if a third party were injured by the contractors' release of a harmful substance, including anthrax. Numerous uncertainties existed about the use of chlorine dioxide gas for this task, and IT Corporation, which was tasked to fumigate the Hart office building using chlorine dioxide gas, would not start removal procedures without receiving indemnification from EPA against liability for damages. According to EPA officials, chlorine dioxide had not been used previously for removing anthrax or for fumigating such a large area. After EPA determined that IT Corporation and three of its subcontractors supplying the fumigation chemicals and technologies had diligently sought insurance and that none was available at a reasonable price, the agency agreed in November 2001 to provide them with indemnification. Specifically, EPA agreed to compensate IT Corporation and its three subcontractors up to $90 million if they were deemed liable for damages caused by a negligent release of a hazardous substance, pollutant, or contaminant, including but not limited to anthrax and chlorine dioxide. According to EPA officials, the negotiations for the indemnification agreement were completed in about 4 weeks. The indemnification does not cover liability for intentional misconduct or gross negligence. The cleanup appears to have been completed without harmful incidents, and according to EPA officials, neither IT Corporation nor the subcontractors have sought compensation under the indemnification agreement. In December 2001, after the agreement with IT Corporation was in place, another contractor supporting the fumigation requested and obtained indemnification.
CDM Federal Programs Corporation (CDM), whose responsibilities included placing the materials to test for the presence of anthrax during fumigation, received indemnification terms similar to those granted IT Corporation but with significantly lower compensation amounts. Specifically, EPA agreed to compensate CDM up to $1 million if it were deemed liable for damages caused by a negligent release of a hazardous substance, pollutant, or contaminant, including but not limited to anthrax. This indemnification also does not extend to liability arising from intentional misconduct or gross negligence. Negotiations for this agreement built on the previously negotiated agreement with IT Corporation, and, according to EPA officials, were accomplished in about a week. CDM was already working at the site when it requested indemnification and continued to work while the negotiations were in process. Although IT Corporation required that an indemnification agreement be in place before it would begin the decontamination of the Hart building, the cleanup itself was not delayed because other issues needed to be resolved before IT Corporation started the fumigation process. For example, tests had to be conducted and then reviewed by EPA, the Capitol Police Board, and others to confirm that chlorine dioxide had the antimicrobial properties to effectively destroy anthrax. By the time open issues were resolved and the decontamination could begin, EPA had reached its agreement with IT Corporation and its subcontractors. However, in other emergency cleanups, such negotiations could delay the start of decontamination work. In this regard, EPA has concluded that in the future, a more expedient way to indemnify contractors for emergency situations such as anthrax incidents needs to be in place to prevent delays. 
In fact, two of the EPA reviews of its responses to the anthrax incidents recommended that EPA take steps to expand contractor liability indemnification to address counter-terrorism response activities. Once Subtitle G of the recently enacted Homeland Security Act of 2002 is fully implemented, agency officials believe that their emergency response contractors will face little or no legal liability to injured third parties if the contractors use qualified antiterrorism technologies previously approved by the Secretary of Homeland Security. According to an EPA official, if this act had been in effect at the time of the anthrax cleanup, and the Department of Homeland Security had approved the chlorine dioxide technology, the contractor would not have needed any indemnification protection. In about 3 months and without harm to emergency response workers or congressional staff, EPA, the Capitol Police Board, and others planned and successfully conducted the first cleanup of office buildings contaminated by a lethal form of anthrax that had caused several deaths elsewhere. Moreover, EPA has taken the initiative to study its response actions to better prepare itself for other emergency cleanups, including other potential terrorist attacks, and has identified areas in which it could improve. Despite the success of the cleanup, our review identified certain inconsistencies in EPA's contractor cost oversight that may affect its quality. First, regarding tracking contract costs, because few regions use the cost-tracking system for technical as well as removal contracts, EPA does not have readily accessible, consistent contracting data on its cleanup sites. As a result, the agency was unable to readily respond to your questions about the costs of this cleanup, including the categories of expenditures, such as how much was spent on personnel, travel, and equipment.
In addition, EPA has less assurance that it is providing effective, consistent oversight of its contracts. Second, because EPA has not ensured that all of its regions document the reviews of contractor invoices conducted by cognizant on-site officials, the agency's ability to ensure that contractors' charges are accurate and reasonable is lessened. Finally, on-scene coordinators face many competing demands; therefore, their reviews of costs may be less timely than those that can be provided by a specialist working on site to support the on-scene coordinators' cost reviews. Such administrative support could provide EPA with better assurance that its payments to contractors are appropriately reviewed and adjusted on a routine basis. It could also be readily called upon to conduct these cost reviews during large and complex emergency cleanups, such as those that may stem from terrorism. To enhance its ability to ensure that the agency is providing effective and efficient contractor oversight, we recommend that the Administrator of EPA direct the Office of Solid Waste and Emergency Response to require (1) the regions to track and monitor both technical and removal contract cost data in the agency's computerized cost-tracking system and (2) the on-site staff who are responsible for reviewing contractor cleanup costs to certify that they have done so before the costs are approved for payment. In addition, we recommend that the Administrator direct the Office of Solid Waste and Emergency Response to examine whether more or all of the regions should hire specialists, either EPA or contractor staff, to support the on-scene coordinators by providing timely, detailed reviews of contract costs. If EPA uses contractor staff for this purpose, the agency will need to provide appropriate contract oversight and ensure that potential conflicts of interest are identified and mitigated. We provided copies of our draft report to EPA for review and comment.
In commenting on the draft, the Director of the Contract Management Center in the Office of Emergency and Remedial Response, Office of Solid Waste and Emergency Response, agreed to (1) consider adding the technical contracts to the computerized cost-tracking system as the agency awards the next round of these multiyear contracts and (2) ensure that all regions coordinate with on-site staff for invoice reviews prior to approval. The Director also said that EPA is currently examining providing additional administrative support at cleanup sites and is considering using contractor support when in-house positions are not available. One of the considerations the Director of the Contract Management Center cited regarding the inclusion of the technical contracts in the cost-tracking system is that reengineering the system to fit the different types of technical contracts that EPA uses might involve a considerable expense for the agency. Further, while she acknowledged that the cost-tracking system may be particularly applicable when the technical contractors are involved in removal (cleanup) activities, she said the additional cost of using the system may not be justified in some cases, such as for finite work performed under a negotiated work plan or a fixed level of effort. However, we believe reengineering costs may not be a barrier to using the system for both technical and removal contracts. Specifically, the system is already being used to track the costs of some of EPA's technical contracts. Further, an EPA environmental engineer with extensive experience working with the tracking system told us that changes to the system would not be required to add technical contracts. In addition, effective oversight of both time-and-materials work and cost-reimbursement work is essential to ensure that costs are reasonable and accurate.
However, the tracking system is currently used to support the on-site review of the time-and-materials work done under the removal contracts but not the cost-reimbursement work done under the technical contracts. We believe that the existing tracking system offers EPA an economical vehicle for enhancing both its contracting data and its contractor oversight by including the technical contracts in the cost-tracking system, as was envisioned when the system was developed. Regarding our recommendation that the on-site staff responsible for reviewing contractor invoices certify that they have done so before the costs are approved for payment, the Director agreed to require all EPA regions to coordinate their invoice reviews with the on-site staff before approving invoices for payment. If EPA requires the reviewers in all the regions to certify their invoice reviews, as we recommend and as some EPA regions currently do, the agency will be fully responsive to our recommendation. Such a requirement will provide greater assurance that the invoices EPA approves are accurate and reasonable. EPA told us that it is currently examining the issue of additional administrative support at cleanup sites by either EPA staff or contractors, and we have revised our recommendation to take into account concerns that would arise if EPA delegated its contract cost review function to contractors. EPA agreed that the information the report provides on the indemnification agreements that the agency negotiated with two contractors is accurate but suggested that the report also discuss the limitations of the indemnification that EPA can provide under CERCLA. Because our report accurately addresses the extent to which EPA agreed to indemnify contractors against liability for potential damages related to the cleanup, we believe that a broader discussion of indemnification issues is not necessary.
To determine the costs to EPA of removing anthrax from the Capitol Hill site, we obtained and reviewed cost information from the agency’s Office of the Chief Financial Officer. We discussed cleanup estimates and contract costs for the Capitol Hill anthrax site with EPA financial and contract staff. We also obtained detailed cost information on four of EPA’s removal contracts that was available from EPA’s Removal Cost Management System, the database that tracks costs by site and cost categories. We were not able to obtain this level of detailed cost information for all contractors because EPA does not use this database for all the contractors who work at cleanup sites. To determine how EPA’s costs for the cleanup were funded, we reviewed relevant EPA financial documentation and appropriations legislation that reimbursed the agency’s Superfund program for expenditures associated with the resources used on the cleanup. We did not validate or verify these data. To determine the extent to which the contracts used at the Capitol Hill anthrax site were competitively awarded, we reviewed EPA regional contract documents and discussed the competitive contract process EPA used with agency contract officials. We obtained and reviewed EPA noncompetitively awarded contract documents and the regulations that the agency is required to follow to justify awarding such contracts. We reviewed contracts and agency reports to identify the roles and tasks of the contractors that participated in the Capitol Hill anthrax cleanup and discussed specific contract roles and tasks with EPA officials who were responsible for the cleanup. To describe the extent to which EPA oversaw contractors’ work on the Capitol Hill anthrax cleanup to ensure it was done appropriately and the charges were reasonable, we interviewed Region III contract officials and the site administrative officer who oversaw four contracts during the cleanup. 
We also reviewed Capitol Hill site contracting files to examine documentation of the oversight provided. We reviewed documentation of, and talked with agency officials about, the current contract oversight practices EPA uses, including staff responsibilities for cost oversight and the use of the contractor cost-tracking system. In addition, in part because of delays in obtaining contract information, we surveyed the 63 EPA personnel whom the agency identified as having provided contractor oversight to obtain information on their roles in overseeing the contractors' cleanup work for the Capitol Hill anthrax site. Using a Web-based survey, we received responses from 56 individuals, a response rate of 89 percent. We also interviewed nine EPA personnel whom the survey identified as having spent considerable time at the cleanup site performing contract oversight. In addition, we reviewed four EPA assessments that either focused on or included the Capitol Hill anthrax cleanup and that identified contract oversight issues and recommendations. We obtained information on actions EPA has taken or is taking to respond to the recommendations addressing contracting issues. To describe EPA's indemnification of contractors against liability for potential damages, we reviewed CERCLA provisions and EPA guidance governing indemnity authority, as well as contract modifications regarding indemnification that EPA made to two contracts used for the Capitol Hill anthrax cleanup. We also discussed with EPA officials how the indemnification process affected the Capitol Hill anthrax cleanup. We conducted our review from June 2002 through May 2003 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days after the report date. At that time, we will send copies of this report to the Administrator of EPA and other interested parties.
We will make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix II.

- Prepare buildings for decontamination.
- Conduct and support decontamination operations, including fumigation with chlorine dioxide gas.
- Decontaminate interior surfaces of buildings, other structures, cars, and other vessels.
- Provide for collection, containment, and transportation and disposal of contaminated materials from the site operations.
- Provide support to EPA sampling teams and other federal responders, including response technicians, to assist with decontamination activities.
- Provide the on-scene coordinator and incident commander fumigation design procedures, including details on fumigant delivery; concentration; operating conditions, such as temperature and humidity; fumigant containment and recovery; and monitoring of parameters.
- Provide detailed design for delivery of fumigant, equipment requirements and specifications, flow schematics, and detailed schedules and operating procedures to use during fumigation.
- Provide a chlorine dioxide specialist to assist EPA in overseeing the fumigation setup.
- Provide technical support to the on-scene coordinator in developing a chronology of events at the site, including researching various files, documents, and logbooks in order to develop a comprehensive report.
- Monitor and assist with the oversight of the chlorine dioxide fumigation process.
- Assist with health and safety matters at the site, conduct sampling, assist with and oversee off-gassing, and inventory and return items being treated.
- Support the on-scene coordinator in conducting presentations and briefings related to post-treatment and design of chlorine dioxide use in the heating, ventilation, and air-conditioning system.
- Sample a small number of critical items (plastic, leather, and polyester) for ethylene oxide and its derivatives to determine how the ethylene oxide and its derivatives are maintained in the materials and off-gas over time.
- Provide decontamination services and other direct support to sampling teams.
- Decontaminate interior surfaces of buildings, other structures, and interior and exterior surfaces of cars and other vessels identified by the on-scene coordinator.
- Collect all expended cleaning agents and materials for treatment and/or disposal.
- Provide decontamination facilities and services for response personnel and their equipment.
- Inventory items (segregating clean and contaminated materials and salvageable and expendable items) and provide documentation of inventoried items.
- Propose a decontamination strategy for critical items (including personal items such as photographs, framed diplomas, and equipment).
- Decontaminate critical and salvageable items from the Capitol Complex, including setting up work zones for items to be decontaminated and for personnel decontamination.
- Return property after decontamination.
- Provide contamination reduction and isolation facilities and operations that improve and ensure safe access to contaminated areas and items and prevent further spread of contamination.
- Provide personnel and equipment, including a portable decontamination facility.
- Collect expended cleaning agents and materials for treatment and/or disposal.
- Dispose of materials or items that could not be decontaminated.
- Oversee preparation, handling, placement, and collection of spore strips used during fumigation with chlorine dioxide gas and ethylene oxide gas.
- Develop a procedure for spore strip emplacement; removal; and critical item tagging, tracking, and shipping.
- Provide sampling, such as swipe and high-efficiency particulate air (HEPA) vacuum sampling (including efforts to collect, prepare, and ship samples), item decontamination, and minor remediation work.
- Support critical item degassing activities in Beltsville, Maryland.
- Maintain critical item inventories and coordinate the release and return of critical items to congressional staffers.
- Support chlorine dioxide decontamination of congressional mail packages.
- Develop various documents and plans to be used during the response activities (e.g., standard operating procedures for sampling, decontamination, and source reduction).
- Provide reconnaissance, photo documentation, and sampling of congressional office buildings.
- Provide technical support for the selection and implementation of decontamination procedures; building-specific plan development for anthrax remediation, including sampling plans, isolation plans, decontamination plans, and item recovery plans; and sampling support for anthrax analysis using HEPA and wipe sampling techniques; and perform oversight of removal crews.
- Provide swab and HEPA sampling and decontamination support.
- Provide bag-and-tag operations of critical and salvageable items in congressional office buildings.
- Provide air monitoring operations during chlorine dioxide fumigation operations.
- Develop sampling and decontamination plans, sample labels and chain-of-custody forms, and maps to support sampling activities and to track sampling results.
- Perform sampling, monitoring, and decontamination of areas in the Capitol Hill complex.
- Conduct sample tracking and handling activities, including preparing samples for shipping.
- Compile and review background data and organize site documentation files.
- Provide technical support to the operations section and support to the EPA Mobile Lab.
- Assist in monitoring temperature and relative humidity inside office buildings and in monitoring chlorine dioxide, chlorine, wind speed and direction, and temperature and relative humidity in the surrounding area.
- Assist with the development and evaluation of anthrax fumigation procedures using spore strips in a test facility and train other contractors in the handling and placement of spore strips in the office building.
- Provide ambient air monitoring for chlorine dioxide using tape meters and a portable meteorological tower to document that no chlorine dioxide is being emitted from the treatment area.
- Provide on-site assistance to ensure that spore strip sampling is being conducted properly and that data management is being performed accurately and completely.
- Assist in the removal of items from the contaminated office suites in the congressional office buildings, including removal of contaminated office furniture, office equipment, and carpet.
- Construct isolation chambers, decontamination chambers, and other related structures.
- Provide sampling for anthrax in the Capitol Hill complex.
- Provide security personnel to staff the single entrance/exit and to patrol the perimeter of the storage location used for property removed from U.S. Senate offices during the cleanup, ensuring that no unauthorized personnel enter the work area and that property items are not removed from the work area without EPA approval.
- Provide a Porta Count Plus respirator fit tester.
- Perform air sampling and HEPA vacuuming services.
- Remove critical items and documents, spray affected areas with chlorine dioxide, and perform cleaning and breakdown of work zones.
- Assist EPA in the evaluation of possible remediation of the heating, ventilation, and air-conditioning system, including evaluation of affected areas and construction of critical barriers inside the ductwork to isolate affected areas from uncontaminated areas.
- After fumigation of the affected heating, ventilation, and air-conditioning system, provide confirmatory sampling support, interior duct sampling, additional cleaning of the system (including post-fumigation scrub-down inside the ducts), and removal of duct insulation.
- Perform cleanup activities, including construction and removal of isolation barriers, HEPA vacuuming operations, and application of liquid chlorine dioxide.
- Provide 24-hour support for decontamination and rescue operations at the Capitol Hill anthrax site.
- Provide analysis of spore strips placed in various locations during cleanup operations.
- Receive and perform daily observations of thousands of spore strips.
- Participate in and support program plan development relating to spore sterilization technologies for remediation of federal facilities.
- Develop experimental and field test plans and methodologies for characterizing and modeling spore-killing processes and kinetics and the factors that affect the efficacy of spore killing in field-scale applications.
- Establish laboratory systems for the measurement of gas-phase sporicidal effects at federal office and mail facilities.
- Provide laboratory analytical support for measurement of gas-phase sporicidal effects.
- Develop experimental and test plans and methodologies for assessing and validating spore-killing processes.
- Determine the concentrations of chlorine dioxide needed to decontaminate anthrax on Capitol Hill.
- Prepare 31,500 test strips containing a bacillus similar to anthrax and send them to Capitol Hill. The exposed strips will be sent to labs, and the results then will be sent to the University of California, Berkeley, to be included in a consolidated final report.
- Maintain sample management system software in a private, secure environment on the Internet. Provide EPA personnel and designated contractor personnel secure, controlled access to the database. This system could generate a large variety of reports to address particular questions about sampling results.
- Provide consulting services to the EPA on-scene coordinator in environmental remediation of anthrax-contaminated buildings in the Capitol Hill complex. Support includes data interpretation of the spore strips used to test the efficacy of the anthrax kill, data validation, review of documents, assistance in document preparation, and report writing.
- Coordinate efforts with the University of California, Berkeley.
- Provide equipment that includes biopaks, facemasks, oxygen cylinders, gel tubes, foam scrubbers, coolant canister foam, flow restrictors, and biopak service and retrofit kits.
- Provide Sandia foam and backpack dispensing units.
- Provide respirators with battery and cartridge.
- Provide air-purifying respirators.
- Provide engineering support during the assessment of the feasibility and design of the systems for fumigating the air handling return system.
- Provide training on proper procedures for handling, packaging, and decontaminating artifacts (paintings, sculptures, and other art forms) from the Hart Senate Office Building.
- Provide a self-contained breathing apparatus system.
- Provide an indoor air quality meter.
- Provide anthrax detection kits.

In addition to those named above, Heather Balent, Greg Carroll, Nancy Crothers, Richard Johnson, and Susan Lawes made key contributions to this report.
In September and October 2001, the first cases of anthrax bioterrorism occurred in the United States when letters containing anthrax were mailed to congressional leaders and members of the news media. As the cleanup of the Capitol Hill anthrax site progressed, EPA's estimates of the cleanup costs steadily rose. GAO was asked to describe (1) the costs EPA incurred to conduct the cleanup and how it was funded, (2) the extent to which EPA awarded the cleanup contracts competitively, (3) EPA's oversight of the contractors' work and any suggested changes to EPA's contracting practices, and (4) the extent to which EPA agreed to indemnify contractors against liability for potential damages related to the cleanup. EPA spent about $27 million on the Capitol Hill anthrax cleanup, using funds from its Superfund program. From the outset, many uncertainties were associated with the cleanup effort, including how to remove anthrax from buildings. EPA revised its November 2001 estimate of $5 million several times during the cleanup as the nature and extent of the contamination became fully known and the solutions to remove and properly dispose of the anthrax were agreed upon and carried out. To conduct the cleanup, EPA relied extensively on the existing competitively awarded Superfund contracts it routinely uses to address threats posed by the release of hazardous substances. Specifically, about 80 percent of the contract costs were incurred under 10 of EPA's existing Superfund contracts. EPA dedicated significant resources to overseeing the many contractors working on the Capitol Hill anthrax cleanup--including about 50 staff from nine regional offices experienced in leading and overseeing emergency environmental cleanups. Most often, these staff ensured that the contractors were on site and performing assigned tasks efficiently. EPA also assigned an administrative specialist to ensure that contract charges were accurate and reasonable. 
EPA's assessment of its emergency responses to the anthrax incidents, which focused on or included the Capitol Hill site, concluded that, overall, the agency had used its contracts effectively but that it could improve some areas of its contracting support. In addition, GAO's review of the Capitol Hill cleanup revealed inconsistencies in EPA's cost oversight practices among regions. For example, EPA uses a computerized system for tracking contractor costs for hazardous substance removal contracts, but regions use the system inconsistently for the technical assessment contracts also used during emergency responses. Consistent use of the system would likely improve the quality of EPA's nationwide contract data and enhance EPA's oversight capabilities. EPA agreed to indemnify two contractors with key roles in the fumigation of the Hart Senate Office Building with chlorine dioxide gas against liability that could have resulted if a third party had been injured by the contractors' release of a harmful substance, including anthrax.
We found that Interior continues to experience problems hiring and retaining sufficient staff to provide oversight and management of oil and gas activities on federal lands and waters. BLM, BOEM, and BSEE office managers we surveyed reported that they continue to find it difficult to fill vacancies for key oil and gas oversight positions, such as petroleum engineers, inspectors, geologists, natural resource specialists, and geophysicists. These managers reported that it was difficult to retain staff to oversee oil and gas activities because staff leave for higher salaries in the private sector. They also reported that high rates of attrition are a concern because some Interior offices have just one or two employees per position, so a single retirement or resignation can significantly affect office operations and oversight. Nearly half of the petroleum engineers that left BLM in fiscal year 2012 resigned rather than retired, suggesting that they sought employment outside the bureau. According to Office of Personnel Management (OPM) data, the fiscal year 2012 attrition rate for petroleum engineers at BLM was over 20 percent, or more than double the average federal attrition rate of 9.1 percent. We found hiring and retention problems were most acute in areas where industry activity is greatest, such as in the Bakken shale play in western North Dakota, because the government is competing there with industry for the same group of geologists and petroleum engineers. Interior officials cited two major factors that affect the agency’s ability to hire and retain sufficient staff to oversee oil and gas activities on federal leases: Higher industry salaries. BLM, BOEM, and BSEE office managers surveyed reported that they have lost potential applicants and staff to industry because it can pay higher salaries. 
Bureau of Labor Statistics data confirm that there is a wide and growing gap between industry and federal salaries for some positions, particularly petroleum engineers and geologists. For example, from 2002 through 2012, mean federal salaries for petroleum engineers have remained fairly constant at about $90,000 to $100,000 per year whereas private sector salaries have steadily increased from about $120,000 to over $160,000 during this same time period. The lengthy federal hiring process. BLM, BOEM, and BSEE officials surveyed reported that the federal hiring process has affected their ability to fill key oil and gas positions because it is lengthy, with multiple required steps, and that many applicants find other employment before the federal hiring process ends. We analyzed Interior’s hiring data and found that the average hiring time for petroleum engineers was 197 days, or more than 6 months, at BOEM and BSEE. BLM fared a little better; its average hiring time for petroleum engineers was 126 days, or a little more than 4 months. However, all hiring times were much longer than 80 calendar days—OPM’s target. According to BLM, BOEM, and BSEE officials, other factors have contributed to difficulties hiring and retaining key oil and gas oversight personnel, such as few qualified applicants in remote areas or areas with a high cost of living. Interior and its three bureaus—BLM, BOEM, and BSEE—have taken some steps to address hiring and retention challenges but could do more. Interior has used special salary rates and incentives to increase hiring and retention for key oil and gas positions, but use of these incentives has been limited. Interior has taken some steps to reduce the time it takes to hire oil and gas oversight staff but does not collect data to identify the causes of delays in the hiring process and opportunities for reducing them. 
Finally, Interior has taken some actions to improve recruiting, such as developing workforce plans to coordinate hiring and retention efforts, but this work is ongoing, and the extent to which these plans will help is uncertain. Special salary rates. For fiscal years 2012 and 2013, Congress approved a special 25 percent base pay increase for geologists, geophysicists, and petroleum engineers at BOEM and BSEE in the Gulf of Mexico. According to Interior officials in the Gulf of Mexico, this special pay authority helped retain some geologists, geophysicists, and petroleum engineers, at least in the near term. BOEM and BSEE requested an extension of this special pay authority through fiscal year 2014. In 2012, BLM met with OPM officials to discuss special salary rates for petroleum engineers and petroleum engineering technicians in western North Dakota and eastern Montana, where the disparity between federal and industry salaries is most acute, according to a BLM official. A BLM official told us that OPM requested that BLM provide more data to support its request. The official also told us that BLM submitted draft language to Congress requesting special salary rates through a congressional appropriation. According to Interior officials, all three bureaus are preparing a department-wide request for special salary rates to submit to OPM. Incentives. BLM, BOEM, and BSEE have the authority to pay incentives in the form of recruitment, relocation, and retention awards of up to 25 percent of basic pay, in most circumstances, and for as long as the use of these incentives is justified, in accordance with OPM guidance, such as in the event an employee is likely to leave federal service. However, we found that the bureaus’ use of these incentives has been limited. For example, during fiscal years 2010 through 2012, the three bureaus hired 66 petroleum engineers but awarded just four recruitment incentives, five relocation incentives, and four retention incentives. 
BLM awarded two of the four retention incentives in 2012 to help retain petroleum engineers in its North Dakota Field Office. OPM data showed that, in 2011, Interior paid about one-third less in incentive awards than it did in 2010. BLM officials cited various factors that contributed to the limited use of incentives, such as limited funds available for incentives. A BLM official also told us that there was confusion about an OPM and Office of Management and Budget (OMB) requirement to limit incentive awards to 2010 levels and that some field office managers were uncertain about the extent to which they were allowed to use incentive awards. Without clear guidance outlining when these incentives should be used, and a means to measure their effectiveness, we concluded that Interior will not be able to determine whether it has fully used its authority to offer incentives to hire and retain key oil and gas oversight staff. Hiring times. To improve its hiring times, Interior participated in an OPM-led, government-wide initiative to streamline the federal hiring process. In 2009, a team of hiring managers and human resources specialists from Interior reviewed the department’s hiring process and compared it with OPM’s 80 calendar-day hiring target. The team identified 27 action items to reduce hiring times, such as standardizing position descriptions and reducing the number of managers involved in the process. Interior and its bureaus implemented many of the action items over the past few years and made significant progress to reduce hiring times, according to Interior officials and agency records. For example, BSEE reduced the time to select eligible applicants from 90 to 30 days by limiting the amount of time allowed for managers to review and select applicants. A BLM official told us that the bureau is working to automate vacancy announcements to improve the efficiency of its hiring process. 
However, neither the department nor the three bureaus have complete and accurate data on hiring times that could help them identify and address the causes of delays in the hiring process. Beginning in 2011, Interior provided quarterly data on hiring times to OPM, calculated based on Interior’s personnel and payroll databases. However, we identified discrepancies in some of the data—for example, in some cases, hiring times were erroneously recorded as 0 or 1 day. In addition, none of the bureaus systematically analyze the data collected. For instance, BSEE and BOEM collect hiring data on a biweekly basis, but officials told us they use the data primarily to track the progress of individual applicants as they move through the hiring process. Likewise, a BLM official stated that the bureau does not systematically analyze data on hiring times. Without reliable data on hiring times, Interior’s bureaus cannot identify how long it takes to complete individual stages in the hiring process or effectively implement changes to expedite the hiring process. Recruiting. BLM, BOEM, and BSEE have taken some steps to improve recruiting. In 2012, BOEM and BSEE contracted with a consulting firm to draft a marketing strategy highlighting the advantages of employment at the bureaus, such as flexible work hours and job security. BOEM and BSEE used this marketing strategy to revise the recruiting information on their external websites and develop recruiting materials such as brochures and job fair displays. According to a BLM workforce strategy planning document, the bureau is considering contracting with a consulting firm to review its recruiting strategy. All three bureaus are also visiting colleges and universities to recruit potential applicants for oil and gas positions, and each has had some success offering student intern positions that may be converted to full-time employment. Workforce planning. 
Interior is participating in a government-wide initiative led by OPM to identify and address critical skills gaps across the federal government. The effort aims to develop strategies to hire and retain staff possessing targeted skills and address government-wide and department-specific mission-critical occupations and skill gaps. In March 2012, Interior issued a plan providing an overview of workforce planning strategies that it can use to meet emerging workforce needs and skills gaps within constrained budgets. As part of the next phase of this effort, Interior asked its bureaus to develop detailed workforce plans using a standardized model based on best practices used at Interior. Both planning efforts are ongoing, however, so it is too early to assess the effect on Interior’s hiring and retention challenges for key oil and gas positions at this time. BLM, BOEM, and BSEE are developing or implementing workforce plans as well. As we reported in July 2012, BOEM and BSEE did not have strategic workforce plans, and we recommended that the bureaus develop plans to address their hiring and retention challenges. BSEE has since issued a workforce plan, and BOEM officials told us that they expect to complete one in 2014. BLM issued a workforce planning strategy in March 2012 that outlined strategic objectives to address some of its key human capital challenges; however, this strategy does not include implementation steps, address challenges with the hiring process, or outline mechanisms to monitor, evaluate, or improve the hiring process, so it is too soon to tell whether BLM’s planning strategy will help the bureau address its human capital challenges. Moreover, we found that the bureaus’ efforts do not appear to have been conducted as part of an overarching workforce plan, or in a coordinated and consistent manner; therefore, the bureaus do not have a basis to assess the success of these efforts or determine whether and how these efforts should be adjusted over time. 
The BLM, BOEM, and BSEE officials that we interviewed and surveyed reported that hiring and retention challenges have made it more difficult to carry out their oversight activities. These officials stated that position vacancies have resulted in less time for oversight, and vacancies directly affect the number of oversight activities they can carry out—including the number of inspections conducted and the time for reviewing applications to drill. Officials at some BLM field offices told us that they have not been able to meet their annual inspection and enforcement goals because of vacancies. Of the 20 offices with inspector vacancies that we surveyed, 13 responded that they conducted fewer inspections in 2012 compared with what they would have done if fully staffed, and 9 responded that the thoroughness of inspections was reduced because of vacancies. Of the 21 BLM and BSEE offices with petroleum engineer vacancies, 8 reported that they reviewed fewer applications to drill in 2012 compared with what they would have done if fully staffed. BSEE officials told us that fewer or less-thorough inspections may mean that some offices are less able to ensure operator compliance with applicable laws and regulations and, as a result, there is an increased risk to human health and safety due to a spill or accident. According to a BSEE official, the longer federal inspectors are away from a site, the more likely operators are to deviate from operating in accordance with laws and regulations. Officials at each of the three bureaus cited steps they have taken to address vacancies in key oil and gas positions; specifically, reassigning staff from lower-priority to higher-priority tasks, borrowing staff from other offices, or increasing overtime. However, each of these steps comes at a cost to the agency and is not a sustainable solution. 
Interior officials told us that moving staff from lower to higher priority work means that the lower priority tasks—many of which are still critical to the bureaus’ missions—are deferred or not conducted, such as processing permits. Likewise, offices that borrow staff from other offices gain the ability to carry out activities, but this comes at a cost to the office that loaned the staff. With regard to overtime, BOEM officials reported that a heavy reliance on overtime was exhausting their staff. BLM and BSEE are developing and implementing risk-based inspection strategies—long recommended by GAO and others—as they work to ensure oversight resources are efficiently and effectively allocated; however, staffing shortfalls and turnover may adversely affect the bureaus’ ability to carry out these new strategies. In 2010, we reported that BLM routinely did not meet its goals for conducting key oil and gas facility inspections, and we recommended that the bureau consider an alternative inspection strategy that allows for the inspection of all wells within a reasonable time frame, given available resources. In response to this recommendation, in fiscal year 2011, BLM implemented a risk-based inspection strategy whereby each field office inspects the highest risk wells first. Similarly, BSEE officials told us that they have contracted with Argonne National Laboratory to help develop a risk-based inspection strategy. In our January 2014 report, to address the hiring challenges we identified, we recommended that Interior explore its bureaus’ expanded use of recruitment, relocation, retention, and other incentives and systematically collect and analyze hiring data. Interior generally agreed with our recommendations. Chairman Lamborn, Ranking Member Holt, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. 
If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions include Christine Kehr, Assistant Director; Mark Braza, Glenn Fischer, Michael Kendix, Michael Krafve, Alison O’Neill, Kiki Theodoropoulos, Barbara Timmerman, and Arvin Wu. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Interior employs a wide range of highly trained specialists and scientists with key skills to oversee oil and gas operations on leased federal lands and waters. GAO and others have reported that Interior has faced challenges hiring and retaining sufficient staff to carry out these responsibilities. In February 2011, GAO added Interior's management of federal oil and gas resources to its list of programs at high risk of fraud, waste, abuse, and mismanagement in part because of Interior's long-standing human capital challenges. This testimony and the January 2014 report on which it is based address (1) the extent to which Interior continues to face challenges hiring and retaining key oil and gas staff and the causes of these challenges, (2) Interior's efforts to address its hiring and retention challenges, and (3) the effects of hiring and retention challenges on Interior's oversight of oil and gas activities. To do this work, GAO surveyed all 44 Interior offices that oversee oil and gas operations, of which 40 responded; analyzed offshore inspection records and other documents; and interviewed agency officials. The Department of the Interior continues to face challenges hiring and retaining staff with key skills needed to manage and oversee oil and gas operations on federal leases. Interior officials noted two major factors that contribute to challenges in hiring and retaining staff: lower salaries and a slow hiring process. In response to GAO's survey, officials from a majority of the offices in the three Interior bureaus that manage oil and gas activities—the Bureau of Land Management (BLM), the Bureau of Ocean Energy Management (BOEM), and the Bureau of Safety and Environmental Enforcement (BSEE)—reported ongoing difficulties filling vacancies, particularly for petroleum engineers and geologists. Many of these officials also reported that retention is an ongoing concern as staff leave for positions in industry. 
Bureau of Labor Statistics data confirm a wide gap between industry and federal salaries for petroleum engineers and geologists. According to Office of Personnel Management (OPM) data, the fiscal year 2012 attrition rate for petroleum engineers at BLM was over 20 percent, or more than double the average federal attrition rate of 9.1 percent. Field office officials stated that attrition is of concern because some field offices have only a few employees in any given position, and a single separation can significantly affect operations. Additionally, Interior records show that the average time required to hire petroleum engineers and inspectors in recent months generally exceeded 120 calendar days—much longer than OPM's target of 80 calendar days. Interior and the three bureaus—BLM, BOEM, and BSEE—have taken some actions to address their hiring and retention challenges, but they have not fully used their existing authorities to supplement salaries or collect and analyze hiring data to identify the causes of delays in the hiring process. For instance, BLM, BOEM, and BSEE officials said that recruitment, relocation, and retention incentives are key options to help hire and retain staff, but the bureaus' use of these incentives to attract and retain petroleum engineers and inspectors has been limited for various reasons. Moreover, Interior and its bureaus have taken some steps to reduce hiring times, but they do not have complete and accurate data on hiring times. For instance, while BSEE and BOEM collect hiring data on a biweekly basis, the data are used primarily to track the progress of individual applicants as they move through the hiring process. Likewise, a BLM official stated that the bureau does not systematically analyze data on hiring times. Without reliable data on hiring times, Interior's bureaus cannot identify how long it takes to complete individual stages in the hiring process or effectively implement changes to expedite the hiring process. 
According to BLM, BOEM, and BSEE officials, hiring and retention challenges have made it more difficult to carry out oversight activities in some field offices. For example, many BLM and BSEE officials GAO surveyed reported that vacancies have resulted in a reduction in the number of inspections conducted. As a result of these challenges, bureau officials cited steps they have taken to address vacancies in key positions, such as borrowing staff from other offices or using overtime, but these are not sustainable, long-term solutions. In its January 2014 report, GAO recommended that Interior explore its oil and gas management bureaus' expanded use of recruitment, relocation, retention, and other incentives and systematically collect and analyze hiring data. Interior generally agreed with GAO's recommendations. GAO is not making any new recommendations in this testimony.
The U.S. Commercial Service, within Commerce’s International Trade Administration (ITA), plays a leading role in the federal government’s efforts to encourage and promote U.S. nonagricultural exports. CS was founded in 1980, when overseas commercial work was transferred from the Department of State (State) to CS. The purpose of CS’s export promotion programs is set out in its statutory authority. CS’s mission is to maximize U.S. competitiveness, enable economic growth for U.S. industries, and enhance job creation by helping U.S. firms take advantage of opportunities abroad through a global network of international trade professionals. CS operates 108 domestic offices at U.S. Export Assistance Centers (USEACs) and maintains 124 international offices in 75 countries that represent the significant export markets for U.S. goods and services. CS trade specialists at these offices are tasked with assisting U.S. firms and representing U.S. commercial interests abroad. In those countries where CS does not have a presence, State represents U.S. commercial interests and assists U.S. exporters. State and CS are in the process of negotiating a memorandum of understanding to formalize this arrangement. Currently, State can offer certain export promotion services, but does not use CS product and customer service standards or CS pricing policies. The general goal of CS is to use its network of professionals and export promotion services to broaden and deepen the U.S. exporter base and help U.S. firms make sales in international markets. CS reports that it helps thousands of firms make export sales each year. Furthermore, according to CS, the majority of these sales are by SMEs. Its services include providing market research and supporting trade events. The Gold Key Service, which helps firms identify international business partners, is one of its most popular services. CS provides these services to a variety of customers. While private U.S. 
firms (particularly SMEs) are CS’s main customers, CS also delivers services to other customers, including state and local governments. Many U.S. states maintain state trade offices that provide varying levels of export assistance, usually focusing on increasing exports from firms located in their states. Most are managed by state economic development agencies and funded by states’ operating budgets. Many state trade offices maintain both domestic and overseas offices to deliver services. In addition, many states’ trade offices partner with CS to ensure client firms have access to services the state cannot offer, particularly in those foreign markets where the states lack offices. CS is authorized to charge a user fee for its export promotion services, and CS has adjusted its user fee structure in recent years. Prior to fiscal year 2005, CS did not have an agencywide user fee schedule for its export promotion programs, as each overseas post decided what user fees firms were charged for the export promotion services the posts delivered. As part of its user fee review in fiscal year 2005, CS sought to determine the user fees it would have to charge to recover the full costs of its services. CS determined its user fees would have to rise significantly to recover full costs, causing concern among firms, business leaders, and CS staff. However, rather than implement a full-cost-recovery schedule, in 2005 CS adopted an agencywide user fee schedule with fees for most services set to recover only a portion of program costs. CS and the majority of states provide many of the same types of export promotion services, such as export training, trade missions, and market research. Firms can choose to go to CS, states’ trade offices, or other providers to get these services. However, states have limited budgets and staff to assist their firms. 
Partly as a result of this limited capacity, most states reported that CS’s services are important to their export promotion capabilities and have partnered with CS’s offices. Both CS and most states’ trade offices focus their export promotion efforts on SMEs. CS and the majority of states’ trade offices provide many services free of charge but charge fees for certain services. In addition, to facilitate access to CS’s programs, about a third of the states responding to our survey indicated that they provide grants or payments to firms from their states to defray the costs of CS’s fee services. CS offers a range of standardized and customized services to help firms export. The standardized CS services, including Gold Key Service, International Partner Search, and International Company Profile, are prepared and delivered to firms in approximately the same manner around the world. These services offer firms assistance in identifying and meeting potential overseas buyers and distributors and in performing due diligence on prospective foreign buyers. The customized services, including Customized Market Research, QuickTake, seminars/webinars, and trade promotion events and trade missions, are tailored to fit the specific needs of an individual firm in a specific export market and vary based on the scope of work. The majority of the states’ trade offices that responded to our survey provide many of the same types of export promotion services as CS to assist firms interested in exporting. According to CS officials, no state provides services that compare with the depth and extent of CS’s export promotion services. According to SIDO, states’ comparative advantage is their local presence and their ability to specialize in the major industries in their states and on the export markets those industries typically target. Services that most states provide and that are similar to those of CS include training programs and seminars, as well as assistance in participating in trade shows and missions. 
Other services states often cited include market research, agent and distributor searches, and foreign company background checks. According to CS officials, CS provides its services to a national client base and provides services in numerous markets where states have little or no presence. For example, CS officials explained that while many states provide “market research” services, these services may cover fewer markets and provide less detail than the market research CS provides. In addition, CS said that its missions target an industry segment more broadly and deeply than is possible for any state. Further, CS provides some services for which there are no state counterparts, such as government-to-government advocacy. Table 1 shows the major export promotion services offered by states’ trade offices and CS. States’ trade offices have small staffs and budgets relative to CS. Consistent with its role as a federal entity promoting U.S. interests abroad, CS has a national and worldwide presence while states have a local presence and operate in fewer countries. CS has 493 domestic and 991 overseas staff who are currently engaged in export promotion activities in 47 states plus Puerto Rico and 75 countries. (See app. II for CS’s domestic and international locations.) The 45 states’ trade offices responding to our survey have a combined total of 275 domestic staff and 214 overseas staff. According to SIDO, states’ trade offices have 245 offices in 34 countries. Half the states responding to our survey have five or fewer full-time domestic staff and two or fewer full-time overseas staff. Table 2 shows the differences among the states in terms of the resources they devote to export promotion activities. Both CS and the states’ trade offices have been experiencing reductions in their staffing levels. Based on CS’s data, it has experienced a 5 percent reduction in domestic staff and 3.5 percent reduction in overseas staff from fiscal year 2007 to 2008. 
Thirty-two states’ trade offices, or almost 73 percent of those that had a basis to judge, said that their overall staffing level has decreased or stayed the same over the past 5 years. While reliable trade promotion budget data are not available for the 50 states’ trade offices, sources estimate that the combined annual trade budget of the 50 states is significantly lower than CS’s annual budget, perhaps less than half. CS’s total budget for export promotion was about $235 million in fiscal year 2008 and is projected at $237.7 million for fiscal year 2009 (a less than 1 percent increase in nominal terms). Information about states’ export promotion budgets is difficult to obtain and may not be fully reliable. States’ commerce departments or economic development agencies usually run states’ trade promotion programs and foreign investment recruitment programs, and some states do not disaggregate the budget data between the two functions. CSG estimates that the 50 states spend a combined total of about $100 million each year helping state businesses create jobs at home by selling products abroad. SIDO estimates that states’ budget for both trade and foreign investment recruitment was about $103 million in 2008. Current state trade budget data are available for only 27 states through SIDO’s survey. Based on these data, the average state export promotion budget was $1.4 million in 2008, and the median was $775,000, ranging from Pennsylvania at about $10 million to Vermont at about $170,000. (Also see table 2.) In responding to our survey, some states’ trade offices made a variety of observations regarding leveraging resources between the states and CS and the limited resources available for export promotion programs. For example, one state said that budget cuts have resulted in its decision not to duplicate services offered by CS. 
Another state that has recently discontinued its export promotion programs expressed interest in having the USEAC colocate within its economic development agency while another state said colocation costs incurred by individual states could be offset by discounted fees for CS services. With regard to CS’s overseas offices, some states and SIDO noted that CS is closing offices in developed countries (and shifting resources to developing countries), and this leaves established global markets for SMEs without CS presence in some cases. SIDO is concerned about what it views as the limited resources CS has available for export promotion programs. SIDO believes that U.S. firms are at a competitive disadvantage compared with firms in competitor countries whose governments have larger export promotion budgets and has called for a 50 percent increase in CS’s budget. States’ trade offices collaborate with CS to help provide firms export promotion services, and some states’ domestic offices are colocated with CS’s staff at a USEAC with a goal of helping firms access CS services. More than three-quarters of states’ trade offices (36 of 43) that had a basis to judge viewed Commerce’s services as very or moderately important to their states’ export promotion capabilities. (See fig. 1.) According to states’ trade offices we visited, as well as SIDO, most states rely on or partner with CS to obtain export assistance in overseas markets where the states have no representation. Where the states have representation, they rely on their own services to assist their exporters. Activities in which states’ trade offices partnered most with CS included trade shows and trade missions, seminars, training programs, conferences, and event planning. In addition to partnering with CS, some states’ trade offices also reported working closely with their local USEACs. 
For example, one state named its USEAC a “key partner” with which “all export related programs, seminars, and conferences are planned, coordinated, and implemented.” Another state said that its USEAC serves on the state’s committee that helps select the winners of the Governor’s Awards for Excellence in exporting. In addition, 11 states’ trade offices are colocated with USEACs. According to officials of one state trade office colocated with a USEAC, colocation has helped the state partner with CS to provide services to firms and outreach to potential client firms. Just over three-quarters of states’ respondents that had a basis to judge (34 of 42) reported that they have partnered with CS on a variety of activities that are not part of CS’s formal services, which require a signed cooperation agreement. A few states cited frequently working with CS, while the majority of respondents identified only a few activities they have conducted jointly with CS during the last 3 years. For example, some states we visited informed us that they have conducted joint company visits and counseling sessions with their local USEACs. Some state trade offices work with CS to facilitate state-sponsored trade missions and trade events and are sometimes customers of CS. Our survey revealed that some states’ trade offices directly purchased some of CS’s services during the last 3 years. Figure 2 indicates the services states’ trade offices have purchased directly from CS. Gold Key Services and seminars/webinars were the services states’ trade offices most often reported purchasing directly from CS. Half of the states’ trade offices reported purchasing Gold Key Service from CS, and less than half reported purchasing seminar/webinar services directly from CS. Three states we visited reported purchasing Gold Key services directly from CS to support overseas trade missions.
One state said that it has purchased CS’s Gold Key Service to identify potential consultants and representatives overseas, and another state said that it has purchased the Gold Key Service to complement a state-led trade mission. In responding to our survey, some states’ trade offices said that the collaboration between them and CS could be improved to provide greater benefits to their client firms. For example, states and CS target the same client base within their states, and some states’ trade offices and SIDO said that if sharing of client contacts and client needs were allowed, improved information sharing would greatly increase effectiveness and reduce duplication of efforts, to the benefit of exporting SMEs. The types of information they sought included USEAC office and staff goals and CS’s foreign national contractors’ list. Regarding sharing client information, CS officials said they must adhere to federal regulations, which prohibit the sharing of business proprietary information with non-U.S.-government agencies. Some states’ trade offices said that they have partnered with other states’ trade offices, chambers of commerce, world trade centers, universities, and other entities to share costs for export promotion services. For example, some states said that they have obtained sponsors from both the public and private sectors to cover some of their costs, such as governor-led trade missions and agricultural exports. In addition, other states said that they shared costs with several state entities and organizations to cover programs, such as export training seminars, conferences, and forums, while other states said that they have shared the costs of domestic and overseas trade offices or contractors with others.
Similarly, most state trade offices focus their export promotion efforts on SMEs, with 32 of the 42 states responding to our survey question reporting that more than three-quarters of their budgets were used to target the needs of SMEs. In addition, approximately 79 percent of the 33 states’ trade offices that responded to SIDO’s 2008 survey of states’ trade offices reported that SME manufacturing firms were the primary customers for their export promotion services, and approximately 18 percent considered very small manufacturing firms (50 or fewer employees) their most important customers. According to CS, most of its services are sold to SMEs, with about 78 percent sold to SMEs in fiscal year 2008. As an incentive for SMEs to purchase its services, CS charges them less than large firms for its standardized and customized services. In May 2008, CS implemented its current cost-based user fee schedule, which charges SMEs only a proportion of the fees large firms pay for the same services. The 2008 user fee schedule introduced a reduced one-time incentive user fee for new-to-export (NTE) SMEs using CS standardized services. Also, under the 2008 user fee schedule, CS extends to states’ trade offices the SME user fee rates for standardized and customized services when they purchase CS services for their own use. CS currently offers firms five standardized and nine customized services. CS’s standardized export promotion services have fixed user fees, while the user fees for CS’s customized services vary based on the scope of service provided. In addition, according to CS, a significant amount of trade specialists’ time is spent providing export counseling, advocacy, and generic market research, for which CS does not charge user fees. The user fee schedule CS implemented in May 2008 replaced the user fee schedule it implemented in 2005. The 2008 user fee schedule raised the user fees for large firms, while maintaining the level of user fees for SMEs.
Figure 3 compares CS’s 2005 and 2008 user fee schedules for CS standardized and customized services. CS’s data show that it sold a total of 19,906 services in fiscal year 2008. In addition, CS reports that it collected approximately $10.2 million for these standardized and customized services. Table 3 shows the number of services sold and collections by type of service for fiscal year 2008. A majority of states do not charge fees for most types of services they offer, and they provide some services for free for which CS charges a fee. At least 23 states responding to our survey do not charge fees for 7 of the 11 types of services, including export counseling, market research, market entry strategy development, product analysis, and pricing information. (See fig. 4.) In contrast, CS charges a fee for similar services in some cases. For example, while CS charges user fees for agent and distributor searches under its Gold Key Service and International Partner Search services, most states reported charging no fee for this type of service. In addition, most of the six states’ trade offices we visited told us they provide free export promotion services for which CS charges a fee, such as foreign company background checks and foreign agent and distributor searches. However, the scope and coverage of states’ services may differ from CS’s. For example, some state trade office officials told us that while they provide free services similar to CS’s fee-based services, these services are often available only in limited overseas markets and are not as comprehensive as the services CS provides. For those services for which states’ trade offices reported charging a fee, most states charge partial fees rather than full fees. Most states reported charging a fee for trade shows and foreign trade missions, as well as for training programs and seminars, and most reported charging a partial fee to recover part of the cost of these services.
For example, for foreign trade missions, 28 states charge a partial fee, and 8 states charge a full fee. We did not request information on the fees states’ trade offices charge for their services or their annual fee collections. According to SIDO, the states’ fees vary widely. To help make export promotion services more accessible to potential exporters from their states, some states’ trade offices offer grants. Of the 45 states that responded to our survey, 19 reported providing SMEs with grants, and 16 reported providing SMEs with grants or direct payments that could be used to defray the costs of CS’s export promotion programs and services. (See fig. 5.) The rate at which states defray costs or reimburse SMEs for their participation in CS’s programs varies. States’ grants generally range from $1,000 to $5,000 per firm and are often intended for participation in trade shows, trade missions, or other state-sponsored events. States’ trade offices cannot determine what portions of the grants (or direct payments) are used to defray the cost of CS’s user fees; SMEs may use states’ grant funding for a range of other eligible expenses, such as travel and logistical expenses. According to SIDO, states’ funding of CS’s services is highly dependent upon their affordability relative to services available from private consultants and other sources. Three of the six states’ trade offices we visited offer such grants: Connecticut, Mississippi, and Pennsylvania. According to these states’ trade officials, grants are a useful tool for reaching out to SMEs that might not have considered exporting and might not be familiar with the costs of export assistance. However, these officials explained that grants in and of themselves often do not determine whether SMEs participate in export promotion programs. CS needs better information to maximize the efficient and effective operation of its export promotion programs and to ensure there is a sound basis for setting its user fee rates.
CS decided to base its export promotion user fees on program costs, though it has a yearly legislative exemption from having to recover full costs, and it attempts to recover only a portion of the cost of its services. Nevertheless, CS did not document its methodology, calculations, and support for the assumptions it used to determine the full cost of each type of service; thus, CS cannot ensure that its methodology is based on accurate information or is consistently applied from year to year, which would allow officials to make sound management decisions about its services and user fees. In addition, CS’s cost estimates are not complete. For example, CS’s cost estimates did not include certain costs paid by other entities on behalf of CS, as federal accounting standards require. Also, CS used potentially outdated and inaccurate 2005 staff time data to estimate its program costs, upon which it based the 2008 user fee structure. Complete and accurate full cost information would assist CS and the Congress in making decisions about resource allocations, evaluating program performance, and improving program efficiency. Finally, CS did not document its procedures and assumptions for setting its user fees, including how it determined incentive rates for SMEs, thus weakening the link between costs and the user fees. CS’s annual appropriation permits it to charge user fees but is silent with respect to setting and revising user fees. Nevertheless, there should be a sound basis for any user fees charged. According to CS officials, CS made a policy decision to use OMB Circular A-25, including its direction to recover full costs, as a guide for establishing its 2008 user fee structure, but it does not attempt to recover the full cost of its services. In annual appropriations since the 1990s, certain provisions of the Mutual Educational and Cultural Exchange Act (MECEA) have applied to ITA’s trade promotion activities.
Through these MECEA provisions, CS is authorized to accept “contributions” from firms. Under this statutory authority, CS charges a fee for services provided in its export promotion programs, but the statute is silent with respect to setting and revising user fees. Furthermore, since fiscal year 2006, the Congress has exempted CS from the requirements of OMB Circular A-25 as part of ITA’s annual appropriation. However, according to the CS officials who established the user fee policy, CS nevertheless implemented the 2008 user fee schedule with the goal of moving toward OMB’s full cost recovery policy. Our review of various documents and interviews with agency officials indicated that there had been confusion over whether CS is required to comply with Circular A-25. OMB has expressed concern about the adequacy of CS’s understanding of its costs, and CS has responded. In 2003, OMB found that although CS charges user fees for some services, it did not have a consistently applied pricing strategy for its services, and its infrastructure for capturing cost information was inadequate for making informed decisions. In addition, in 2004, Commerce’s Office of Inspector General reported that CS was not in compliance with Circular A-25 and recommended that CS work with OMB to comply with the circular. In response, CS undertook efforts to determine the full cost of its services and to comply with Circular A-25. Although CS has received a yearly legislative waiver explicitly exempting it from these requirements, in May 2008 CS submitted a request to OMB for a permanent waiver from the full cost recovery provisions contained in the circular.
In its request to OMB, CS explained that “the user fee schedule moves us closer to the intent of the cost recovery provisions of the OMB Circular without making our services out of reach to SMEs who have less financial flexibility.” According to OMB officials, OMB reviewed the 2008 CS user fee schedule based on estimates of fee collections, which OMB considered reasonable; however, OMB did not review details of the methodology CS used to determine costs and establish the user fees. Following the review, OMB found that “the new fee structure will increase collections and moves toward the goals of the circular.” OMB stated it will continue to work with CS through the executive budget process to ensure the user fee strategy is evolving properly to meet the provisions of OMB Circular A-25. Although OMB did not approve the request for a permanent waiver, OMB told CS the 2008 user fee structure was acceptable under Circular A-25 for fiscal year 2009. OMB Circular A-25, User Charges, establishes, among other things, guidelines for federal agencies for assessing user fees for government services. It provides information on the scope and types of activities subject to user fees and the basis on which user fees are to be set. The circular also provides guidance for agency implementation of user fees and collections and outlines several policy objectives:

- Ensure that each service, sale, or use of government goods or resources provided by an agency to specific recipients be self-sustaining.

- Promote efficient allocation of resources by establishing charges for special benefits provided to the recipient that are at least as great as the costs to the government of providing the special benefits.

- Allow the private sector to compete with the government without disadvantage in supplying comparable services, resources, or goods where appropriate.
In determining full costs to set user fees, agencies may use cost accounting systems—systems designed to consistently produce reliable cost information—or a cost-finding methodology, which uses cost studies or cost analyses to develop cost information. The methodology the agency uses to determine costs should be appropriate for management’s needs and the environment in which the agency operates. Understanding the full costs of federal programs, including CS’s export promotion programs, is important for several reasons. First, management needs reliable cost information to make resource decisions and to find and avoid waste and inefficiencies. For example, using full cost information, management can decide to reduce resources devoted to activities that are not cost-effective. In addition, such information allows managers to compare cost changes over time, identify their causes, and reduce excess costs. Second, management needs reliable and complete cost information to assess the extent to which user fees recover the proportion of full costs intended. Third, the Congress and the public can use full cost information to evaluate the performance of federal programs and compare their costs and benefits. For example, full cost information assists the Congress in making decisions about allocating federal resources and when authorizing and modifying programs. In 2005, CS developed a cost-finding methodology to attempt to determine the full costs of its export promotion services and, based on its 2005 cost estimates, increased the user fees of one of its most popular services, International Company Profile, to full cost recovery for all firms. CS officials explained that CS is currently adopting a new cost accounting system used throughout Commerce, which they stated would improve CS’s cost accounting.
However, according to CS officials, this new cost accounting system would not be used to apply salary costs to activities or attribute overhead costs to determine the full cost of services. While CS has taken steps to move toward full cost recovery, CS officials told us they balance this objective with trying to ensure that the user fee schedule keeps CS’s services accessible to SMEs. CS officials decided to provide lower incentive rates to SMEs, which cover only a portion of the full costs of these services. The 2008 fee schedule seeks to charge large companies “full cost.” For example, CS charges large firms a $70 per hour rate and charges SMEs approximately 35 percent of that rate, or $25 per hour, for the cost of staff time required to deliver standardized and customized services. CS charges both large firms and SMEs the full cost of third-party charges needed to deliver a CS service, such as translation and transportation services. CS relied on a cost-finding methodology to determine full costs, according to CS officials; however, CS did not document the methods, calculations, and support for the assumptions it used to estimate the full cost of each type of service, as called for by federal accounting standards. Instead, CS officials generally described their cost-finding methodology to us, which they said was partly based on the one used for the 2005 user fee schedule, and provided limited documentation of its cost templates. (See table 4.) CS did not document its methods, assumptions, and cost calculations for each service. Better information would raise CS’s awareness of the composition of its costs, and changes to those costs, and allow it to better control and reduce costs where possible and to evaluate program performance. For example, such documentation could allow CS to compare costs among alternatives, such as whether to provide a service in-house or contract it out or whether to continue or eliminate a service.
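The two-tier hourly rate described above can be illustrated with a minimal sketch; the $70 and $25 hourly rates and the full pass-through of third-party charges come from this report, while the service inputs below (staff hours, third-party costs) are hypothetical:

```python
# Illustrative sketch of CS's two-tier hourly fee structure as described
# in this report: large firms pay $70 per staff hour, SMEs pay roughly
# 35 percent of that rate ($25 per hour), and both pay third-party charges
# (such as translation or transportation) at full cost. The example inputs
# below are hypothetical, not actual CS service data.

LARGE_FIRM_RATE = 70.0  # dollars per staff hour charged to large firms
SME_RATE = 25.0         # roughly 35 percent of the large-firm rate

def estimated_fee(staff_hours, third_party_charges, is_sme):
    """Estimate the fee for one service delivery."""
    rate = SME_RATE if is_sme else LARGE_FIRM_RATE
    return staff_hours * rate + third_party_charges

# A hypothetical service requiring 10 staff hours and $150 in third-party costs:
large_fee = estimated_fee(10, 150.0, is_sme=False)  # 850.0
sme_fee = estimated_fee(10, 150.0, is_sme=True)     # 400.0
```

One consequence of this structure is that the SME discount applies only to the staff time component, so services dominated by third-party costs carry a smaller effective discount.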
Without complete documentation and support for the specific methodology and information CS used as a basis for determining costs, CS cannot be sure that its chosen cost assignments are reasonable and based on accurate information. In addition, it is not possible to ensure that the methodology is consistent from year to year, which would allow CS to make sound decisions about its services and user fees. The risk of overestimating or underestimating costs may be reduced if CS clearly documents its methods for accounting for program costs and the assumptions used to project future costs. Such documentation would help CS assess whether its estimates are aligned with changes in costs; this is important so that the user fees recover the intended portion of full costs and CS does not charge firms more or less than intended. However, CS did not document how it assigned costs to each service in enough detail to allow CS staff and other knowledgeable persons to assess these procedures and determine the accuracy of the information used. Furthermore, the lack of documentation makes it difficult to ensure that staff are properly trained and consistently apply the methodology so that it produces accurate cost information that can be compared from year to year. Finally, the lack of full cost information makes it difficult for CS and the Congress to accurately evaluate the performance of CS services in light of the true overall costs and determine whether resources are rationally allocated to CS services. Notwithstanding the lack of documentation, we found that CS’s cost estimates do not accurately reflect full costs. First, CS did not include certain costs paid on behalf of CS by other entities, as federal accounting standards require.
Without consideration of costs paid by other federal entities, CS’s cost estimates do not reflect full costs to the federal government and are misleading for CS officials and others using that information to make decisions about resource allocations and changes in programs. Moreover, because CS has chosen to base its user fees on the full cost of its services, it needs a reliable accounting of total costs when setting user fees so that they cover the intended share of the cost of its services. According to CS officials, CS did not include certain retirement benefits to be paid by the Office of Personnel Management, including the costs of pensions and health and life insurance, in determining the full costs of its export promotion services. CS estimated the annual cost of these benefits to be approximately $17 million in fiscal year 2008. According to federal accounting standards, full costs should include the cost of such employee retirement benefits. Second, CS used potentially outdated and inaccurate information about staff time spent to estimate program costs and set user fees. These were key data that CS used to assign costs to the activities required to deliver its standardized services. CS officials explained that, as part of its cost-finding methodology, CS surveyed staff to determine the specific step-by-step activities and time staff spent to deliver services and used these data to develop the cost templates to estimate the full costs of each standardized service. However, CS officials stated that, for its most recent user fee adjustment in 2008, CS relied on the survey data it collected in preparation for its prior adjustment of user fees in 2005 and did not update the survey data to ensure their reliability. Thus, these data were potentially outdated and inaccurate.
CS officials explained that they assumed the activities and staff time required to deliver services had not changed significantly, but they did not justify or provide documentation supporting the assumption that the data were reliable in 2008. In addition, we found the accuracy of CS’s 2005 survey data to be questionable. These data were not based on an actual accounting of staff time, according to CS officials. Instead, CS officials explained that these data were based on staff’s estimates of the amount of time they thought they spent, on average, performing specific tasks to deliver a service. Staff reported widely divergent time estimates for the same activities, which raises concerns about how accurately staff estimated their time. For example, staff time estimates for the activity “identifying and contacting potential partners” ranged from 1 hour to 20 hours, and for the activity “final debrief of the client,” the range was 15 minutes to 4 hours. CS officials told us they were satisfied that these estimates were reasonably accurate; however, based on our statistical analysis, we disagree and believe CS did not sufficiently explain and document support for this assumption. Federal accounting standards recognize the importance of collecting accurate cost information. For example, federal accounting standards state that reliable information on the costs of federal programs and activities is crucial for effective management of government operations. Without support for the assumptions behind its staff time estimates, CS cannot be sure that the cost assignments it used to determine costs were accurate. In addition, inaccurate cost information can skew fee-setting decisions, so CS needs reliable information to ensure that the user fees are aligned with any changes in staff productivity and recover CS’s intended share of program costs. CS does not seek to recover full costs from all firms under its 2008 user fee structure.
CS offers lower fees to SMEs as an incentive to purchase CS services, and CS officials explained that factors other than costs contributed to their formulation of the 2008 user fee structure. CS seeks to recover only a proportion of full costs from SMEs, with the remainder of the costs covered by CS’s annual budget appropriation. Lower user fees are an incentive for SMEs to use CS services. However, CS did not sufficiently support and document the methods and assumptions it used, particularly with regard to the lower user fees for SMEs under the 2008 user fee structure. For example, according to CS officials, CS set the current SME-level user fees based on the historical proportion of costs—approximately 35 percent—recovered by the old user fees charged to SMEs. However, CS did not document how it determined this 35 percent incentive level or the level of the newly introduced new-to-export incentive user fees. Instead, CS officials told us they set the level of the incentive user fee for new-to-export SMEs based on their perception of what would constitute a reasonable discount while still signaling that the services are valuable. These officials also stated that the new-to-export incentive user fees were set at the same $350 for the most popular SME services to eliminate potential confusion among customers. They explained that CS used informal client and stakeholder feedback, as well as program counts and collections data, to assist in establishing the user fees SMEs are charged for standardized services, but CS did not document this. As a result, CS cannot demonstrate how its cost estimates are linked to the user fees it charges different sizes of firms for each of its services. This information would allow comparisons to inform management and program staff decisions, such as whether to adjust user fees, perform a project in-house or contract it out, accept or reject a proposal, or continue or eliminate a service.
According to federal internal control standards, significant events, which can include key decisions about user fees, are to be clearly documented. Transparent procedures can contribute to an improved understanding of the decisions made to establish the user fees and the basis on which those decisions were made. For example, because CS’s cost-based user fees represent a charge for a specific service received, stakeholders may expect a change in the user fees firms are charged to be related to a change in the true cost of providing services. The extent to which CS’s user fees affect SMEs’ use of its export promotion programs is unclear because CS lacks comparable and reliable historical data on the fees charged to its customers and has only limited disaggregated data on services sold by company size and type of customer. CS officials informed us that they have performed only limited studies of customer demand but that CS has recently begun to take steps to improve the quality of the data it collects to better evaluate its customer base. Because state governments play a potentially important role in helping their businesses compete in the global economy, and because they are also partners with and customers of CS, we obtained the states’ trade offices’ views on the user fees CS charges for some of its services. States’ trade offices’ views of the 2005 and 2008 user fee schedules and their projected future use of CS services varied. CS projects a 10 percent increase in SMEs’ total demand for its services in fiscal year 2009 based on its new user fees, but support for this projection is unclear. Factors other than fees, such as the availability and quality of comparable services from private providers, may affect SMEs’ use of CS services. CS lacks reliable and sufficient data on its export promotion fee-based services to evaluate its customer base.
We identified several limitations with regard to CS’s data on (1) the fees charged to its customers, (2) the characteristics of its customers, and (3) purchases by location and type of service. CS is taking steps to improve the quality of the data it collects, as well as the integration of its customer data systems, but its officials acknowledged that the Client Tracking System (CTS) may not be fully operational until well into 2009 or beyond. According to GAO’s Standards for Internal Control in the Federal Government and OMB Circular A-123, Management’s Responsibility for Internal Control, for an entity to run and control its operations and achieve its objectives, it must collect and process relevant, reliable, and timely data relating to internal as well as external events. In addition, effective information technology management is critical to achieving useful, reliable, and continuous recording and communication of information. CS lacks comparable and reliable historical data on the fees charged for each service to measure the past and potential effects of user fee changes on its SME customers and to ensure that it is charging them the correct fees. Prior to 2005, according to CS officials, there were no set fees, and each post decided what to charge customers for its export promotion services. In addition, according to CS officials, prior to December 2007, every overseas post had its own database of customers, resulting in 80 databases, as well as domestic databases that did not communicate with each other. According to CS, it currently uses two main systems (the eMenu and the CTS) to collect and track data on the programs and services it offers its customers. We observed demonstrations of both these systems in August 2008. While the systems have many useful features and represent promising directions for CS to take, we identified several limitations with regard to computer database design and internal controls.
In our review of the data, we noticed instances where companies received a bill that was much larger than the advertised service fee, but the reason (possibly that extra days and add-on services were included) was not documented. More comparable and reliable user fee data could help CS determine how changes in its user fees affect SMEs’ demand for its services. CS is limited in its ability to disaggregate the firms that purchased fee services by company size or by export status (whether the firms are new to export or had purchased prior export services from CS). Such information would be useful for making managerial decisions and determining which products and services are in demand and which products are purchased less frequently by CS’s different customers. A lack of accurate customer information and good procedures creates the risk of charging CS customers a fee that is inconsistent with their company size or export status. According to CS officials, the eMenu initially relies on the customers to self-report their size and export status. CS officials stated that its trade specialists could verify company size by consulting the Harris database and export status by consulting the CTS. When we examined data for 2008 from the CTS, we found that 16 percent of the companies were listed as being of “unknown size.” In addition, there were numerous inconsistencies in the designation of company size, with more than 200 instances of companies being designated with different sizes in different records. In addition, based on our review of the data, CS has not yet addressed the deficiencies with prior years’ data. The 2006 and 2007 databases did not identify company size for at least one-quarter of the companies per year. Prior to December 2007, according to CS officials, the size of firms purchasing services was not a mandatory field in the databases. Determining whether a company meets CS’s new-to-export status would require a manual review of the records.
Even with a manual review, the accuracy of an export status designation would depend on the thoroughness, completeness, and consistency of the entries made by trade specialists and others. Yet our review of CS data raised questions about the manual review: we found at least 30 instances in which companies designated as new to export in 2008 had appeared in the prior year’s (2007) database as purchasers of certain CS fee services. CS’s limitations in collecting and processing accurate information on company size and export status reduce its ability to determine which products and services are in demand or underused by its different customers and, more importantly, to charge firms the appropriate fees. Under the 2008 fee structure, SMEs and new-to-export SMEs are charged lower fees for CS’s services. Due to limitations in CS’s databases, it is difficult to disaggregate purchases by location and type of service. Complete information on the characteristics of CS’s customers, such as geographic location, industry, and the services bought and at what price over time, would allow CS to better analyze and understand its customer base and to adjust to changes in market demand. CS’s database records total purchases by state, a category that includes not only private sector firms but also state export promotion agencies, universities, and other entities, making it difficult to obtain an accurate count for SMEs alone. We wanted to analyze SMEs’ fee-based purchases by home state for about 5 years to better understand the relationship between CS and SMEs in their home states. Since it was not possible to disaggregate SMEs’ purchases, we examined the data by home state for all firms. However, CS could identify companies’ home states only by performing a manual review of its records; therefore, the analysis was limited to data for 2 years, 2007 and 2008, and to only four standardized services.
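The cross-year comparison described above amounts to a simple set intersection: firms flagged as new to export in one year should not appear among fee-service purchasers in the prior year’s records. A minimal sketch, with hypothetical company names:

```python
# Hypothetical sketch of the cross-year export-status check described above.
purchasers_2007 = {"Acme Corp", "Blue Co", "Delta Exports"}
new_to_export_2008 = {"Acme Corp", "Echo Goods", "Foxtrot Ltd"}

# Any overlap indicates a questionable new-to-export designation:
# the firm already bought fee services the year before.
questionable = sorted(new_to_export_2008 & purchasers_2007)
print(questionable)  # ['Acme Corp']
```

An automated check of this kind would make the questionable designations visible without the manual record-by-record review the current databases require.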
See appendix III for information on CS customers’ purchases by state for selected services. CS has made some attempts to determine how the user fees affect its customers’ participation in its programs; however, according to CS officials, these studies have been limited by a lack of sufficient data. One company contracted by Commerce attempted to estimate the price elasticity (or sensitivity) of demand, both in 1998 and 1999, using different data. In 1998, the company used data from a survey of a small number of trade consulting companies on how their customers would have responded to price increases. CS officials said that they did not use the results of the 1998 study in determining the fees to be charged because the analysis was based on hypothetical data and therefore unreliable. For the 1999 estimate of price sensitivity, the contractor used customer survey data; however, those data suffered from a low response rate of 11 percent. Again, CS did not directly consider the price sensitivity estimates when making fee decisions in subsequent years. Table 5 summarizes some prior assessments of CS’s export promotion programs and user fees. According to CS, it is difficult to compare CS’s prices with those of others offering similar services, such as private sector providers; CS officials informed us that many consultants are reluctant to talk to them or share their pricing schedules. GAO also contacted some private sector firms to determine what export advisory services they offer and the fees charged, but these firms either did not provide such services or did not respond to our inquiries about fees. Our survey of states’ trade offices found that none of the 45 states that responded, including those we visited, had conducted an evaluation of the effects of user fees on SMEs’ participation in federal export promotion programs.
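The price sensitivity the contractor attempted to estimate is conventionally expressed as a price elasticity of demand. A minimal sketch using the arc (midpoint) formula follows; the fee and purchase figures are hypothetical, not CS data.

```python
def arc_elasticity(q0: float, q1: float, p0: float, p1: float) -> float:
    """Arc (midpoint) price elasticity of demand: the percentage change
    in quantity divided by the percentage change in price, each measured
    against the midpoint of the two observations."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

# Hypothetical example: a fee increase from $600 to $700 coincides with
# purchases of a service falling from 1,000 to 850.
e = arc_elasticity(1000, 850, 600, 700)
print(round(e, 2))  # -1.05  (elastic: demand falls more than prices rise)
```

An estimate of this kind is only as good as its inputs, which is why the hypothetical survey data of the 1998 study and the 11 percent response rate of the 1999 study limited their usefulness.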
Because state governments play a potentially important role in helping their businesses compete in the global economy, often partner with CS, and are sometimes CS customers themselves, we obtained the states’ trade offices’ views on how CS’s 2005 change in user fees affected states’ use of certain services. States had mixed views about the impact of CS’s 2005 fee change. Some states said that the introduction of the 2005 fee schedule had no impact on their use of certain CS services, while others said that it caused them to decrease their use of those services. For example, of the states that had a basis to judge, 56 percent (14 of 25) reported that the 2005 fee schedule caused their offices to decrease their use of CS’s Gold Key Service, compared with 44 percent (11 of 25) reporting that their use stayed the same. (See fig. 6.) Based on CS’s data, the total number of standardized services SMEs purchased fluctuated before and after the 2005 fee change. For example, purchases of the Gold Key Service declined by about 26 percent from 2005 to 2006 and then rose in 2007 to about 3 percent above the 2006 level. CS officials said that these large changes are due in part to a spike in demand in early 2005, as companies rushed to sign up for certain services before the new fees went into effect in April 2005. CS projects a 10 percent increase in SMEs’ total demand for its services in fiscal year 2009 based on its new user fees, but the support for this projection is unclear. According to CS officials, although the fees for Featured U.S. Exporter and Business Service Provider (domestic) will increase for SMEs under the new fee schedule, overall collections are expected to rise, with higher demand for services such as Gold Key, International Company Profile, and International Partner Search that are now priced lower in most markets.
In particular, CS expects an increase in demand for the Gold Key Service in the expensive markets that now offer SMEs lower fees. The projected 10 percent increase in SMEs’ demand is not based on any analysis of historical data; according to CS, it is based on anecdotal reports from its offices in the field, some businesses, SIDO, and DEC officials. With the new-to-export pilot incentive fee introduced for the first time, CS also anticipates an increase in demand from new-to-export SMEs, but the assumption of how much demand will change is a “wild guess,” according to CS officials. CS said it expects participation by large firms to remain constant or to decrease moderately. CS arrived at this assumption on the grounds that large firms are less sensitive to fees and will often use CS’s services to expand their overseas markets even when fees increase. Our survey showed that states’ trade offices’ reaction to the new fee schedule was generally positive, although there were some negative views. Most states consider CS’s new fee schedule to be reasonable. As figure 7 shows, almost two-thirds (24 of 37) of the states that had a basis to judge reported that they considered CS’s new fee schedule very or somewhat reasonable. In addition, some DEC members in the states we visited believed the new fee schedule for SMEs is reasonable but expect the effect of the fees to vary by company. Some states’ trade offices elaborated on their views regarding the reasonableness or unreasonableness of the new user fees.
Among the states’ trade offices that considered the new fee schedule somewhat or very reasonable, one said that it was very happy with the new fee schedule and had been promoting it, calling the concept behind the low fees for new-to-export companies “really brilliant.” Another state said that SMEs will still complain about having to pay for services but that the new fee schedule is fair and makes CS’s services much more accessible to very small firms, while a third said that lower fees for SMEs are a good start but ignore the need to invest more in trade and investment promotion. Among the states’ trade offices that considered the fees somewhat or very unreasonable, one said that SMEs need assistance and support to increase exports and that CS should provide available services at reasonable cost instead of trying to get more money from U.S. business taxpayers. One state said that the majority of its companies have fewer than 10 employees and that these companies find it difficult to justify paying the government’s fees for services, while another said that the majority of its SMEs are not currently using CS’s programs because of the cost involved and that alternatives may be less expensive but take longer to achieve similar results. We asked states’ trade offices for their views on the new fees’ projected impact on their use of certain services purchased directly to assist SMEs. For each of the three standardized services we asked about (Gold Key, International Company Profile, and International Partner Search), at least 85 percent of those that had a basis to judge said that their use would increase or stay the same. For example, 27 of the 30 states (90 percent) that had a basis to judge reported that their use of the Gold Key Service for SMEs would either increase or stay the same under the new fee schedule. (See fig. 8.)
We also asked states’ trade offices their views on the new fees CS charges certain customers compared with the fees charged by private sector providers. More than two-thirds of the states that had a basis to judge (27 of 39) indicated that Commerce’s new fees for new-to-export SMEs were about right compared with fees charged by private sector providers. However, states’ trade offices had mixed views about the new fees charged to SMEs that already export: for each of the CS services about which we inquired (Gold Key, International Company Profile, International Partner Search, FUSE, Domestic Business Provider, and the customized services), roughly half of those that had a basis to judge responded that the fees were about right, while roughly half reported they were too high compared with private sector providers. For example, figure 9 shows that more than half (16 of 27) thought that the new customized-service fees charged to SMEs that already export were somewhat or much too high compared with the private sector. According to SIDO, states can obtain cost-competitive services in some markets, but such private sector alternatives are not universally available. CS also subcontracts with private sector providers at rates lower than the cost of its own employees. According to SIDO, these private providers state that their knowledge of a particular market or their operational efficiencies allows them to offer lower-cost services, such as matchmaking. However, SIDO states that some states’ trade offices might choose CS’s services because they are more comfortable working with a federal agency and because CS generally offers superior quality control compared with private sector alternatives. One study prepared for CS found that export promotion services available from private enterprises and trade groups vary in price but that a number of private providers’ services are significantly more expensive.
In addition, the study reported that, in some cases, these enterprises and trade groups work with CS to develop products and services and that some repackage and sell CS’s products, particularly market research and contact development information. Almost all of the states’ trade offices with a basis to judge responded that SMEs’ use of CS’s services would decrease if SMEs were charged the same fees as large firms (fees which, according to CS, represent the full cost of services). For example, more than 70 percent of these respondents indicated that they would expect a great or very great decrease in the use of services such as Gold Key (28 of 37) and International Company Profile (22 of 30) if fees were the same as those charged to large firms. Further, DEC members and USEAC officials in the states we visited expect that there would be significant decreases in SMEs’ demand for CS’s services if SMEs were charged the full cost of export promotion services. Factors other than fees may affect SMEs’ choice to use CS’s services, including (1) the types of services CS offers compared with other providers, (2) the individualized attention received, and (3) the quality of the service. First, some states’ trade offices and other sources reported that factors such as the types of services required influence the choice of services purchased from CS versus other providers. A 2002 study prepared for the Trade Promotion Coordinating Committee found that more than half of the services used by SME exporters were obtained from the private sector, which leads in providing transaction-related services, such as freight forwarding and helping firms develop Web sites to promote products to foreign buyers. The study reported that the government’s role, including CS’s role, was seen as strongest in the provision of basic information to exporters, such as “how to export” information, Web-based information on markets, export counseling, and government procedures overseas.
In addition, our survey revealed that 36 of 45 states’ trade offices (80 percent) use private consultants, including private businesses and American Chambers of Commerce, as providers of trade promotion services. For example, some states’ trade offices use private consultants for trade missions and trade shows, market research, and arranging company meetings, which are services CS provides. Services that CS does not provide and that states’ trade offices obtain from the private sector include assistance in setting up offices in a foreign country, assistance in sourcing products or a manufacturing partner, having prototypes or product samples made, and freight forwarding. Second, transaction-related services tend to require individualized attention, which is another factor that may influence SMEs’ choice of whether to obtain services from CS or other providers. According to the 2002 study, Commerce was seen as not as well positioned as private providers to deliver the intensive attention that transaction-related services may require. We also spoke with officials of one large American chamber of commerce operating in a key market, who informed us that its members, including SMEs, are attracted to private providers’ intensive “handholding,” which, according to these officials, CS is not well known for providing. For example, this chamber of commerce offers a Corporate Visa Program, which, according to the officials, actively helps its member companies complete paperwork and expedite the visa process within a 1-week time frame. In addition, one state trade office said that contractors or private consultants offer in-country coordination and individualized attention that CS no longer offers. The quality of service is also a key factor that influences SMEs’ choice of where to purchase services.
A 2003 study estimated that fees for some CS services that SMEs demand were lower than market comparisons but that CS’s market share, based on SMEs’ total demand for products and services similar to CS’s, was relatively small; the study suggested that the quality or type of services CS provides may not match the quality or type of services SMEs demand. One DEC member also told us that companies may perceive quality to be better in the private sector, since prices for similar services tend to be higher there, and that, in some instances, inexperienced CS staff performed work, which may have driven some businesses to the private sector. One state trade office said that the quality of CS’s service depends on staff dynamics at the individual post. Another state trade office said that, while price is important, the delivery of consistent quality is more important to companies and that it relies on CS to provide quality service. According to CS, its customer surveys indicate that quality is a key factor in customers’ choice of where to purchase services. CS officials said that, in reviewing the surveys’ open-ended questions, companies cited three drivers of client satisfaction: communication, quality, and consistency. The price paid for the services, according to the officials, has not been of equal importance. SIDO officials noted that another factor that may influence SMEs’ participation in CS’s programs is small firms’ general level of awareness of the states’ and federal government’s export promotion efforts. SIDO officials expressed concern that domestic firms are less aware of U.S. export promotion programs than foreign firms are of the programs in competing countries. However, according to the 2002 study, small and medium-sized exporters are generally aware of government programs that can help them export, though there is still room for improvement, and exporters appear to be broadly familiar with Commerce.
However, SIDO advocates for more resources for CS and state outreach to small firms in order to raise the profile of their programs and increase participation. CS and states’ trade offices provide various types of export promotion programs. These programs share similar goals: increasing the number of exporting firms, especially SMEs; expanding existing markets; and opening new markets to U.S. exports. Targeting federal government resources to programs that achieve the goals outlined in the National Export Strategy requires knowing whether existing programs contribute to these goals, whether customer experiences suggest ways to enhance these programs, and the extent to which current intergovernmental partnerships contribute to export promotion goals. Commerce’s $235 million export promotion program currently collects about $10 million annually through fees on some services. Commerce decided to collect these fees to cover at least a portion of the cost of providing some of its services. However, Commerce lacks good information on the true costs of providing these services, both those that are fee based and those offered for free. As a result, it is unclear whether the fees it established reflect its policy objectives or support the efficient and effective management of these programs. Similarly, Commerce lacks reliable information about the size, location, and type of its customers; about how its fees (or lack thereof) affect customers’ access to the program; and about how its fees compare with state or private sector fees. Fees for particular services affect firms’ access to and use of federal export promotion programs. Better information would help CS market its program, adjust to changes in the marketplace, and address those areas that maximize the impact of its services on promoting U.S. exports.
Not much is known about the extent to which user fees or other factors influence SMEs’ decisions to rely on CS for export promotion services. Studies and other sources suggest that the types of services CS offers compared with other providers, the level of individualized attention provided, and service quality are factors that also affect SMEs’ choice to use CS’s services. Better evaluation of fee-based programs and customers, including states, could improve program continuity, help managers target their resources more efficiently and effectively, support assessments of costs and benefits, and help the Congress make more informed funding decisions. Commerce has taken some initial steps toward developing systems that could improve this situation, but it is unclear whether it intends to fully develop their potential. We recommend that the Secretary of Commerce direct the Assistant Secretary for Trade Promotion and Director General of the U.S. and Foreign Commercial Service to (1) take steps to improve the collection, processing, and documentation of cost information on its export promotion programs and user fees in order to enhance efficient and effective management in line with federal accounting and internal control standards.
These steps could, for example, include documenting the procedures and processes of the costing methodology in sufficient detail so that staff who work with costing at a later point could understand the specific procedures used, as well as the data sources and cost assignment methods for each step in the process; incorporating costs paid by other federal entities for CS employees’ benefits, such as pensions and health insurance paid for by the Office of Personnel Management, when determining the full cost of each service; updating estimates of the amount of time staff spend performing various activities, both to capture any efficiency gains and to provide more accurate estimates of full costs; and documenting the methods and assumptions used to establish the user fees CS charges different firms for each service, to clearly show the linkage between costs and user fees, particularly with regard to the lower user fees for SMEs. To better understand demand for CS export promotion programs and how its user fees affect participation, we also recommend that the Secretary of Commerce direct the Assistant Secretary for Trade Promotion and Director General of the U.S. and Foreign Commercial Service to (2) ensure that the design of CS databases and the procedures followed by those entering the data enable CS to produce more accurate, reliable, and complete data on its customers and services, including all fees charged, company size, and export status. Commerce concurred with our recommendations and stated that CS would take steps to improve the collection, processing, and documentation of cost information on its export promotion programs. Commerce also stated that CS developed its new user fee policy from the most accurate data available in its existing database and that its accounting systems were not deficient, based on its having received an unqualified audit opinion on its annual financial statements.
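The full-cost idea in the steps above (that the cost of a service should include benefit costs paid by other federal entities, such as OPM-paid pensions and health insurance, on top of direct labor) can be illustrated with a hedged sketch. All rates, hours, and the overhead figure below are hypothetical, not CS cost data.

```python
# Hypothetical sketch of a full-cost calculation for a single service,
# per the federal accounting concept that full cost includes benefit
# costs borne by other federal entities (e.g., OPM), not just the
# agency's own salary outlays. All figures are illustrative.
def full_unit_cost(staff_hours: float, hourly_salary: float,
                   benefits_rate: float, overhead_per_service: float) -> float:
    labor = staff_hours * hourly_salary
    imputed_benefits = labor * benefits_rate  # costs paid by OPM, not CS
    return labor + imputed_benefits + overhead_per_service

# e.g., 20 staff hours at $50/hour, a 30% imputed benefits rate,
# and $150 of allocated overhead:
cost = full_unit_cost(20, 50.0, 0.30, 150.0)
print(cost)  # 1450.0
```

Omitting the imputed benefits term, as the report suggests CS's templates may, would understate the full cost of this hypothetical service by $300.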
In its technical comments, Commerce mentioned CS’s conversion to a new financial accounting system, the Commerce Business System, which management expects will improve information about CS’s costs of delivering services. We support CS’s implementation of an improved financial accounting system. We remain concerned, however, that potentially outdated and inaccurate nonfinancial data used to determine the unit cost of specific services, such as the time staff spend performing various activities, may not be updated by the new system. Updating that information will help ensure that the full costs of specific services are considered when setting fees. In addition, although an entity’s audited financial statements and unaudited cost accounting analyses may use the same underlying financial data, an auditor’s opinion on the financial statements does not provide assurance about the reasonableness of cost analyses performed using those data. Commerce also noted an increase in fees collected and services provided to SMEs in fiscal year 2008, which it believes indicates that its products and services remain accessible to SMEs. However, we believe that missing and inaccurate data about company size mean that CS cannot reliably or accurately estimate the volume of services provided to SMEs or the fees collected from them. In addition, CS’s response relies on aggregate comparisons between fiscal year 2007 and fiscal year 2008 that did not take into account changes in the mix of services provided or longer term trends and, therefore, does not provide useful information about the impact of its 2008 fee schedule on SMEs. Further, Commerce stated that its trade promotion services are greater in depth and scope than those provided by the states, a point we discuss in our report. We clarified this point in various places in the report, taking into account related information that we received in technical comments from agency officials.
Commerce’s comments, along with our responses to specific points, are reprinted in appendix IV. Commerce also provided technical comments, which were incorporated into the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and the Secretary of Commerce. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions about this report, please contact me at (202) 512-4347 or [email protected] or Stanley J. Czerwinski at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to evaluate (1) the relationship between the U.S. Commercial Service (CS) and states’ trade offices’ export promotion programs, (2) CS’s methodology and practices for determining costs and establishing user fees, and (3) how CS’s user fees affect small and medium-sized enterprises’ (SME) use of its programs. Our scope of work focused on the Department of Commerce’s (Commerce) U.S. Commercial Service’s and the 50 states’ trade offices’ export promotion programs and associated user fees. To determine what export promotion services states’ trade offices provide and their relationship with CS’s programs and user fees, we reviewed and analyzed both CS’s and states’ trade offices’ export promotion programs and user fee data; data on domestic and overseas staff; export promotion budgets; states’ export promotion grant programs; and services that states’ trade offices purchased from CS.
In addition, to obtain information on the states’ trade offices’ export promotion programs, fees, grants, and the importance of Commerce to their activities, we surveyed the 50 states’ trade offices. We developed our survey instrument between January and late April 2008. To ensure that the survey respondents understood the questions in the same way, that we had used appropriate terms for this population, and that we had covered the most important issues, we conducted three expert reviews and three formal pretests. We received 45 responses from the 50 states’ trade offices, a 90 percent response rate. The survey and a more complete tabulation of the results are provided in a supplement to this report (see GAO-09-148SP). We also conducted site visits in 6 states (California, Connecticut, Idaho, Illinois, Mississippi, and Pennsylvania). We chose these states to ensure a range of characteristics based on the following criteria: the size of the state trade promotion budget; the existence of a grant or subsidy program that funds SMEs’ participation in CS’s export promotion programs; states’ trade offices colocated with U.S. Export Assistance Centers; the number of overseas states’ trade offices and representatives; the size of the state’s economy and population; and states that do not have trade offices. We also reviewed and analyzed information in the 2005-2007 National Export Strategy reports and the State International Development Organizations’ (SIDO) annual survey results of states’ trade offices. Based on interviews and our analysis, we determined that SIDO’s data were sufficiently reliable for our purposes. Information on all the states’ export promotion budgets was difficult to obtain, and reliable and current data were available from SIDO for only 27 states; however, we used data for only 24 states in our analysis because 3 states did not disaggregate their export promotion budgets from their foreign investment recruitment budgets.
To evaluate CS’s procedures for determining costs and establishing user fees, we interviewed key CS and International Trade Administration staff and reviewed and analyzed available documentation about CS’s export promotion programs and user fees based on the 2005 and 2008 user fee changes; CS’s methodology for full cost recovery; cost templates of CS’s fee-based export promotion programs; data on CS’s budget and staff; data on staff time spent on various activities to deliver services; legislation authorizing CS to charge a fee for services (annual appropriations and the Mutual Education and Cultural Exchange Act); OMB Circular A-25, User Charges; Statement of Federal Financial Accounting Standards 4: Managerial Cost Accounting Standards and Concepts; GAO’s Standards for Internal Control in the Federal Government; and GAO’s Federal User Fees: A Design Guide. We did not assess the reliability of the export promotion programs’ cost and user fee data because we did not use those data; however, we noted weaknesses in the cost-finding methodology. To determine what is known about how CS’s export promotion programs’ user fees affect SMEs’ participation in its programs, we reviewed and analyzed past studies of export promotion programs and user fees performed for Commerce by Booz Allen and Hamilton, Inc., KPMG LLP, and Chemonics International. In addition, we reviewed and analyzed the Office of Management and Budget’s 2003 and 2008 Program Assessment Rating Tool results for CS. We also reviewed and analyzed CS’s fee-based export promotion services purchased by its customers and the associated collections from these purchases from 2004 to 2008.
While we cited data elements on clients and collections for 2008, having determined that these elements were sufficiently reliable for our purposes, we noted that other data elements, particularly company size and export status, are not fully reliable for the reasons that we have elaborated upon in the report’s third objective. We also reviewed ad hoc feedback CS received on its user fees from CS’s field staff, client firms, District Export Councils, states’ trade offices, and trade and industry associations. Further, we analyzed our survey results regarding states’ trade offices’ views on the impact of the 2005 and 2008 user fees changes on their purchase of CS’s services. We obtained the states’ trade offices’ views for several reasons: (1) they are experts in offering export promotion programs and services; (2) they work with SMEs that export and, in many cases, they work with the same SMEs as CS; (3) they are purchasers and multipliers of CS’s fee services, as well as purchasers of private sector fee services and are able to compare and contrast these service providers; and (4) our research at the beginning of our review indicated that it would be feasible to survey the states within our time frame and achieve an acceptable response rate. Further, we interviewed Commerce officials in Washington, D.C., and at the six U.S. Export Assistance Centers we visited, as well as officials of the six states’ trade offices, District Export Councils, the Office of Management and Budget, the State International Development Organizations, and American chambers of commerce. To determine the purposes for which we could and could not use Commerce data on customers served and the dollars collected, we interviewed agency officials, attended a demonstration of CS’s data systems, and performed checks and analyses of the data themselves. We determined that the data were sufficiently reliable in the aggregate to report on fee services provided, in a broad sense, and dollars collected. 
We also determined that the data were sufficiently reliable to report on selected fee services by the state of the company purchasing the service, though with the caveat that we could not examine the data by company size. However, we noted several limitations in the data, which we discussed in the body of this report. In particular, the data do not provide accurate counts by company size and export status. Moreover, the data provide only an incomplete picture of the fee services purchased by states’ trade offices. We based our review on various internal control standards, such as GAO’s Standards for Internal Control in the Federal Government; Office of Management and Budget Circular A-123, Management’s Responsibility for Internal Control; and Internal Control – Integrated Framework, by the Committee of Sponsoring Organizations of the Treadway Commission, as well as GAO’s guidance on Assessing the Reliability of Computer-Processed Data. We conducted this performance audit from October 2007 to March 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides this reasonable basis. CS and states’ trade offices both maintain offices in domestic and international locations to help firms identify export opportunities. CS’s trade specialists currently work in 108 cities in 47 states and Puerto Rico. CS does not currently have an office in Alaska, Delaware, or Wyoming; it provides services to customers in these states from U.S. Export Assistance Centers (USEAC) in neighboring states. For example, the USEAC in Seattle provides services to customers in Alaska. In addition, USEACs are colocated with 11 states’ trade offices. CS’s trade specialists also work in 124 offices in 75 countries worldwide.
In some countries, such as Brazil, China, and India, CS has offices in 5 or more cities. In addition, most states’ trade offices have 1 or more overseas offices. For example, in 2008, there were 34 countries in which at least one state trade office maintained an office or representative. In some countries, such as China, multiple states maintain offices, and individual states maintain offices in more than one city. CS also operates offices in each of these 34 countries. However, CS operates in 41 countries where states do not have representation, and some states’ trade offices explained they rely heavily on CS services in these countries. The following map (see fig. 10) shows CS’s domestic and international locations. The number of CS’s staff varies across countries, and states’ overseas offices vary in size and composition. The number of CS’s trade specialists working in its overseas offices varies widely across the countries in which it operates. For example, CS has 104 staff in China, 30 staff in Germany, and 12 staff in Australia. In addition, some states maintain large overseas offices in certain countries, which tend to be staffed with full-time employees. Other states’ overseas offices are staffed by part-time private consultants working on contract or volunteer representatives. For example, states often maintain full-time offices in primary overseas markets, such as Mexico and Japan, while states tend to employ part-time consultants in smaller markets. Recently, CS and many states’ trade offices have reduced or consolidated their overseas offices but have maintained or opened offices in key markets. CS has recently undertaken the Transformational Commercial Diplomacy (TCD) initiative, which seeks to shift CS resources from more accessible overseas markets to less accessible markets to better align the needs of U.S. exporters with CS resources. 
Under the TCD initiative, CS has closed a number of small offices in well-developed markets or in small markets with limited commercial opportunities to open offices in new emerging markets with greater commercial potential, such as China and India. For example, under TCD, CS has closed 22 offices and opened 4 offices in Qatar, Tunisia, Libya, and Afghanistan. In addition, CS plans to open additional offices in Baku, Azerbaijan; Wuhan, China; Porto Alegre, Brazil; and Recife, Brazil; and add staff at offices in China and India. Similarly, many states' trade offices have reduced or consolidated their overseas offices but have maintained and opened offices in key overseas markets. For example, some states' trade offices have consolidated their overseas offices in multiple countries of a particular region, such as Europe or Asia, to cover the entire region from a single office. However, many states continue to maintain and expand overseas offices in key markets, including China and Japan. For example, in recent years many states have opened offices in multiple cities in China. This table presents selected fee services purchased by CS's customers in each state for 2 years. The data are for 2007 and 2008 and include 5,890 standardized fee services purchased, out of a total of more than 30,000, or about 20 percent of all fee services in those years. However, the data include four of the five standardized fee services that CS offers (Gold Key, International Company Profile, International Partner Search, and Featured U.S. Exporter). Table 6 shows selected CS services that firms purchased from CS by home state in fiscal years 2007 and 2008, sorted by the number of services purchased per state. The following are GAO's comments on the Department of Commerce's letter dated February 18, 2009. 1. 
Also, Commerce’s technical comments mentioned CS’s conversion to a new financial accounting system, Commerce Business System, which management expects will improve information about CS’s costs of delivering services. We support CS’s implementation of an improved financial accounting system. We remain concerned, however, that potentially outdated and inaccurate nonfinancial data that are used to determine the unit cost of specific services, such as the time staff spend performing various activities, may not be updated by the new system. Updating that information will help ensure that the full costs of specific services are considered when setting fees. 2. We believe that missing and inaccurate data about company size mean that CS cannot reliably or accurately estimate the volume of services provided to SMEs or the fees collected from them. In addition, CS’s response relies on aggregate analyses between fiscal year 2007 and fiscal year 2008 that did not take into account changes in the mix of services provided or longer term trends and, therefore, does not provide useful information about the impact of its 2008 fee schedule on SMEs. 3. Commerce commented that CS’s trade promotion services are greater in depth and scope than those provided by the states, and we discussed this in our report. We clarified this point in various places in our report, taking into account some related information that we received in technical comments from agency officials. In addition to the individuals named above, Adam Cowles, Assistant Director; Michelle Sager, Assistant Director; Martin De Alteriis, Assistant Director; Jack Warner, Assistant Director; Yesook Merrill, Assistant Director; Karen Deans; Bradley Hunt; Grace Lui; and Barbara Shields made key contributions to this report. In addition, the following staff provided technical assistance: Jacqueline Nowicki, Assistant Director; Etana Finkler; Sheila Rajaibiun; and Jena Sinkfield.
Federal and state trade promotion activities are intended to help U.S. firms compete successfully in foreign markets. Small and medium-sized enterprises (SME)--firms with fewer than 500 employees--represent a key segment of exporting firms. GAO was asked to determine (1) the relationship between the Department of Commerce's (Commerce) U.S. Commercial Service (CS) and states' trade offices' export promotion programs, (2) CS's methodology and practices for determining costs and establishing user fees for export promotion services, and (3) how CS's user fees affect SMEs' use of its programs. GAO conducted a survey of states' trade offices and reviewed data such as export promotion budgets and fees, program information, government standards, and user fee studies. GAO met with officials from Commerce, the State International Development Organizations, six states' trade offices, and others. Both CS and most states' trade offices provide various types of export promotion services. However, states have limited resources and scope when compared with CS's $235 million budget and large overseas staff. Thus, most states responding to GAO's survey reported that CS's services are important to their export promotion capabilities. State offices often partner with CS on trade missions and other activities. CS and most states focus their efforts on encouraging SMEs to participate in their programs, but user fees can influence whether firms choose to access export promotion services. CS lowers fees for SME exporters, but about a third of the states said they provide grants or payments to defray firms' costs and facilitate access to CS's programs. CS needs better information to maximize the efficient and effective operation of its programs and to ensure that there is a sound basis for setting fees. CS set user fees in May 2008 guided by the Office of Management and Budget's (OMB) full cost recovery policy. 
However, CS has had a yearly legislative exemption from having to recover full costs through its fees and attempted to recover only a portion of the full cost of its export promotion services. CS did not support and document the methodology and assumptions it used to determine costs and cannot ensure its cost information is consistent and reliable and in accordance with government standards. GAO found significant instances where CS used incomplete and potentially inaccurate data. Complete and accurate full cost information would assist CS and the Congress in making decisions about resource allocations, evaluating program performance, and improving program efficiency. Finally, CS did not document how it established the lower user fees for SMEs and cannot show how the fees it charges different firms for each service link to costs. The extent to which CS's user fees affect SMEs' use of its export promotion programs is unclear. CS lacks reliable and sufficient data to evaluate its customer base and needs to ensure it charges firms the right fees. CS lacks reliable historical data on fees charged, firm size and status, and purchases by location and type. CS is taking steps to better evaluate its customer base. GAO's survey showed that most states reported the 2008 user fees to be reasonable but thought fees charged SMEs for some services were too high when compared with those charged by private sector providers. CS projects an increase in SMEs' demand for its services, but the projection is not based on any analysis of historical data. Relevant studies and other sources suggest that the types of services CS offers compared with other providers, the level of individualized attention provided, and service quality are factors that also affect SMEs' choice to use CS's services.
The SES is relatively small—about 7,900 members in 2013—and represents less than one percent of the over two million federal civilian employees. As a corps of executives selected for their leadership qualifications, members serve in the key positions just below the top presidential appointees. SES members are the major link between these appointees and the rest of the federal work force. They operate and oversee nearly every government activity in approximately 75 federal agencies. OPM manages the overall federal executive personnel program, and OPM staff provides the day-to-day oversight of and assistance to executive branch agencies as they develop, select, and manage their federal executives. OPM has a key leadership and oversight role in the design and implementation of executive branch agencies’ SES performance-based pay systems by certifying that the agencies’ systems meet certain criteria. Specifically, agencies are allowed to raise SES basic pay and total compensation caps if OPM certifies, with the concurrence of OMB, that agencies’ performance appraisal systems make—in design and application—meaningful distinctions based on relative performance. Agencies’ performance appraisal systems are evaluated against certification criteria, including linking performance for senior executives to the organization’s goals. Barring any compliance problems that might arise after certification has been awarded, full certification is for about 24 months. Provisional certification for about 12 months is awarded when an appraisal system meets design requirements, but there is insufficient documentation to determine whether implementation meets certification requirements. Certifying SES performance appraisal systems is also OPM’s opportunity to ensure that these systems meet statutory requirements. 
From July 2011 through January 2012, OPM, SES members, and other representatives from various agencies and organizations developed a new SES performance appraisal system to try to meet the needs of executive branch agencies and their SES members. Under the new system, agencies were intended to have a more consistent and uniform framework to communicate expectations and evaluate the performance of SES members. While promoting greater consistency, the new system was also designed to enhance clarity, transferability, and equity in the development of performance requirements, the delivery of feedback, the development of ratings, and the link to compensation. OPM stressed that a major improvement of the new system included dealing with the wide disparity in distribution of ratings by agency through the provision of clear, descriptive performance standards and rating score ranges that establish mid-level ratings as the norm and top-level ratings as truly exceptional. While agencies are not required to adopt the new system, OPM encourages agencies to do so. Table 1 shows the criteria and documentation needed for distinctions in performance and differentiation in pay and a description of the guidelines used for those criteria. (Appendix II has a complete copy of OPM's certification report form.) In our 2008 report on SES performance management systems, we noted that while OPM certified that the selected agencies were making meaningful distinctions based on relative performance as measured through the pay and performance differentiation certification criteria, performance ratings at the selected agencies raised questions about the extent to which meaningful distinctions based on relative performance were being made and how OPM applied these criteria. For example, we reported that fiscal year 2007 SES ratings were concentrated at the top two levels. 
As part of making meaningful distinctions in performance, OPM has emphasized to agencies through its certification guidance that its regulations prohibit forced distribution of performance ratings and that agencies must avoid policies or practices that would lead to forced distributions or even the appearance of it. We recommended that OPM strengthen communication with agencies and executives regarding the importance of using a range of rating levels when assessing performance while avoiding the use of forced distributions. We also noted that communicating this information to agencies will help them begin to transform their cultures to ones where a “fully successful” rating is valued and rewarded. OPM implemented the recommendation, and the agency has been communicating the importance of using a range of rating levels through the new SES performance management system. In 2003, when Congress refined the pay systems for members of the SES by requiring a clearer link between performance and pay, many senior executives were receiving the top rating. Under its regulations, OPM requires agencies to write performance requirements for each senior executive at the “fully successful” level. In addition, under OPM’s new SES performance appraisal system, a “fully successful” rating indicates a “high level of performance” and “effective, solid, and dependable” leadership. A rating of 5 is the highest (labeled “outstanding”), followed by 4 (“exceeds fully successful”), 3 (“fully successful”), 2 (“minimally satisfactory”), and the lowest rating of 1 (“unsatisfactory”). Executives with a rating of 2 or 1 are ineligible for performance awards, and a rating of 1 also triggers immediate additional performance actions. An executive receiving a rating of 1 must either be reassigned or transferred within, or removed from, the SES. For fiscal years 2010 through 2013, all of the CFO Act agencies had four or five rating levels in place for assessing SES performance. 
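The five-level scale and the eligibility rules just described can be expressed as a minimal sketch. The labels and rules come from the text above; the function names themselves are our own illustrative shorthand, not OPM terminology.

```python
# Rating labels under OPM's five-level SES appraisal scale.
RATING_LABELS = {
    5: "outstanding",
    4: "exceeds fully successful",
    3: "fully successful",
    2: "minimally satisfactory",
    1: "unsatisfactory",
}

def award_eligible(rating: int) -> bool:
    """Executives rated 2 or 1 are ineligible for performance awards."""
    return rating >= 3

def triggers_personnel_action(rating: int) -> bool:
    """A rating of 1 requires reassignment, transfer within,
    or removal from the SES."""
    return rating == 1
```

For example, `award_eligible(3)` returns `True`, while `award_eligible(2)` and `award_eligible(1)` return `False`, and only a rating of 1 triggers the additional personnel actions.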
Figure 1 shows SES performance rating distributions for the CFO Act agencies for those years. As the figure shows, more than 85 percent of career SES were given a rating of either 5 or 4 each year. For the same four years, approximately 46 percent of career SES members received the highest possible rating. At a few agencies, the proportion of senior executives who received a rating of 5 was larger than 70 percent. Table 2 shows the number of career SES rated and the percentage at each rating level for the 24 CFO Act agencies for fiscal year 2013. (Appendix III shows career SES ratings and performance awards for the 24 CFO Act agencies for fiscal year 2013.) A small proportion of senior executives received a rating of 3 or lower. For fiscal years 2010 through 2012, about 13 percent of career executives were given a rating of 3. For fiscal year 2013, 641 (or 10.3 percent) of executives received a rating of 3, and at a third of the agencies, less than 5 percent were given a rating of 3 or lower. Twenty-one (or 0.3 percent) senior executives were rated less than fully successful. Across all of the CFO Act agencies, 17 executives received a rating of 2 for fiscal year 2013, and 4 executives received a rating of 1. Budget constraints have affected SES performance awards in recent years, and the number of executives receiving performance awards and the size of awards have decreased since fiscal year 2010. While legal requirements have not changed—that SES performance awards be between 5 and 20 percent of an individual executive’s rate of basic pay—OPM and OMB issued guidance in June 2011 that capped spending on SES performance awards at no more than 5 percent of aggregate SES salaries at a given agency, rather than the normal cap of 10 percent. This cap was further reduced in February 2014 to 4.8 percent of aggregate salaries. 
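A rough sketch of how the two constraints interact: the statutory range applies to each individual award (5 to 20 percent of that executive's base pay), while the OPM/OMB guidance caps the agency-wide pool as a share of aggregate salaries. The salaries and award amounts below are hypothetical, not drawn from any agency's data.

```python
def awards_within_limits(salaries, awards, pool_cap=0.05):
    """Return True if every nonzero award falls within 5-20 percent of
    that executive's base pay AND total awards do not exceed pool_cap
    of aggregate salaries. A pool_cap of 0.05 reflects the June 2011
    guidance; the February 2014 guidance lowered it to 0.048."""
    for pay, award in zip(salaries, awards):
        # Executives may receive no award; nonzero awards must be in range.
        if award and not (0.05 * pay <= award <= 0.20 * pay):
            return False
    return sum(awards) <= pool_cap * sum(salaries)

salaries = [160_000, 170_000, 180_000]  # hypothetical base pay for three SES

print(awards_within_limits(salaries, [8_000, 9_000, 0]))        # True
print(awards_within_limits(salaries, [8_000, 9_000, 0], 0.01))  # False
```

The second call illustrates the squeeze described later in this section: under a 1 percent pool cap (the level DOD applied for fiscal year 2013), total awards for this hypothetical pool could not exceed $5,100, even though each individual award sits at the statutory minimum of 5 percent of base pay.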
Additionally, senior executives did not receive pay adjustments for three years (from January 1, 2011 through December 31, 2013) due to federal pay-freeze legislation, which included a prohibition on SES pay increases. Figure 2 shows a timeline of selected events affecting SES performance awards from 2003 through 2014. Table 3 shows the average SES performance awards for the four fiscal years in inflation-adjusted dollar amounts and as a percentage of base salary. To deal with the effects of sequestration, DOD chose to limit funding for SES performance awards to 1 percent of aggregate career SES salaries for fiscal year 2013. Several other agencies also limited funding for performance awards to around 1 percent of aggregate career SES salaries. This is one reason for the sharp decrease in the percentage of SES receiving performance awards and the decrease in average award amounts across the 24 CFO Act agencies for fiscal year 2013. Officials at several agencies said that in recent years, budget constraints have forced them to make difficult decisions about how to allocate limited award money and still make distinctions in performance pay. Figure 3 shows the average SES performance awards by rating level for the 24 CFO Act agencies for fiscal years 2010 through 2013. Since 2010, agencies have made smaller distinctions in performance award amounts between senior executives rated at different levels of performance. For example, for fiscal year 2010, the average performance award for an executive with a rating of 5 was $4,991 more than the average award for an executive with a rating of 4. By fiscal year 2013, the average performance award for a rating of 5 was $2,604 more than the average award for a rating of 4. 
In a report on federal performance management, we noted a frequent perception that supervisors inflated ratings because, among other things, the ratings were used for multiple decisions involving pay and awards, which can create a situation in which a significant number of employees are rated at the “outstanding” and “exceeds fully successful” levels. To the extent that employees with such high ratings do not receive a monetary award, the perception that rewards are not directly linked to performance is reinforced. On the other hand, as the number of individuals receiving monetary awards increases, the average dollar award will be reduced, resulting in the perception that the awards are less motivating. Since fiscal year 2010, the percentage of eligible SES receiving a performance award at each rating level has decreased. Figure 4 shows the percentage of SES receiving a performance award by rating for fiscal years 2010 through 2013. To help assess how agencies are meeting the certification requirement to make distinctions in pay based on performance, OPM uses Pearson correlation coefficients as a metric to analyze the strength of the relationship between executives’ pay adjustments and performance awards and their ratings. Correlation coefficients measure the linear association between SES performance pay and senior executives’ ratings, and the value of the coefficient will be lower if the actual relationship between the two is not a straight line. For this reason, differences between years or agencies in the value of the correlation coefficient may not be meaningful. To meet the certification guidelines for pay differentiation, agencies are generally expected to have a correlation coefficient of 0.5 or greater; if the correlation coefficient is lower than 0.5, the guidelines state that an agency’s SES appraisal system can still receive full certification if pay and awards data show the system makes distinctions in pay. 
OPM reported that fiscal year 2013 correlations of SES ratings and performance pay ranged from .19 to .99 for the 24 CFO Act agencies: the National Aeronautics and Space Administration had the lowest coefficient and the Nuclear Regulatory Commission had the highest. Although correlation coefficients are useful for measuring the strength of the linear relationship between ratings and performance pay, they do not measure whether there are meaningful distinctions in pay based on performance. For example, if an agency makes small distinctions in pay across different rating levels (such as 5 percent of salary for executives rated 4 and 5.05 percent of salary for executives rated 5), the value of the correlation coefficient may be high, even though the difference in pay between rating levels is not meaningful. According to a 2012 OPM document on the new SES performance appraisal system, with a different SES system in each agency, inconsistency among the executive branch agencies was a problem because of different definitions for rating levels across government, a mix of four- and five-level rating systems, and variable application of rating levels in evaluating SES—which led to a disparity in the ratings distribution across government. OPM officials said that when the new SES performance appraisal system was first available in fiscal year 2012, seven agencies used the system because they were already closely aligned with it. Since then, OPM officials said that about 90 percent of agencies have started to use the new system. As more agencies adopt the new system, the system’s intent of promoting greater consistency may also result in greater uniformity in the development of ratings with their link to compensation. As mentioned previously, under the new system, OPM stated that agencies will be able to rely upon a more consistent and uniform framework to communicate expectations and evaluate the performance of SES members. 
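The caveat about the correlation metric can be illustrated with a toy calculation: because awards that are a strictly linear function of ratings produce a perfect correlation regardless of the size of the gap, trivially small pay distinctions still yield a coefficient of 1.0. The rating and award figures below are hypothetical, not drawn from any agency's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical agency: executives rated 4 receive awards of 5.00 percent
# of salary and those rated 5 receive 5.05 percent -- a trivial distinction.
ratings = [4, 4, 4, 5, 5, 5]
award_pct = [5.00, 5.00, 5.00, 5.05, 5.05, 5.05]

r = pearson_r(ratings, award_pct)
print(round(r, 2))  # prints 1.0: perfect correlation despite the
                    # near-identical award percentages
```

A coefficient well above the 0.5 certification threshold therefore says nothing about whether the distinctions in pay are meaningful in size, only that they move in the same direction as the ratings.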
While promoting greater consistency, the new system was also intended to enhance clarity, transferability, and equity in the development of performance requirements and the delivery of feedback. In addition, OPM noted that SES mobility is complicated by inconsistency—for executives moving between agencies or considering moving, there has been uncertainty regarding performance evaluations. The new SES appraisal system was intended to help address all of these areas. Of the five systems we reviewed—DOD, Energy, HHS, DOJ, and Treasury—DOD, Energy, and HHS used the new SES performance appraisal system. DOJ and Treasury were in the process of converting to the new system. For example, a human capital official from Treasury said the department is transitioning toward using government-wide performance requirements; the official said Treasury’s fiscal year 2013 system had three critical elements rather than the five in OPM’s standard version. Four out of the five departments had automated SES appraisal systems or had plans to convert to one within the next fiscal year; DOJ did not. In addition, all of the five selected departments had performance appraisal systems that were certified by OPM and OMB. DOD, Energy, and DOJ had full certification; HHS and Treasury had provisional certification. As mentioned previously, for an agency’s SES performance appraisal system to be certified, agencies’ appraisal systems must meet criteria including that there is alignment between organizational and individual performance and that distinctions in pay are being made based on performance. The selected departments were all on a fiscal year appraisal period with ratings decisions for the previous fiscal year made in the first few months of the next fiscal year. Department officials told us that the process of issuing ratings and making awards decisions happens in a fairly short timeframe. 
We previously identified the alignment of individual performance expectations with organizational goals as a key practice for an effective performance management system. It is important for individuals to see a connection between their daily operations and results to help them understand how individual performance can contribute to organizational success. Leading organizations have recognized that effective performance management systems create a “line of sight” showing how unit and individual performance can contribute to overall organizational goals and can help them drive internal change and achieve external results. The five selected departments linked individual metrics or competencies to either component- or department-wide goals in the performance plans provided by the departments. The plans included specific links between individual SES competencies or responsibilities and a specific organizational goal. Additionally, OPM’s certification criteria require that agencies align SES individual performance expectations with organizational goals, and OPM has issued relevant guidance. Agencies must establish one or more SES Performance Review Boards (PRB), which provide a higher level of review within the SES performance management system. The PRBs review and evaluate the senior executive’s initial summary rating and, if applicable, the executive’s response and a higher level official’s comments on the initial rating. The boards also make written recommendations to the appointing official on annual summary ratings and performance awards. PRBs serve to ensure consistency, stability, and objectivity in performance appraisals. Boards also take into account organizational performance when making recommendations. 
OPM guidance states that, for agencies seeking access to higher levels of pay through certification, PRBs are to ensure meaningful distinctions in executive performance and that pay increases and performance awards are made based on individual and organizational performance. When appraising a career appointee’s performance or recommending a career appointee for a performance award, more than one-half of the PRB’s members must be SES career appointees. The selected departments varied somewhat in their PRB structures as well as in who provided the final approval of the appraisal decisions; some departments have additional steps or guidance as part of their processes. For example, in addition to the PRB that evaluates ratings, Treasury has a front-end PRB that meets to discuss SES employees’ commitments early in the performance planning cycle. A Treasury official said this helps to ensure consistency in the performance plans as well as the rating and award. Once PRBs have reviewed ratings and awards, they make recommendations to the appointing authority—such as the head of an agency—for final ratings and awards decisions. PRB representatives we interviewed said that awards decisions are based on ratings within individual pay pools. Although the selected departments linked SES performance plans with agency strategic objectives, the departments varied in their requirements that the PRBs compare the performance ratings to the outcomes of department goals and objectives. Most of the selected departments’ PRB representatives said that the PRB explicitly helps to ensure that there is alignment between individual performance and organizational performance by having PRBs consider the organizational assessment—an assessment of the agency’s overall performance—when reviewing proposed ratings and performance awards. For example, Energy provides the PRB with the organizational assessment, as well as guidance on how to use the assessment when reviewing ratings. 
An Energy official said that both rating officials and PRBs consider the organization’s performance when determining senior executives’ ratings. However, a Treasury official told us that the PRB for some bureaus within Treasury does not have access to an organizational assessment when reviewing ratings. The official said that tracking and reviewing organizational performance is done by the final rating official. All five departments rated the majority of SES in the top two categories, indicating little differentiation between executives in their ratings. As figure 5 shows, the share of executives rated outstanding at the five selected departments ranged from 30.6 percent at DOD to 73.6 percent at DOJ. In 2008, we reported that senior executives’ ratings for fiscal year 2007 were concentrated in the top two levels. As figure 6 shows, at the three departments (DOD, Energy, and Treasury) that we looked at in both fiscal years 2007 and 2013, the proportions of ratings were nearly the same, although Energy had switched from a four-rating system to a five-rating system during that time. Agency officials offered varied explanations for the high concentration of performance ratings at the top two rating levels, ranging from stating that the ratings are justified to stating that the ratings may be too high but are reinforced by an agency culture in which executives may not view a rating of 3 as acknowledgement of a fully successful performance. For example, human capital officials at DOJ pointed out that the individuals chosen for the SES are already high performers and continue to perform well as SES, earning high ratings. An HHS human capital official, however, noted that competencies are written so that a rating level of 3 represents a fully successful performance, but it is difficult to convince executives who have traditionally received higher ratings that this rating reflects successful performance. 
Similarly, an Energy official noted that the department communicates the message to rating officials (both verbally and in writing) that a “fully successful” rating is not average or ordinary; it demonstrates a significant level of accomplishment. One of the purposes of the new SES appraisal system is to help ensure standardized ratings with the understanding that a rating level of “outstanding” should always be a difficult goal to reach. The departments had several layers of review, including both component-level and department-level review to ensure consistency across the department. For example, in addition to the PRBs, DOJ has an additional review level (the Senior Executive Resources Board) that analyzes the performance awards and attempts to identify trends and anomalies. However, for fiscal year 2013, at four of the five selected departments, some SES with lower ratings received performance awards that were, as a percentage of base salary, the same as or higher than those awarded to SES with higher ratings. While OPM has certified that the selected departments’ appraisal systems make meaningful distinctions based on relative performance, actual awards at some departments do not seem to support that meaningful distinctions are being made. Figure 7 shows the range of performance awards given to eligible SES at the five selected departments for fiscal year 2013. PRB representatives from the selected departments indicated that the variation in performance awards by ratings was caused by a number of different factors. For example, a former PRB Chair said that in fiscal year 2012, DOD had 9 different pay pools within certain non-combat entities; each was given the latitude to determine how to distribute performance awards within the pay pool. Although the distribution of performance awards across ratings looked inconsistent when aggregated, no one with a lower rating received a larger award than anyone with a higher rating in the same pay pool. 
A Treasury PRB representative said the range of performance awards is based on ratings as well as several other variables, such as relative contributions to the organization. The Treasury PRB has the flexibility to review those factors when determining performance award amounts, and this sometimes results in an SES with a lower rating receiving a performance award, as a percentage of salary, equal to or larger than that of an SES with a higher rating. The PRB representative from DOJ said components were forced to prioritize; when a large percentage of executives are rated outstanding and only 55 percent can receive a performance award, some difficult decisions must be made. DOJ also noted that different executives may receive performance awards from year to year, based on their contributions to overall mission achievement. One of the primary purposes for establishing the new SES appraisal system was to increase equity in ratings across agencies and strengthen their link to compensation. The new system provides for the uniform administration of SES executive branch performance management systems by promoting consistency, clarity, and transferability of performance standards and ratings across agencies. Additionally, effective performance management systems recognize that merit-based pay increases should make meaningful distinctions in relative performance; this principle is central to the SES performance management system, where under the law, to be certified and thereby able to access the higher levels of pay, the appraisal system must make meaningful distinctions based on relative performance. OPM's guidelines state that the modal rating should be below "outstanding" and that multiple rating levels should be used. However, OPM's guidelines also state that if an agency's modal rating level is "outstanding," the appraisal system can still be certified if accompanied by a full, acceptable justification. 
Nonetheless, the continued concentration of senior executives at the top two rating levels indicates that this principle is not being met across government. While making meaningful distinctions in SES performance continues to be a challenge for many agencies, others have made progress. For example, our 2008 report noted that, according to a DOD official, DOD was communicating the message that the SES performance-based pay system recalibrates performance appraisals as a way to help change the culture and to make meaningful distinctions in performance, with a "fully successful" or equivalent rating treated as a high standard and a valued, quality rating. According to DOD, levels above "fully successful" require extraordinary results. Of the five selected departments that we examined in fiscal year 2013, DOD had the lowest percentage of senior executives receiving the highest rating—almost 31 percent. According to OPM officials, in 2015 OPM plans to convene a cross-agency working group that is to revisit the SES certification process. As part of this effort, it will be important for OPM and the working group to consider whether—given the continued high SES performance ratings—the new system is contributing to making meaningful distinctions in performance ratings and awards, and if not, what refinements are needed. The goal of having a uniform system would appear compromised if an "outstanding" rating in one agency does not have the same meaning in another agency. One option might be revisiting and perhaps eliminating the guideline that allows OPM to certify agencies' performance management systems with an SES modal rating of "outstanding," so long as the agency provides acceptable justification. This guideline could work against encouraging agencies to make meaningful distinctions in SES performance. 
Alternatively, enhancing the transparency of OPM's approval of agencies' justifications for a modal rating of "outstanding" could shed light on whether an individual agency's high ratings seem justified. These justifications are not posted on OPM's website, OPM does not report them to Congress, and the Chief Human Capital Officers Council does not review them for consistency. Understandably, it could take more time for OPM's 2012 efforts to standardize SES performance management to fully materialize. However, data for fiscal year 2013 (both government-wide and at our case study departments) showed that a large majority of SES employees are still receiving one of the top two ratings. Coupled with evidence of overlap in performance awards across rating levels, this indicates that the link between performance ratings and awards is not being consistently applied. By convening a cross-agency working group to review the SES certification process, OPM is in a position to evaluate whether the new SES appraisal system actually helps agencies use SES compensation and performance awards in ways that are cost effective and lead to increased employee performance and organizational results. If the performance definitions cannot be consistently applied across the government, creating a uniform framework to communicate expectations and evaluate the performance of SES members will be difficult to attain. As OPM convenes the cross-agency working group, we recommend that the Director of OPM, as the head of the agency that certifies—with OMB concurrence—SES performance appraisal systems, consider the need for refinements to the performance certification guidelines addressing distinctions in performance and pay differentiation. 
Options could include:

- revisiting and perhaps eliminating the guideline that allows OPM to certify agencies' performance management systems with an SES modal rating of "outstanding," or
- strengthening the accountability and transparency of this guideline by (1) posting agencies' justifications for high ratings on OPM's website, (2) reporting agencies' justifications for high ratings to Congress, and (3) obtaining third-party input on agencies' justifications for high ratings, such as from the Chief Human Capital Officers Council.

We provided a draft of this report to the Director of OPM and to the Acting Secretary of Defense, the Secretary of Energy, the Secretary of Health and Human Services, the Assistant Attorney General for Administration at the Department of Justice, and the Secretary of the Treasury for review and comment. OPM's comments are reprinted in appendix IV. OPM also provided technical comments, which we incorporated as appropriate. DOD, Energy, DOJ, and Treasury responded that they did not have comments on the report. HHS did not respond to our request for comments. In its written comments, OPM generally agreed with the information in the report but did not agree with our recommendation. In disagreeing with our recommendation to consider not certifying agencies with modal ratings of "outstanding," OPM expressed concerns that imposing such a criterion would lead to arbitrary manipulation of the final ratings rather than an appropriate comparison of performance to standards. OPM asserted that this situation would be ripe for forced distribution of the ratings, which is explicitly prohibited by regulation. OPM also stated that the more appropriate action is to continue emphasizing the importance of setting appropriate, rigorous performance requirements and standards that logically support meaningful distinctions in performance. 
As recognized in our report, OPM's regulations contemplate that it is possible to apply standards that make meaningful performance distinctions and to use a range of ratings while avoiding forced distributions. As we also note, since our 2008 report on SES performance management systems—continuing through the career SES performance ratings for fiscal year 2013—questions persist about the extent to which meaningful distinctions based on relative SES performance are being made. Although OPM has emphasized that an "outstanding" rating represents a level of rare, high-quality performance, it appears from examining fiscal year 2013 SES ratings data that some agencies are not appropriately applying these performance standards to their SES ratings. This undercuts one of the primary purposes for establishing the new SES appraisal system: increasing equity in ratings across agencies and strengthening their link to compensation. Certifying agencies that are not adhering to the agreed-upon performance standards provides little incentive to those agencies that are adhering to the standards and could lead to ratings that are even more skewed toward "outstanding." As recently as 2008, OPM agreed on the importance of communicating to agencies the value of using a range of rating levels and transforming their cultures into ones in which a "fully successful" rating is valued and rewarded. OPM also did not support the second part of our recommendation, regarding three suggestions for increasing transparency for those agencies that are certified with a modal rating of "outstanding." Although we suggested that OPM report high rating justifications to Congress through its Annual Performance Report, we understand that this may not be the most appropriate vehicle to use; another avenue of reporting to Congress would certainly be acceptable, and we have adjusted the text accordingly. 
In addition, by suggesting that the Chief Human Capital Officers Council have input on agencies' justifications for high ratings, we were in no way suggesting that this Council role would impact OPM's ultimate authority over the regulation and oversight of the SES performance appraisal system (including certification of agencies' systems). We maintain, however, that—as an alternative action to more direct enforcement of the performance standards—transparency regarding OPM's approval of justifications for a modal rating of "outstanding" could shed light on whether an individual agency's high ratings seem justified. Given the recent data on SES performance ratings and awards, we remain concerned that meaningful distinctions in relative SES performance are not being made in a uniform fashion. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Defense, Energy, Health and Human Services, and the Treasury, to the Assistant Attorney General for Administration at the Department of Justice, and to the Director of the U.S. Office of Personnel Management, as well as to the appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members making key contributions to this report are listed in appendix V. This report examines the distribution of performance awards to career Senior Executive Service (SES) employees in executive branch agencies. 
The objectives of this report were to (1) describe key characteristics of the awards, such as rating and award distributions, award amounts, and the percentage of executives receiving awards, from fiscal years 2010 through 2013, and (2) describe and assess the extent to which selected departments' SES performance appraisal systems factored in organizational and individual performance and made meaningful distinctions in their fiscal year 2013 performance awards. For an additional perspective, the second objective provides an in-depth view of five selected departments' SES rating and performance award processes for the last fiscal year of ratings and award data. For this report, we reviewed applicable legislation and regulations, as well as Office of Personnel Management (OPM) and Office of Management and Budget (OMB) guidance and government-wide reports, such as OPM's annual Report on Senior Executive Pay and Performance for fiscal years 2010 through 2013. We also interviewed officials at OPM and reviewed applicable reports from non-governmental organizations, such as the Senior Executive Association. We reviewed data from OPM on agency performance rating levels, as well as the number and amount of SES performance awards. We defined our universe of analysis as career senior executives who received ratings. We also excluded SES employees (where identifiable) from agency Inspector General Offices because their inclusion in the data was inconsistent. We reviewed OPM's SES data for reasonableness and for the presence of any obvious or potential errors in accuracy and completeness, and OPM officials confirmed the correctness of the data. On the basis of these procedures, we believe the data are sufficiently reliable for use in the analyses presented in this report. 
To address our first objective, we reviewed data from OPM on the amount of SES performance awards given to career SES within the 24 Chief Financial Officers (CFO) Act agencies and other variables from fiscal year 2010 (the year prior to the limitation of performance award policies) through fiscal year 2013. We analyzed aggregate SES basic pay and performance ratings as provided by OPM for fiscal years 2010 through 2013. In calculating the percentage of eligible senior executives who received performance awards, we excluded executives who did not receive a performance rating. To address the second objective, we selected five case study departments from the CFO Act agencies based on several criteria. Using 2012 Enterprise Human Resources Integration data, we identified departments that had the largest numbers of SES employees and varying performance award distributions, including departments in which both large and small percentages of SES employees received an award, as well as departments with both small and large ranges of award amounts, in both dollars and as a percentage of salary. The selected case studies were the Departments of Defense (DOD), Energy, Health and Human Services (HHS), Justice (DOJ), and the Treasury. We reviewed these departments' SES performance appraisal systems and their most recent certification documentation submitted to OPM. We also examined whether individual SES performance metrics were linked to agency performance and whether performance awards were tied to outcomes by looking at examples of individual SES performance appraisals. We did not verify the validity of the measures used or the performance of the senior executives. We also interviewed agency officials from the selected departments with responsibility for their department's SES program, as well as selected members or representatives of the Performance Review Boards charged with ensuring consistency, stability, and objectivity in SES performance appraisals. 
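The calculations described above—excluding unrated executives, then computing the share of eligible executives who received awards and award size as a percentage of salary by rating level—can be sketched as follows. The record layout, field names, and values are hypothetical illustrations, not OPM's actual data format.

```python
from collections import defaultdict

# Hypothetical career-SES records; field names and values are illustrative only.
records = [
    {"rating": 5, "salary": 165000, "award": 12000},
    {"rating": 5, "salary": 170000, "award": 0},
    {"rating": 4, "salary": 160000, "award": 9600},
    {"rating": 3, "salary": 155000, "award": 0},
    {"rating": None, "salary": 150000, "award": 0},  # unrated: excluded
]

# Exclude executives who did not receive a performance rating.
rated = [r for r in records if r["rating"] is not None]

# Percentage of eligible (rated) executives who received an award.
pct_awarded = 100 * sum(1 for r in rated if r["award"] > 0) / len(rated)

# Average award as a percentage of salary, by rating level (awardees only).
by_rating = defaultdict(list)
for r in rated:
    if r["award"] > 0:
        by_rating[r["rating"]].append(100 * r["award"] / r["salary"])

avg_pct_by_rating = {lvl: sum(v) / len(v) for lvl, v in by_rating.items()}
print(pct_awarded)           # 50.0
print(avg_pct_by_rating)
```

This is only a sketch of the kind of aggregation described in the methodology; the actual analysis was performed on OPM-provided data with additional variables.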
Additionally, we analyzed OPM data for each case study department to identify the percentage of ratings distributed across each rating category in fiscal year 2013. We compared results of three of the case study departments that were also reviewed in 2007, using data from a previous report. We also analyzed the data to determine the amount of performance awards—as a percentage of salary—given to SES employees for each performance rating level. We conducted this performance audit from June 2014 to December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

OPM Report on the (Agency Name) Senior Executive Service (SES) Performance Appraisal System Certification
Date of Report: MM/DD/YYYY
OPM (Enter Decision Here) certification of the (Agency Name) SES performance appraisal system. The table below indicates the certification criteria fully met, minimally met, or not met during OPM's review of the system.

Description of "meets" criteria:
- The agency is using the basic SES appraisal system description and it has been approved by OPM. (This means the system includes all the required system language, and the performance plans include all the required language for Consultation, Employee Perspective, Customer Perspective, and Accountability.)
- All performance plans include specific organizational goals in the Strategic Alignment cell for each performance requirement under the Results Driven element in Part 5 of the appraisal form.
- Each performance requirement under the Results Driven element for each performance plan contains adequate measurable results. 
If the agency also includes a list of activities in the plans, the plan clearly identifies the measurable result(s) and denotes that it is the results that are to be rated for the Results Driven element.

Description of "meets" criteria:
- The agency has provided a copy of its memo issuing guidelines to executives, rating officials, and PRB members that includes the results of the organizational assessment and guidelines for how to use the results when determining ratings, pay, and awards; OR, for small agencies, a description of how the guidelines and results of the organizational assessment were given to rating officials and PRB members and the content of the guidelines (e.g., if communicated by email, a copy of the email is included).
- The agency has provided a description of training or communications given that should include a briefing on the agency's SES performance management system, OR the agency provides evidence it has conducted the training but cannot verify how many executives attended, OR the system has not yet been implemented and the agency provides training plans; AND communication of the average rating, pay, and awards given the previous year.
- The SES rating distribution indicates the agency is clearly making distinctions in performance (i.e., the modal rating is below "outstanding") AND the distribution appears to reflect organizational performance as explained by the agency; OR the modal rating is "outstanding" but the agency gave a full, acceptable justification; AND the percentage of SES members not rated is less than 5 percent. 
Differentiation in Pay Based on Performance
a) The agency's correlation coefficient of the rating and performance compensation (that is, pay adjustments and awards) is 0.500 or more, OR the pay and awards data show the agency makes distinctions in pay;
b) AND the average performance compensation is higher for executives rated Outstanding than for those rated Exceeds, and Exceeds is higher than Fully Successful;
c) AND the data does not include any violations of pay and awards limits.

Description of "meets" criteria:
Pay Policy - The agency provides a written, official pay policy and the policy describes clear differentiations in performance compensation (that is, pay adjustments and awards) based on the annual summary rating.

Office of Personnel Management (OPM), Small Business Administration (SBA), and Social Security Administration (SSA).

Robert Goldenkoff, (202) 512-6806 or [email protected].

In addition to the contact named above, Thomas Gilbert, Assistant Director, and Judith Kordahl, Analyst-in-Charge, supervised the development of this report. Charles Culverwell and Mary Diop made significant contributions to all aspects of this report. Other important contributors included Sara Daleski, Karin Fangman, Donna Miller, and Rebecca Shea.
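The pay-differentiation criterion described in the certification criteria above ties certification to, among other things, a correlation of 0.500 or more between ratings and performance compensation, with average compensation rising by rating level. The following sketch illustrates how such a check could be computed; the data are hypothetical, and the use of a Pearson correlation is an assumption for illustration, since OPM's exact computation is not specified here.

```python
import math

# Hypothetical (rating level, performance compensation as % of salary) pairs.
pairs = [(5, 9.0), (5, 8.0), (4, 6.5), (4, 6.0), (3, 3.0)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ratings = [p[0] for p in pairs]
comp = [p[1] for p in pairs]
r = pearson(ratings, comp)

# Criterion (a): correlation coefficient of 0.500 or more.
meets_a = r >= 0.500

# Criterion (b): average compensation rises with rating level.
def avg_for(level):
    vals = [c for lvl, c in pairs if lvl == level]
    return sum(vals) / len(vals)

meets_b = avg_for(5) > avg_for(4) > avg_for(3)
print(round(r, 3), meets_a, meets_b)
```

With the sample data above, the correlation is well above 0.500 and average compensation increases with each rating level, so both illustrative checks pass.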
The career SES, a cadre of senior leaders, has a pay-for-performance compensation system, which includes annual cash performance awards. OPM has a key leadership and oversight role in the implementation of the SES pay-for-performance system, including the certification of SES performance appraisal systems. GAO was asked to examine SES performance awards. Specifically, this report (1) describes key characteristics of executive branch agency ratings and performance awards for fiscal years 2010 through 2013, and (2) provides a more in-depth look at five departments' fiscal year 2013 ratings and awards. GAO analyzed data from OPM on the 24 CFO Act agencies for fiscal years 2010 through 2013. GAO also selected five case study departments—Defense, Energy, Health and Human Services, Justice, and Treasury—and examined how they factored organizational and individual performance into their fiscal year 2013 SES performance awards. In 2012, the Office of Personnel Management (OPM) facilitated development of a new Senior Executive Service (SES) performance appraisal system with a more uniform framework to communicate expectations and evaluate the performance of executive branch agency SES members. The new system is expected to promote consistency, clarity, and transferability of performance standards and ratings across agencies. To obtain SES appraisal system certification and thereby access to higher levels of pay, agencies are required to make meaningful distinctions based on the relative performance of their executives as measured through the performance and pay criteria. Further, if the modal rating is at the highest level of "outstanding," agencies must provide an acceptable justification to OPM for the high level. (The modal rating is the rating level assigned most frequently among the actual ratings.) 
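The modal-rating test described above can be sketched as follows; the rating labels and data are hypothetical, and this is only an illustration of the definition, not OPM's actual certification tooling.

```python
from collections import Counter

# Hypothetical annual summary ratings for an agency's career SES.
ratings = ["outstanding", "exceeds", "outstanding", "fully successful",
           "exceeds", "outstanding"]

# The modal rating is the rating level assigned most frequently.
counts = Counter(ratings)
modal_rating, modal_count = counts.most_common(1)[0]

# Per OPM guidelines, a modal rating of "outstanding" requires a full,
# acceptable justification for the appraisal system to be certified.
needs_justification = modal_rating == "outstanding"
print(modal_rating, needs_justification)  # outstanding True
```

Here "outstanding" is assigned three of six times, so it is the modal rating and, under the guideline described above, the agency would need to justify it to OPM.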
More than 85 percent of career Chief Financial Officers (CFO) Act agency SES were rated in the top two of five categories for fiscal years 2010 through 2013, and career SES received approximately $42 million in awards for fiscal year 2013. The average award amount was higher for executives with higher ratings. In a closer examination of five departments for fiscal year 2013, GAO found that they used or planned to use OPM's new SES performance system. The departments also had performance plans with links between individual SES responsibilities and organizational goals. Similar to the government-wide results, the departments rated SES primarily in the top two categories. However, four of the five departments awarded some SES with lower ratings performance awards that were the same as or higher than those awarded to SES with higher ratings. Department officials gave several reasons for giving lower-rated SES higher performance awards, including that they considered relative contributions and that the awards were consistent within subcomponents of the department. OPM plans to convene a cross-agency working group in 2015 to revisit the SES certification process. It will be important for OPM and the working group to consider whether, given the continued high SES performance ratings, the new SES appraisal system is contributing to making meaningful distinctions in performance ratings and awards without creating forced distributions, and if not, what refinements are needed. GAO recommends that the Director of OPM consider various refinements to better ensure that the SES performance appraisal system certification guidelines promote making meaningful distinctions in performance. Options could include not certifying appraisal systems where the modal rating is "outstanding." OPM disagreed with the recommendation, stating that, among other things, it could result in forced distributions of ratings. GAO maintains that additional action should be considered to ensure equity in ratings and performance awards across departments.
Advances in the use of IT and the Internet are continuing to change the way that federal agencies communicate, use, and disseminate information; deliver services; and conduct business. For example, electronic government (e-government) has the potential to help build better relationships between government and the public by facilitating timely and efficient interaction with citizens. To help agencies more effectively manage IT, the Congress has established a statutory framework of requirements and roles and responsibilities relating to information and technology management. In particular, the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996 require agency heads, acting through agency CIOs, to, among other things:

- better link their IT planning and investment decisions to program missions;
- develop and maintain a strategic information resources management (IRM) plan that describes how IRM activities help to accomplish agency missions;
- develop and maintain an ongoing process to establish goals for improving IRM's contribution to program productivity, efficiency, and effectiveness; methods for measuring progress toward these goals; and clear roles and responsibilities for achieving these goals;
- develop and implement a sound IT architecture;
- implement and enforce IT management policies, procedures, standards, and guidelines;
- establish policies and procedures for ensuring that IT systems provide reliable, consistent, and timely financial or program performance data; and
- implement and enforce applicable policies, procedures, standards, and guidelines on privacy, security, disclosure, and information sharing.

Nevertheless, agencies face significant challenges in effectively planning for and managing their IT. Such challenges can be overcome through the use of a systematic and robust management approach that addresses critical elements such as IT strategic planning and investment management. 
Federal agencies did not always have in place important practices associated with IT laws, policies, and guidance related to strategic planning/performance measurement and investment management (see fig. 1). A well-defined strategic planning process helps to ensure that an agency's IT goals are aligned with its strategic goals. Moreover, establishing performance measures and monitoring actual-versus-expected performance using those measures can help to determine whether IT is making a difference in improving performance. Finally, an IT investment management process is an integrated approach to managing investments that provides for the continuous identification, selection, control, life-cycle management, and evaluation of IT investments. Agency IT officials could not always identify why practices were not in place, but in those instances in which reasons were identified, a variety of explanations were provided; for example, that the CIO position had been vacant, that not including a requirement in the agency's guidance was an oversight, or that the process was being revised. Nevertheless, these practices are based on law, executive orders, Office of Management and Budget (OMB) policies, and our guidance, and are also important ingredients in ensuring effective strategic planning, performance measurement, and investment management that, in turn, make it more likely that the billions of dollars in government IT investments will be wisely spent. Critical aspects of the strategic planning/performance measurement area include documenting the agency's IT strategic planning processes, developing IRM plans, establishing goals, and measuring performance to evaluate whether goals are being met. Although the agencies often had these practices, or elements of these practices, in place, additional work remains, as demonstrated by the following examples: Strategic planning process. 
Strategic planning defines what an organization seeks to accomplish and identifies the strategies it will use to achieve desired results. A defined strategic planning process allows an agency to clearly articulate its strategic direction and to establish linkages among planning elements such as goals, objectives, and strategies. About half of the agencies had fully documented their strategic planning processes. Such processes are an essential foundation for ensuring that IT resources are effectively managed. Strategic IRM plans. The Paperwork Reduction Act requires that agencies indicate in strategic IRM plans how they are applying information resources to improve the productivity, efficiency, and effectiveness of government programs. An important element of a strategic plan is that it presents an integrated system of high-level decisions that are reached through a formal, visible process. The Paperwork Reduction Act also requires agencies to develop IRM plans in accordance with OMB's guidance. However, OMB does not provide cohesive guidance on the specific contents of IRM plans. Accordingly, although agencies generally provided OMB with a variety of planning documents to meet its requirement that they submit an IRM plan, these plans were generally limited to IT strategic or e-government issues and did not address other elements of IRM, as defined by the Paperwork Reduction Act. In particular, these plans generally include individual IT projects and initiatives, security, and enterprise architecture elements but do not often address other information functions—such as information collection, records management, and privacy—or the coordinated management of all information functions. OMB IT staff agreed that OMB has not set forth guidance on the contents of agency IRM plans in a single place, stating that its focus has been on looking at agencies' cumulative results and not on planning documents. 
These staff also noted that agencies account for their IRM activities through multiple documents (e.g., Information Collection Budgets and Government Paperwork Elimination Act plans). Nevertheless, half the agencies indicated a need for OMB to provide additional guidance on the development and content of IRM plans. Accordingly, we recommended that OMB develop and disseminate to agencies guidance on developing IRM plans. IT goals. The Paperwork Reduction Act and the Clinger-Cohen Act require agencies to establish goals that address how IT contributes to program productivity, efficiency, effectiveness, and service delivery to the public. We have previously reported that leading organizations define specific goals, objectives, and measures, use a diversity of measure types, and describe how IT outputs and outcomes impact operational customer and agency program delivery requirements. The agencies generally had the types of goals outlined in the Paperwork Reduction Act and the Clinger-Cohen Act. However, five agencies did not have one or more of the goals required by the Paperwork Reduction Act and the Clinger-Cohen Act. It is important that agencies specify clear goals and objectives to set the focus and direction for IT performance. IT performance measures. The Paperwork Reduction Act, the Clinger-Cohen Act, and an executive order require agencies to establish a variety of IT performance measures—such as those related to how IT contributes to program productivity, efficiency, and effectiveness—and to monitor the actual-versus-expected performance using those measures. Although the agencies largely had one or more of the required performance measures in place, these measures were not always linked to the agencies' enterprisewide IT goals. Moreover, few agencies monitored actual-versus-expected performance for all of their enterprisewide IT goals. 
Specifically, although some agencies tracked actual-versus-expected outcomes for the IT performance measures in their performance plans or accountability reports and/or for specific IT projects, they generally did not track the performance measures that were specified in their IRM plans. As we have previously reported, an effective IT performance management system offers a variety of benefits, including serving as an early warning indicator of problems and the effectiveness of corrective actions; providing input to resource allocation and planning; and providing periodic feedback to employees, customers, stakeholders, and the general public about the quality, quantity, cost, and timeliness of products and services. Moreover, without enterprisewide performance measures that are tracked against actual results, agencies lack critical information about whether their overall IT activities are achieving expected goals. Benchmarking. The Clinger-Cohen Act requires agencies to quantitatively benchmark agency process performance against public- and private-sector organizations, where comparable processes and organizations exist. Benchmarking is used because there may be external organizations that have more innovative or more efficient processes than the agency's own. Seven agencies in our review had mechanisms in place—such as policies and strategies—related to benchmarking their IT processes. In general, however, agencies' benchmarking decisions were ad hoc. Few agencies had developed a mechanism to identify comparable external private- or public-sector organizations and processes and/or had policies related to benchmarking, although all but 10 of the agencies provided examples of benchmarking that they had performed. Our previous study of IT performance measurement at leading organizations found that they had spent considerable time and effort comparing their performance information with that of other organizations. 
Agency IT officials could not always identify why strategic planning/performance measurement practices were not in place; in those instances in which reasons were identified, a variety of explanations were provided. For example, reasons cited by agency IT officials included that they lacked support from agency leadership, that the agency had only recently begun developing IRM plans and recognized that the plan needed further refinement, that the process was being revised, and that requirements were evolving. Without strong strategic management practices, it is less likely that IT is being used to maximize improvement in mission performance. Moreover, without enterprisewide performance measures that are tracked against actual results, agencies lack critical information about whether their overall IT activities, at a governmentwide cost of billions of dollars annually, are achieving expected goals. Critical aspects of IT investment management include developing well-supported proposals, establishing investment management boards, and selecting and controlling IT investments. The agencies' use of practices associated with these aspects of investment management was wide-ranging, as follows: IT investment proposals. Various legislative requirements, an executive order, and OMB policies provide minimum standards that govern agencies' consideration of IT investments. In addition, we have issued guidance to agencies for selecting, controlling, and evaluating IT investments. Such processes help ensure, for example, that investments are cost-beneficial and meet mission needs and that the most appropriate development or acquisition approach is chosen. The agencies in our review had mixed results when evaluated against these various criteria. For example, the agencies almost always required that proposed investments demonstrate that they support the agency's business needs, are cost-beneficial, address security issues, and consider alternatives. 
However, they were not as likely to have fully in place the Clinger-Cohen Act requirement that agencies follow, to the maximum extent practicable, a modular, or incremental, approach when investing in IT projects. Incremental investment helps to mitigate the risks inherent in large IT acquisitions/developments by breaking apart a single large project into smaller, independently useful components with known and defined relationships and dependencies. Investment management boards. Our investment management guide states that establishing one or more IT investment boards is a key component of the investment management process. Such executive-level boards, made up of business-unit executives, concentrate management's attention on assessing and managing risks and regulating the trade-offs between continuing to fund existing operations and developing new performance capabilities. Almost all of the agencies in our review had one or more enterprise-level investment management boards. However, the investment management boards for six agencies were not involved, or the agency did not document the boards' involvement, in the control phase. Maintaining oversight responsibility with the same body that selected an investment is crucial to fostering a culture of accountability for the investment's ongoing success. Selection of IT investments. During the selection phase of an IT investment management process, the organization (1) selects projects that will best support its mission needs and (2) identifies and analyzes each project's risks and returns before committing significant funds. To achieve desired results, it is important that agencies have a selection process that, for example, uses selection criteria to choose the IT investments that best support the organization's mission and that prioritizes proposals. Twenty-two agencies used selection criteria in choosing their IT investments. 
In addition, about half the agencies used scoring models to help choose their investments. Control over IT investments. During the control phase of the IT investment management process, the organization ensures that, as projects develop and as funds are spent, the project is continuing to meet mission needs at the expected levels of cost and risk. If the project is not meeting expectations or if problems have arisen, steps are quickly taken to address the deficiencies. In general, the agencies were weaker in the practices pertaining to the control phase of the investment management process than in those pertaining to the selection phase, and no agency had the practices associated with the control phase fully in place. In particular, the agencies did not always have important mechanisms in place for agencywide investment management boards to effectively control investments, including decision-making rules for project oversight, early warning mechanisms, and/or requirements that corrective actions for underperforming projects be agreed upon and tracked. Executive-level oversight of project-level management activities provides an organization with increased assurance that each investment will achieve the desired cost, benefit, and schedule results. Among the variety of reasons that agencies cited for not having IT investment management practices fully in place were that the CIO position had been vacant, that the omission of a requirement from the IT investment management guide was an oversight, and that the process was being revised. However, in some cases agencies could not identify why certain practices were not in place. It is important that agencies address these shortcomings, because only by effectively and efficiently managing their IT resources through a robust investment management process can they gain opportunities to make better allocation decisions among many investment alternatives and to further leverage their IT investments. 
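The scoring models mentioned above typically compute a weighted sum of ratings against an agency's selection criteria and rank competing proposals for the investment board. The sketch below is purely illustrative; the criteria, weights, and project names are invented for the example and are not drawn from our review.

```python
# Hypothetical weighted scoring model for ranking IT investment proposals.
# Each criterion is rated 1-5; "risk" is rated so that a higher number
# means lower risk. Weights must sum to 1.0.
WEIGHTS = {"mission_support": 0.40, "expected_return": 0.30,
           "risk": 0.20, "security": 0.10}

def score(proposal: dict) -> float:
    """Weighted sum of the proposal's criterion ratings."""
    return sum(WEIGHTS[c] * proposal[c] for c in WEIGHTS)

proposals = {
    "case-tracking system": {"mission_support": 5, "expected_return": 4,
                             "risk": 3, "security": 4},
    "data-center refresh":  {"mission_support": 3, "expected_return": 3,
                             "risk": 5, "security": 5},
}

# Rank proposals from highest to lowest score for the investment board.
for name, p in sorted(proposals.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(p):.2f}")
```

A model of this kind makes the selection rationale explicit and repeatable, which is the property that distinguishes criteria-based selection from the ad hoc decisions described elsewhere in this statement.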
To help agencies improve their IT strategic planning/performance measurement and investment management, we have made numerous recommendations to agencies and issued guidance. Specifically, in our January 2004 report we made recommendations to the 26 agencies in our review regarding practices that were not fully in place. These recommendations addressed issues such as IT strategic planning; establishing and linking enterprisewide goals and performance measures and tracking progress against these measures; and selecting, controlling, and evaluating investments. By implementing these recommendations, agencies can better ensure that they are using strategic planning, performance measurement, and investment management practices that are consistent with IT legislation, executive orders, OMB policies, and our guidance. Another mechanism that agencies can use to improve their IT management is to apply the management frameworks and guides that we have issued, which are based on our research into IT management best practices and our evaluations of agency IT management performance. In this vein, today we are releasing the latest version of our ITIM framework. This framework identifies and organizes critical processes for selecting, controlling, and evaluating IT investments into a framework of increasingly mature stages (see fig. 2). First issued as an exposure draft in May 2000, this new version of the ITIM incorporates lessons learned from our use of the framework in agency reviews, as well as lessons conveyed to us by users of the framework. In addition, in order to validate the appropriateness of our changes and to gain the advantage of their experience, we had the new version reviewed by several outside experts who are familiar with the ITIM exposure draft and with investment management in a broad array of public and private organizations. ITIM can be used to analyze an organization's investment management processes and to determine its level of maturity. 
The framework is useful to many federal agencies because it provides: (1) a rigorous, standardized tool for internal and external evaluations of an agency’s IT investment management process; (2) a consistent and understandable mechanism for reporting the results of these assessments to agency executives, Congress, and other interested parties; and (3) a road map that agencies can use for improving their investment management processes. Regarding the first two points, we and selected agency Inspectors General have used the ITIM to evaluate and report on the investment management processes of several agencies. Concerning the third point, a number of agencies have recognized the usefulness of the ITIM framework and have used it to develop and enhance their investment management strategies. For example, one agency uses the framework to periodically review its IT investment management capabilities and has developed an action plan to move through the stages of maturity. In summary, our January 2004 report indicates that the federal government can significantly improve its IT strategic planning, performance measurement, and investment management. Such improvement would better ensure that agencies are being responsible stewards of the billions of dollars for IT with which they have been entrusted, by helping them to invest these monies wisely. This can be accomplished, in part, through the expeditious implementation of our recommendations and the adoption of best practices, which we have incorporated into our IT management frameworks and guides such as the ITIM. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. If you have any questions regarding this statement, please contact me at (202) 512-9286 or by e-mail at [email protected]. 
Specific questions related to our January 2004 report may also be directed to Linda Lambert at (202) 512-9556 or via e-mail at [email protected] or Mark Shaw at (202) 512-6251 or via e-mail at [email protected]. Questions related to the ITIM framework can be directed to Lester Diamond at (202) 512-7957 or via e-mail at [email protected]. Table 1 describes the 12 IT strategic planning/performance measurement and the 18 IT investment management practices that we used in our January 2004 report on the government's performance in these areas. We identified these 30 practices after reviewing major legislative requirements (e.g., the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996), executive orders, Office of Management and Budget policies, and our own guidance.
The federal government spends billions of dollars annually on information technology (IT) investments that are critical to the effective implementation of major government programs. To help agencies effectively manage their substantial IT investments, the Congress has established a statutory framework of requirements, roles, and responsibilities relating to information and technology management that addresses, for example, (1) IT strategic planning/performance measurement (which defines what an organization seeks to accomplish, identifies the strategies it will use to achieve desired results, and then determines how well it is succeeding in reaching results-oriented goals and achieving objectives) and (2) IT investment management (which involves selecting, controlling, and evaluating investments). GAO was asked to summarize its January 2004 report on IT strategic planning/performance measurement and investment management (Information Technology Management: Governmentwide Strategic Planning, Performance Measurement, and Investment Management Can Be Further Improved, GAO-04-49, January 12, 2004) and to discuss how agencies can improve their performance in these areas. GAO recently reported that the use of important IT strategic planning/performance measurement and investment management practices by 26 major federal agencies was mixed. For example, agencies generally had IT strategic plans and goals, but these goals were not always linked to specific performance measures that were tracked. Agencies also largely had IT investment management boards, but no agency had the practices associated with the oversight of IT investments fully in place. Although they could not always provide an explanation, agencies cited a variety of reasons for not having practices fully in place, including that the chief information officer position had been vacant and that the process was being revised. 
By improving their IT strategic planning, performance measurement, and investment management, agencies can better ensure that they are responsible stewards of the billions of dollars in IT funding with which they have been entrusted, by investing these monies wisely. To help agencies improve in these areas, GAO has made numerous recommendations to agencies and issued guidance. For example, in the January 2004 report, GAO made recommendations to the 26 agencies regarding practices that were not fully in place. In addition, today GAO is releasing the latest version of its Information Technology Investment Management (ITIM) framework, which identifies critical processes for selecting, controlling, and evaluating IT investments and organizes them into a framework of increasingly mature stages, thereby providing agencies a road map for improving their IT investment management processes in a systematic and organized manner.
The Fair Housing Act, title VIII of the Civil Rights Act of 1968, prohibited discrimination in the sale, rental, and financing of housing based on race, color, religion, or national origin. The act allowed the Department of Housing and Urban Development (HUD) to investigate and conciliate complaints of housing discrimination and authorized the Department of Justice to file suits in cases of a pattern or practice of discrimination or in cases of public importance. HUD was not given any authority to administratively remedy acts of discrimination against an individual, however. The Fair Housing Act also required HUD to refer housing discrimination complaints to state and local agencies where the state or local law provided rights and remedies substantially equivalent to those provided by the federal law. In 1980, HUD established the Fair Housing Assistance Program to provide financial assistance to state and local agencies to encourage them to assume a greater share of the enforcement of their fair housing laws. The Fair Housing Initiatives Program (FHIP), administered by HUD, is designed to provide a coordinated and comprehensive approach to fair housing activities in order to strengthen enforcement of the Fair Housing Act. During the 1986 Senate hearings on its proposal to establish the FHIP, HUD testified that enforcement activity, particularly testing, by private nonprofit and other private entities would be the principal focus and motivation of the program. In February 1988, the program was created as a 2-year demonstration program by the Housing and Community Development Act of 1987. About 7 months later, the Fair Housing Amendments Act of 1988 was signed into law, and it became effective in March 1989. The 1988 act attempted to remedy the enforcement shortcomings of the original legislation. 
It significantly strengthened federal fair housing enforcement by, among other things, establishing an administrative enforcement mechanism that allows HUD to pursue cases filed by individuals before an administrative law judge for disposition and by providing for civil penalties. In November 1990, FHIP was extended for 2 additional years, and with the enactment of the Housing and Community Development Act of 1992, it became a permanent program, effective fiscal year 1993. The 1992 act also expanded the program to reflect significant legislative changes in fair housing and lending that had taken place after the program's creation in 1988. It authorized FHIP to implement testing programs whenever there was a reasonable basis for doing so; establish new fair housing organizations or expand the capacity of existing ones; conduct special projects to, for example, respond to new or sophisticated forms of housing discrimination; undertake larger, long-term enforcement activities through multiyear funding agreements; and pay for litigation. For fiscal years 1989 through 1997, the Congress appropriated $113 million for FHIP. The permanent program grew from an appropriation of $10.6 million in fiscal year 1993 to $26 million in fiscal year 1995 (see fig. 1). Funds for the program are distributed on the basis of competitive grants through four program initiatives. These initiatives or funding categories generally define who is eligible to receive funds and/or the focus of activities to be funded. 
The initiatives are (1) the private enforcement initiative—funding for private nonprofit organizations to undertake testing and other enforcement-related activities; (2) the fair housing organizations initiative—funding for private nonprofit organizations to create new fair housing enforcement organizations in those areas of the country that were unserved or underserved by such organizations or expand the capacity of existing private nonprofit fair housing organizations; (3) the education and outreach initiative—funding for private and public entities to educate the general public and housing industry groups about fair housing rights and responsibilities; and (4) the administrative enforcement initiative—funding for state and local government agencies that administer fair housing laws certified by HUD as substantially equivalent to federal law to help such agencies broaden their range of enforcement and compliance activities. Private organizations that receive grants generally are nonprofit entities and have experience in investigating complaints, testing for fair housing violations, and enforcing legal claims or outcomes. The program provides considerable flexibility in the types of activities that can be funded under each initiative. Eligible activities include education and outreach programs, testing based on complaints and other reasonable bases, the recruitment of testers and attorneys, special projects to respond to new or sophisticated forms of discrimination, litigation expenses, and the creation of new fair housing organizations in areas of the country underserved by fair housing enforcement organizations. The program is restricted from funding two types of activities: (1) settlements, judgments, or court orders in any litigation action involving HUD or HUD-funded housing providers and (2) expenses associated with litigation against the federal government. 
Appendix I provides additional details on the types of activities eligible for funding under the program. FHIP is an integral part of HUD's fair housing enforcement and education efforts, which are concentrated within the Office of Fair Housing and Equal Opportunity. In addition to FHIP, this office is responsible for the oversight of the Fair Housing Assistance Program, investigation and processing of fair housing complaints, and referral of complaints to Justice when appropriate. FHIP links and extends fair housing enforcement and education and outreach activities to many state and local governments and communities across the country. The program makes it possible for HUD to look comprehensively at fair housing problems and to work with the whole spectrum of agencies that are involved in fighting housing discrimination. Taken together, FHIP and the Fair Housing Assistance Program form a national fair housing strategy through greater cooperation between the private and public sectors. In fiscal year 1996, FHIP accounted for about 22 percent of the Office of Fair Housing and Equal Opportunity's $76.3 million budget (see fig. 2). HUD uses discretion in deciding how FHIP funds are allocated among the four program initiatives. Reflecting the program's principal focus, HUD's budget requests to the Congress set forth how it plans to divide the total amount of dollars requested for FHIP among the four initiatives. Notices of funding availability in the Federal Register indicate the dollar amounts HUD makes available for competition under each program initiative. According to the Acting FHIP Division Director, the Assistant Secretary for Fair Housing and Equal Opportunity determines how funds are allocated on the basis of legislation, administration and agency priorities, and input from the housing industry and fair housing groups. HUD's allocations for FHIP have consistently reflected that enforcement activities are the principal focus of the program. 
In annual budget justifications to the Congress, HUD discusses its emphasis for the year and indicates how much of FHIP’s total budget request it plans to allocate to each FHIP initiative. Table II.1 in appendix II shows by fiscal year the dollar amounts HUD anticipated it would allocate to each initiative. The Congress has appropriated amounts equal to or greater than the amounts HUD requested each fiscal year until 1996. In accordance with its budget plans, HUD has made the largest portion of FHIP dollars available for the private enforcement initiative. In 2 fiscal years (1993 and 1994) in which HUD received appropriated amounts higher than its budget requests, the additional dollars available resulted in the private enforcement initiative’s receiving significantly more money than initially planned. Overall, HUD made about 48 percent of FHIP funds available for the private enforcement initiative (see table II.2). The relationship between HUD’s proposed allocations for each initiative and the funds made available indicates that the dollar amounts were basically the same in 4 of the 8 years (fiscal years 1989 through 1991 and 1995). For the remaining years, allocations varied considerably from HUD’s initial budget plans primarily because appropriated amounts for FHIP overall were either higher or lower than the budget requests. The variations were as follows: In fiscal year 1992, the amount appropriated for FHIP was the same as the budget request. The private enforcement initiative’s allocation was $1.3 million less than HUD initially anticipated; the administrative enforcement initiative’s was $0.9 million more, and the education and outreach initiative’s was $0.4 million more. In fiscal year 1993, FHIP’s appropriation was $3 million higher than the budget request. The private enforcement initiative’s allocation was $1 million more; the education and outreach initiative’s, $0.5 million more. 
The fair housing organizations initiative, which was authorized in late 1992, received a $2.6 million allocation. The administrative enforcement initiative’s allocation was $1.1 million less than anticipated, however. In fiscal year 1994, FHIP’s appropriation was $3.6 million higher than the budget request. Of this, HUD allocated $3 million to the private enforcement initiative and $0.6 million to the fair housing organizations initiative. In fiscal year 1996, FHIP’s appropriation was 43 percent lower than the budget request. While the budget request included funds for all initiatives, owing to the reduced appropriation, HUD did not allocate any funds to the administrative enforcement initiative. Allocations to the other three initiatives ranged from 30 to 120 percent of the amount initially requested. From fiscal year 1989 through fiscal year 1996, HUD received 2,090 applications for FHIP grants and approved about one-quarter of these applications for funding. Historically, the demand for education and outreach grants has exceeded that for the other three initiatives each fiscal year except for 1996. For the 3 most recent years (fiscal years 1994 through 1996), the greatest demand, as measured by the amounts requested on applications, has been for the private enforcement initiative. In fiscal year 1996, the number of applications for grants decreased from 300 in each of the 3 previous fiscal years to 91. The most significant decrease was for education and outreach grants, dropping to 19 applications from over 200 the prior year (see table II.3). HUD told us that the significant drop in education and outreach applications is primarily attributable to language in the 1996 appropriations law requiring applicants to meet the definition of a qualified fair housing enforcement organization in order to be eligible for FHIP funds. 
According to FHIP legislation, a qualified fair housing enforcement organization is a private nonprofit organization that has at least 2 years of experience in complaint intake, complaint investigations, testing, and enforcement of legal claims. HUD told us that the legislative requirement precluded many previously eligible organizations from applying for an education and outreach initiative grant. Also, according to HUD, a one-third reduction in FHIP's appropriation for that fiscal year discouraged many organizations from applying for FHIP funding. On the basis of the dollar value of grant applications submitted to HUD, the greatest demand has been for private enforcement initiative grants. Our analysis of the dollar amount of applications is based on fiscal years 1994 through 1996, for which complete information is readily available (see table II.4). Of the total $175 million in applications received for the 3-year period, $76 million, or about 43 percent, was for private enforcement initiative grants, and about 36 percent was for education and outreach initiative grants. From the program's inception through September 1996, a total of 220 different organizations received FHIP grants in 44 states and the District of Columbia; 26 organizations received about half of all FHIP funds awarded. These 26 organizations are located in 15 states and the District of Columbia. FHIP-funded activities have reflected the program's purpose as described in the legislation. That is, grantees have used FHIP dollars to fund the kinds of activities intended, namely, implementing fair housing testing programs and testing-related activities; establishing new fair housing organizations; and educating the public and housing providers about fair housing requirements. Through fiscal year 1996, HUD awarded 483 grants totaling $86 million to support fair housing enforcement and education. Of the 220 different organizations that received grants, 26 received about half of the funds awarded. 
These 26 organizations, located in 15 states and the District of Columbia, received 179 of the 483 grants. They include state governments; national membership organizations; legal aid organizations; and civil rights and advocacy groups. Some have grants that are national in scope, and some are involved in establishing new fair housing organizations in states that were unserved or underserved by fair housing enforcement organizations. Also, some organizations represent all protected classes, while others focus on a specific target population, such as persons with disabilities. Table 1 identifies the 26 organizations and the number and dollar value of grants received through fiscal year 1996. (See app. III for a complete list of the grants awarded and the dollar amount of each.) Many of these organizations received grants in consecutive years as well as grants under more than one FHIP initiative. For example, the National Fair Housing Alliance received at least one grant during each fiscal year of FHIP funding, including two education and outreach grants from 1991 funds, two private enforcement grants and one fair housing organizations grant from 1994 funds, and a fair housing organizations and an education and outreach grant from 1995 funds. The Metropolitan Milwaukee Fair Housing Council also received one grant each fiscal year and two grants in each of two fiscal years—a private enforcement grant and an education and outreach grant in 1990 and two private enforcement grants in 1994. The Open Housing Center, Inc., received three grants in 1994 and two in 1995, but none in 1993. Some of the 26 organizations received grants that were awarded for multiyear projects, and these grants were generally much larger than single-year grants. FHIP grant awards reflect the program’s emphasis on private enforcement-related activities. 
From fiscal year 1989 through 1996, the largest percentage of FHIP dollars funded activities under the private enforcement initiative—$40.5 million, or 47 percent. Another $15.8 million, or 18 percent, was awarded for the fair housing organizations initiative (see fig. 3). Overall, FHIP-funded activities consist predominantly of testing (complaint-based, systemic, or both) and other enforcement-related activities. Under the private enforcement initiative, in particular, funded activities include, among others, testing to confirm allegations of discrimination in the rental and sale of property, litigating cases, organizing new fair housing offices, and developing computer databases on complaints. Seventy-nine different organizations received 202 private enforcement initiative grants ranging from $10,000 to $1 million and averaging about $200,500. Of the 202 grants we reviewed, 181 were funded to carry out testing and testing-related activities. The remaining 21 grants were funded to engage in other enforcement-related activities, such as litigating cases; recruiting and/or training attorneys; developing fair housing databases; establishing a statewide attorney network to handle complaints from member offices; and training volunteers and community residents. In addition, private enforcement initiative grants funded special projects that focus on high-priority issues such as mortgage lending discrimination and insurance redlining. Included among those awards was a fiscal year 1992 grant for $1 million to support a large-scale national testing program to assess mortgage lending discrimination. Information obtained from FHIP-funded projects can be used by either public or private nonprofit organizations, or HUD, as the basis for a formal complaint against individuals or lending institutions. 
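As a quick arithmetic check (ours, not the report's), the rounded dollar totals cited above reproduce the stated funding shares and average grant sizes. The fair housing organizations figures are taken from the discussion that follows; small differences from the quoted averages reflect rounding of the dollar totals.

```python
# Verify that the rounded FHIP figures are internally consistent:
# share of total funds awarded and average grant size per initiative.
total_awarded = 86_000_000  # total FHIP grant dollars through FY 1996

# initiative: (dollars awarded, number of grants), as reported
initiatives = {
    "private enforcement": (40_500_000, 202),
    "fair housing organizations": (15_800_000, 56),
}

for name, (dollars, grants) in initiatives.items():
    share = 100 * dollars / total_awarded
    average = dollars / grants
    print(f"{name}: {share:.0f}% of funds; average grant ~${average:,.0f}")
```

The private enforcement average comes out at roughly $200,500, matching the text; the fair housing organizations average comes out near $282,100 rather than the quoted $282,500, because the $15.8 million total is itself rounded.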
Several FHIP-funded projects involving testing mortgage lenders and insurance companies were completed in 1995, and as a result, complaints have been filed with HUD against three of the largest home insurance companies and five of the largest independent mortgage companies in the country. Under FHIP’s fair housing organizations initiative, 47 different groups received 56 grants ranging from $30,000 to $1,859,000 and averaging about $282,500. While organizations with grants under the fair housing organizations initiative may engage in many of the same activities as the private enforcement initiative grantees, the fair housing organizations initiative was established to create new fair housing enforcement organizations in those areas of the country that were unserved or underserved by these organizations or expand the capacity of existing private nonprofit fair housing organizations. Of the 56 fair housing organizations initiative grants, 19 were used to establish new organizations. According to HUD, some grants funded more than one new fair housing organization, and in total, 23 new organizations have been established with FHIP grants. The new organizations are located primarily in the southern and western United States—areas historically underserved by fair housing enforcement programs, according to HUD. Fair housing organizations initiative grantees were also funded to recruit and train testers, implement testing programs, and conduct community outreach to inform the public about the services provided by newly established fair housing organizations. One hundred and twenty-eight different organizations received 188 education and outreach initiative grants ranging from $6,500 to $1,182,900 and averaging about $119,300. A wide range of activities were funded to provide education and outreach under this initiative’s three components—national, regional and local, and community-based. 
Overall, the principal activities for the 188 education and outreach grants were developing pamphlets and brochures; preparing print, television, and radio advertisements; producing video and audio tapes; and providing conferences and seminars for other interested parties, including the housing industry, consumers, and community organizations. Twenty-two different organizations received 37 administrative enforcement initiative grants ranging from $55,300 to $439,300 and averaging about $197,200. About two-thirds of those grants funded at least one type of testing, that is, complaint-based or systemic. Other FHIP-funded activities include staff training, community training, tester recruitment, and conciliation/settlement activities. To determine whether grantees used FHIP funds to sue the government, we asked HUD’s Office of General Counsel to identify FHIP grantees involved in litigation with the government. The General Counsel identified 10 cases involving 7 grantees who had filed lawsuits against the government since the inception of the program. Of the 10 lawsuits, 4 (involving 3 grantees) were filed and resolved before a FHIP grant was awarded to the fair housing organization. For the remaining six lawsuits (involving four grantees), pro bono legal services or other resources were used to pursue the cases against the U.S. government, according to HUD. HUD has generally been satisfied with grantees’ use of funds. During the grant performance period and before closing out a grant, HUD reviews quarterly reports and products provided by the grantee to ensure that the organization’s performance is consistent with the grant agreement. At the end of the grant period and after receipt of the final performance reports and products, HUD completes a closeout review. For this final assessment, HUD determines whether the grantee performed all grant requirements, indicates whether all work is acceptable, and rates the grantee’s performance. 
Our analysis of the available assessments of 206 grants that had been closed out as of November 1996 indicates that HUD believes that the grantees generally carried out the activities as agreed. HUD rated 21 grantees as excellent, 150 as good, 27 as fair, and 6 as unsatisfactory. For the six grantees rated unsatisfactory, the primary reason cited was a failure to complete all the expected work requirements, usually because of personnel changes within the organization. According to HUD, these 206 grants did not represent the total of all grants that should have been closed out and evaluated. An additional 118 grants, for which the work has been completed and final payments have been made, have yet to be closed out. The Acting FHIP Division Director told us that performing closeout reviews is an administrative process and, as such, is a low-priority item. According to HUD’s Office of Procurement and Contracts, neither federal regulations nor HUD’s guidelines include a specific time frame for completing the reviews.

We provided a draft of this report to HUD for review and comment. We discussed the draft report with HUD officials, including the Acting FHIP Division Director. In commenting, HUD said that the report presents an accurate description of how FHIP funds are used. HUD also provided other comments, consisting primarily of suggested changes to technical information, and we incorporated these in the report where appropriate. We conducted our work between August 1996 and February 1997 in accordance with generally accepted government auditing standards. Appendix IV describes our objectives, scope, and methodology.

We will send copies of this report to congressional committees and subcommittees interested in housing matters; the Secretary of Housing and Urban Development; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. 
If you would like additional information on this report, please call me at (202) 512-7631. Major contributors to this report are listed in appendix V.

Private enforcement initiative:
- testing and other investigative activities to identify housing discrimination;
- remedies for discrimination in real estate markets;
- special projects, including the development of prototypes to respond to new or sophisticated forms of discrimination;
- technical assistance to local fair housing organizations;
- the formation and development of new fair housing organizations;
- capacity building to investigate housing discrimination complaints for all protected classes;
- regional enforcement activities to address broader housing discrimination practices; and
- litigation costs and expenses, including expert witness fees.

Fair housing organizations initiative:
- staff training;
- education and outreach to promote awareness of services provided by new organizations;
- technical assistance and mentoring services for new organizations;
- activities listed above under the private enforcement initiative; and
- projects that help establish, organize, and build the capacity of fair housing enforcement organizations in targeted unserved and underserved areas of the country.

Education and outreach initiative:
- media campaigns, including public service announcements, television, radio and print advertisements, posters, pamphlets and brochures;
- seminars, conferences, workshops and community presentations;
- guidance to housing providers on meeting their Fair Housing Act obligations;
- meetings with housing industry and civic or fair housing groups to identify and correct illegal real estate practices;
- activities to meet state and local government fair housing planning requirements; and
- projects related to observance of National Fair Housing Month. 
Administrative enforcement initiative:
- fair housing testing programs and other related enforcement activities;
- systemic discrimination investigations;
- remedies for discrimination in real estate markets;
- technical assistance to government agencies administering housing and community development programs concerning applicable fair housing laws and regulations; and
- computerized complaint processing and the monitoring of system improvements.

The following four tables provide details on the Department of Housing and Urban Development’s (HUD) allocation of funds among the Fair Housing Initiatives Program’s (FHIP) four funding initiatives or categories, the dollar amounts made available under each category, and the level of demand for funds under each category. The demand is indicated by both the number of applicants and the dollars requested.

[Table II.1: HUD-Proposed Allocations, by Initiative and Fiscal Year — tabular data not reproduced.]

[Appendix tables listing FHIP grantee organizations by fiscal year and initiative — tabular data not reproduced.]

As requested, we reviewed (1) how funds are allocated among the four FHIP initiatives, the dollar amounts made available for each initiative, and the level of demand for funds under each initiative and (2) who receives FHIP funds and how the funds are being used. We are also providing background information, as you requested, on the history of FHIP and activities that can be funded under the program. To obtain information on FHIP, its funding, and eligible activities, we reviewed the program’s legislative history, regulations, policies, procedures, and Federal Register notices that solicited applications from eligible fair housing agencies and organizations. We also reviewed HUD’s annual reports to the Congress on fair housing programs for 1993 and 1994 and obtained descriptions and budgets for other HUD-administered fair housing activities. We interviewed the Director, Office of Fair Housing Initiatives and Voluntary Programs (who also is the Acting FHIP Division Director); FHIP’s government technical representatives; the Deputy Assistant Secretary, Enforcement and Investigations; and the Director, Office of Investigations, Fair Housing and Equal Opportunity. We also interviewed FHIP officials at HUD’s Southwest and Midwest Regions in Fort Worth, Texas, and Chicago, Illinois, respectively, as well as officials of six organizations that received FHIP grants. In addition, we held discussions with the National Association of Realtors and the Mortgage Bankers Association and attended the 1996 New England and Mid-Atlantic Fair Housing Conference. 
To determine how HUD allocates funds among the four program initiatives, we reviewed and analyzed FHIP congressional budget justifications for fiscal years 1989 to 1997. We also reviewed memorandums and correspondence regarding funding allocations and HUD’s priorities for FHIP since its inception. To determine the amounts available for award, we reviewed FHIP’s notices of funding availability as published in the Federal Register for fiscal years 1989 through 1996. To determine the demand for funds, we reviewed and analyzed the available selection results, including technical evaluation panels’ reports, which contained lists of grant applicants and the panels’ recommendations to the Assistant Secretary for Fair Housing and Equal Opportunity. We reviewed technical evaluation reports to compile data on the number of applications by fiscal year and by program initiative. We also analyzed the dollar value of applications for those years for which complete information was readily available—fiscal years 1994 to 1996. Additionally, we reviewed program guidance on the selection process and interviewed HUD government technical representatives involved in the selection process. To identify the recipients of FHIP funds and the amount of dollars received, we obtained a copy of the FHIP funding and contract tracking system’s database, which contained 486 grant listings as of October 1996. Many grant numbers were not accompanied by the grantee organizations’ names and locations. To develop a more complete list, we compared the listed grant numbers to other HUD-provided reports and added names and locations to the database where possible. We used this database as a control for our review of the FHIP grant files. During our review of the files, we filled in the missing names and locations and verified all other grantees’ names and locations, as well as the grant amounts and year of appropriation. 
To determine how FHIP dollars are being used, we developed a data collection instrument to record data from grant files on the activities organizations agreed to carry out under the program. In developing the instrument, we interviewed program officials, reviewed FHIP legislation and regulations, notices of funding availability, and a sample of FHIP grant files. HUD program officials reviewed and commented on the data collection instrument, and we incorporated their suggested changes. For grants awarded through fiscal year 1996, we reviewed the available grant files (483) and recorded on the data collection instrument the activities each grantee agreed to carry out. We used the information to develop a database from which we analyzed the number and dollar value of the grants awarded to organizations and the kinds of activities funded under each FHIP initiative. We also reviewed the available final performance assessments (206) to determine whether grantees completed work as agreed and how HUD rated their overall performance. We did not independently verify the accuracy of the final performance assessments. In addition, we interviewed HUD Inspector General officials in each HUD region regarding their reviews of FHIP grantees. To determine whether any grantees have used FHIP funds to pay expenses associated with litigation against the U.S. government, we interviewed officials in HUD’s Office of General Counsel, namely, the Assistant General Counsel, Fair Housing Enforcement Division, and Managing Attorney, Litigation Division. At our request, HUD’s General Counsel contacted agency attorneys in each region to determine whether they had knowledge of any lawsuits filed by FHIP grantees against the government. We interviewed the Acting FHIP Division Director, responsible government technical representatives, and government technical monitors about their knowledge of the cases identified. 
We also reviewed correspondence from grantees concerning whether FHIP funds were used to pursue litigation.

Major contributors to this report:
- Patricia D. Moore
- Jeannie B. Davis
- Michael L. Mgebroff
- Vondalee R. Hunt
- Alice G. Feldesman
- John T. McGrail
Pursuant to a congressional request, GAO reviewed the Department of Housing and Urban Development's Fair Housing Initiatives Program, focusing on: (1) how funds are allocated among the program's four initiatives or funding categories, what dollar amounts are made available under each category, and what level of demand exists for funds under each category; and (2) who receives program funds and how the funds are being used. GAO noted that: (1) from the program's inception through fiscal year (FY) 1997, the Congress has appropriated $113 million to carry out the Fair Housing Initiatives Program; (2) the Assistant Secretary for Fair Housing and Equal Opportunity, the Department of Housing and Urban Development, judgmentally determines how funds are allocated among the four initiatives on the basis of the program legislation, the administration's and the agency's priorities, and input from the housing industry and fair housing groups; (3) the agency's budget requests to the Congress set forth how it plans to divide the total program dollars among the four initiatives; (4) the largest portion, more than $40 million, has been budgeted and made available for the private enforcement initiative; (5) as measured by the amounts requested on applications, for the 3 most recent years, fiscal years 1994 through 1996, there is also great demand for the private enforcement initiative; (6) through FY 1996, 220 different organizations in 44 states and the District of Columbia received program grants; (7) of all the funds awarded, 26 organizations received about half; (8) the largest portion of funds, about $41 million, was spent on the private enforcement initiative for activities aimed at determining the existence of discrimination in renting, sales, and lending, primarily testing to investigate individual complaints and testing to investigate industry practices; (9) grantees have used funds for a variety of other fair housing activities, such as litigation, new fair housing 
organizations and capacity building for existing organizations, pamphlets and brochures, print, television, and radio advertisements, and conferences and seminars for housing industry professionals; and (10) other funded activities also have included special projects on mortgage lending and insurance redlining.
The AFWCF relies on sales revenue rather than regular appropriations to finance its continuing operations. The AFWCF is intended to (1) generate sufficient resources to cover the full cost of its operations and (2) operate on a break-even basis over time—that is, neither make a gain nor incur a loss. Customers primarily use appropriated funds to finance orders placed with the AFWCF. Cash generated from the sale of goods and services is the AFWCF’s primary means of maintaining an adequate level of cash to sustain its operations. The ability to generate cash consistent with DOD’s regulations depends on (1) accurately projecting workload, such as the number of aircraft, engines, missiles, and components needed to be repaired during the year or annual transportation requirements needed to move United States forces, equipment, and supplies around the globe, and (2) accurately setting prices to recover the full costs of producing goods and services.

DOD policy requires the AFWCF to establish its sales prices prior to the start of each fiscal year and to apply these predetermined or “stabilized” prices to most orders received during the year—regardless of when the work is accomplished or what costs are incurred. Stabilized prices provide customers with protection during the year of execution from prices greater than those assumed in the budget and permit customers to execute their programs as approved by Congress. Developing accurate prices is challenging because the process to determine the prices begins about 2 years in advance of when the work is actually received and performed. In essence, the AFWCF’s budget development has to coincide with the development of its customers’ budgets so that they both use the same set of assumptions. 
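The stabilized-price mechanics described above can be sketched in simplified form. All numbers below are hypothetical, and the single composite hourly rate is an illustrative assumption, not the Air Force's actual rate structure; it shows only how a rate fixed 2 years in advance turns cost and workload variances into gains or losses:

```python
# Simplified sketch: a working capital fund sets a stabilized rate from
# budget-year estimates, then holds that rate fixed during execution.

def stabilized_rate(est_labor, est_material, est_overhead,
                    est_direct_labor_hours, prior_year_result=0.0):
    """Rate charged per direct labor hour for the budget year.

    prior_year_result > 0 (a prior accumulated gain) lowers the rate;
    a prior loss raises it, so the fund trends toward break-even over time.
    """
    recoverable_cost = est_labor + est_material + est_overhead - prior_year_result
    return recoverable_cost / est_direct_labor_hours

# Hypothetical budget-year estimates (dollars and direct labor hours):
rate = stabilized_rate(est_labor=900e6, est_material=600e6, est_overhead=300e6,
                       est_direct_labor_hours=21e6, prior_year_result=60e6)

# During execution the rate cannot change, so any variance between
# estimated and actual cost or workload flows to a gain or loss:
actual_cost = 1.79e9     # higher-than-expected costs
actual_hours = 20.5e6    # lower-than-expected customer demand
revenue = rate * actual_hours
gain_or_loss = revenue - actual_cost   # negative here: the fund incurs a loss
```

With these illustrative inputs, both higher costs and lower demand push the result negative, mirroring the report's point that such variances cause the fund to incur losses.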
To develop prices, the AFWCF estimates (1) labor, material, overhead, and other costs based on anticipated demand for work as projected by customers; (2) total direct labor hours for each type of work performed, such as work related to aircraft, engines, and repairable inventory items; (3) the workforce’s productivity; and (4) savings because of productivity and other cost avoidance initiatives. In order for the AFWCF to operate on a break-even basis, it is extremely important that the AFWCF accurately estimate the work it will perform and the costs of performing the work. Higher-than-expected costs or lower-than-expected customer demand for goods and services can cause the working capital fund to incur losses. Conversely, lower-than-expected costs or higher-than-expected customer demand for goods and services can result in profits. With sales prices based on assumptions that are made as long as 2 years before the prices go into effect, some variance between expected and actual costs is inevitable. If projections of cash disbursements and collections indicate that cash balances will drop below the minimum cash requirement, the AFWCF may need to generate additional cash. One method that may be used is to bill customers in advance for work not yet performed. Advance billing generates cash almost immediately by billing AFWCF customers for work that has not been completed. This method is a temporary solution and is only used when cash reaches critically low balances because it requires manual intervention in the normal billing and payment processes. During fiscal year 2013, the AFWCF earned $21.2 billion in revenue. The AFWCF consists of three business entities: the Consolidated Sustainment Activity Group (CSAG), the Supply Management Activity Group-Retail (SMAG-R), and the Transportation Working Capital Fund (TWCF). The Air Force manages the CSAG and the SMAG-R and acts as an executive agent for the TWCF. 
The Air Force assumed responsibility for TWCF cash in fiscal year 1998 and TWCF cash is included in the AFWCF cash balance. However, USTRANSCOM rather than the Air Force has the day-to-day management responsibility for TWCF operations. The following is a description of the three business entities. CSAG: During fiscal year 2013, the CSAG earned $7.4 billion in revenue. The CSAG provides repairable supply items and consumable supply items as well as maintenance services. The Air Force operates two CSAG divisions: the supply division and the maintenance division. The supply division is primarily responsible for managing repairable and consumable spare parts unique to the Air Force. The supply division issued about 1.7 million repairable or consumable spare parts in fiscal year 2013. The maintenance division is responsible for economically repairing, overhauling, and modifying aircraft, engines, missiles, and components to meet customer demands. The Air Force operated three air logistics complexes performing about 21 million direct labor hours of work in fiscal year 2013. SMAG-R: During fiscal year 2013, the SMAG-R earned $3.3 billion in revenue. The Air Force’s SMAG-R manages inventory items, including weapon system spare parts, medical-dental supplies and equipment, and other supply items used in non-weapon system applications. It also procures material and makes spare parts available to authorized customers. The SMAG-R comprises three divisions: the General Support Division, the Medical-Dental Division, and the United States Air Force Academy Division. The General Support Division manages nearly 1.4 million items procured from the Defense Logistics Agency and the General Services Administration to support field and depot maintenance of aircraft, ground and airborne communication, and electronic systems. The Medical-Dental Division manages items for 74 medical treatment facilities worldwide. 
Finally, the United States Air Force Academy Division purchases uniforms and uniform accessories for sale to approximately 4,000 cadets at the Air Force Academy. TWCF: During fiscal year 2013, the TWCF earned $10.5 billion in revenue. USTRANSCOM’s mission is to provide air, land, and sea transportation for DOD in times of peace and war, with a primary focus on wartime readiness. USTRANSCOM submits the TWCF budget as a distinct subset of the AFWCF budget submission. It reflects the authority needed to meet peacetime operations, overseas contingency operations, the surge/readiness requirements to support military strategy, and other priorities needed to meet its transportation mission. According to the USTRANSCOM fiscal year 2012 annual report, the airlift component of USTRANSCOM flew almost 85,000 sorties supporting 31,181 missions around the world and transported over 1.8 million passengers and 659,000 short tons of cargo to their destinations in fiscal year 2012. Further, the sealift and surface movement components of USTRANSCOM moved more than 500,000 measurement tons of cargo (sea transportation) and 14.5 million square feet of cargo (surface transportation) in support of U.S. forces worldwide in fiscal year 2012. Effective cash management in DOD largely depends on managers receiving accurate and timely data on cash balances, collections, and disbursements. Currently, DOD cash balances are visible only in official reports at the end of each month. According to DOD’s Financial Management Regulation, volume 2B, chapter 9, DOD working capital funds are to maintain the minimum cash balance necessary to meet disbursement requirements in support of both operations and the capital asset program. 
The DOD working capital funds are to maintain a minimum cash balance sufficient to pay bills, such as (1) paying employees’ salaries for repairing aircraft, weapon systems, and equipment; (2) purchasing inventory items (spare parts) from vendors; and (3) transporting troops, equipment, and supplies worldwide. DOD’s Financial Management Regulation requires that “cash levels should be maintained at 7 to 10 days of operational cost and cash adequate to meet six months of capital disbursements.” Thus, the minimum cash requirement consists of cash that is sufficient to meet 6 months of capital requirements plus 7 days of operational cost. The maximum cash requirement consists of 6 months of capital requirements plus 10 days of operational cost. The regulation further provides that a goal of DOD working capital funds is to minimize the use of advance billing of customers to maintain cash solvency unless advance billing is required to avoid Antideficiency Act violations. In June 2010, the DOD Financial Management Regulation was amended to allow DOD working capital fund activities, with the approval of the Office of the Under Secretary of Defense (Comptroller), Director of Revolving Funds, to incorporate into the formula for calculating the minimum and maximum cash requirements three new adjustments. These adjustments would increase the minimum and maximum cash requirements. First, a working capital fund may increase the minimum and maximum cash requirements for the amount of accumulated operating results planned for return to customer accounts. The working capital fund returns accumulated profits back to its customers by reducing future prices so it can operate on a break-even basis over time. The second adjustment allowed by the revised DOD Financial Management Regulation is an allowance for funds appropriated directly to a working capital fund that are obligated in the year received but not fully spent until future years. 
The adjustment allows the working capital fund to retain these amounts as an addition to its normal operational costs. Finally, a working capital fund may increase the minimum and maximum cash requirements by the marginal cash required to purchase goods and services from the commodity/business market at a higher price than was submitted in the President’s Budget. The adjustment reflects the cash impact of the specified market fluctuation. The AFWCF’s monthly cash balances fell within the minimum and maximum cash requirements about one-third of the time during fiscal years 2009 through 2013. Our analysis of AFWCF cash data showed that the AFWCF monthly cash balances fluctuated significantly from fiscal years 2009 through 2013 but were almost equally distributed above, between, and below the minimum and maximum cash requirements. The AFWCF monthly cash balances were above the maximum cash requirement for 19 of the 60 months, between the minimum and maximum cash requirements for 21 of the 60 months, and below the minimum cash requirement for 20 of the 60 months. Figure 1 shows the AFWCF monthly cash balances compared to the minimum and maximum cash requirements for fiscal years 2009 through 2013. Our analysis of the AFWCF monthly cash balances showed that the average monthly cash balance declined each year from fiscal year 2009 through fiscal year 2013. Further, as monthly cash balances declined (1) more months were above the maximum cash requirement in fiscal years 2009 and 2010 compared to the next 3 fiscal years and (2) more months were below the minimum cash requirement in fiscal years 2012 and 2013 compared to the prior years. Table 1 shows AFWCF monthly cash balance information for each of the 5 fiscal years reviewed and the number of months the AFWCF cash balances were above, between, or below the minimum and maximum cash requirements. 
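The cash requirement bands defined in DOD's Financial Management Regulation (7 to 10 days of operational cost plus 6 months of capital disbursements, optionally raised by the three post-2010 adjustments) can be sketched as follows; the dollar figures are hypothetical, and the three adjustments are folded into one combined amount for simplicity:

```python
# Sketch of the DOD FMR minimum/maximum cash requirement calculation
# and the above/between/below classification used in the analysis.

def cash_requirements(daily_operational_cost, six_month_capital,
                      adjustments=0.0):
    """Return (minimum, maximum) cash requirements.

    adjustments: combined total of the three allowed increases (planned
    return of accumulated operating results, direct appropriations not yet
    spent, and commodity-price fluctuation); they raise both bands equally.
    """
    minimum = 7 * daily_operational_cost + six_month_capital + adjustments
    maximum = 10 * daily_operational_cost + six_month_capital + adjustments
    return minimum, maximum

def classify(balance, minimum, maximum):
    """Place a monthly cash balance relative to the requirement bands."""
    if balance < minimum:
        return "below minimum"
    if balance > maximum:
        return "above maximum"
    return "within range"

# Hypothetical month (dollars in millions):
lo, hi = cash_requirements(daily_operational_cost=55.0,
                           six_month_capital=600.0, adjustments=90.0)
status = classify(balance=1384.0, minimum=lo, maximum=hi)   # "above maximum"
```

Applying `classify` to each of the 60 monthly balances is essentially how the above/between/below counts in the analysis are tallied.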
Our analysis of AFWCF financial documents and discussions with Air Force headquarters, Air Force Materiel Command, and USTRANSCOM officials provided the following information on the AFWCF monthly cash balances and their relationship to the minimum and maximum cash requirements from fiscal years 2009 through 2013. First, the monthly cash balances were generally high in fiscal years 2009 and 2010 because the Air Force charged customers more than the cost of spare parts. Second, the monthly cash balances fluctuated because of the cyclical nature of events, such as DOD operating under a continuing resolution. Finally, large-dollar transactions, such as transfers in and out of the AFWCF, affected the monthly cash balances. These factors affecting the AFWCF cash balance are discussed further in the sections that follow. The AFWCF entered fiscal year 2009 with a cash balance of $1,384 million—about $224 million or 19 percent above the maximum cash requirement for fiscal year 2009. Air Force headquarters officials informed us that when they set the rates to be charged to CSAG and SMAG-R customers for supply items in fiscal years 2009 and 2010, they wanted the cash balance to be at the maximum cash requirement because the AFWCF changed the method for charging customers for supply items. Specifically, the Air Force changed the method for charging customers for spare parts from (1) selling individual parts to customers on a transaction-by-transaction basis to (2) charging customers for spare parts based on actual hours flown by aircraft. Because the Air Force did not have historical data on applying the new method, it wanted to make sure it charged customers enough for supply items so that the AFWCF would have a sufficient cash balance. In doing so, the Air Force charged customers more than the cost of spare parts, which resulted in a higher-than-expected cash balance.
According to Air Force headquarters and Air Force Materiel Command officials and our analysis of financial documents, the monthly cash balance was above the maximum amount for 14 of the 24 months during fiscal years 2009 and 2010. Air Force headquarters, Air Force Materiel Command, and USTRANSCOM officials informed us that fluctuations in the AFWCF monthly cash balances occurred because of the cyclical nature of events that affect the AFWCF. Customer orders do not execute in a smooth pattern throughout the fiscal year but instead fluctuate on a seasonal basis, as discussed below. These seasonal fluctuations can result in cash balances falling below the minimum cash requirement early in the fiscal year and rising above the maximum in the second half of the fiscal year. At the beginning of the fiscal year, the monthly cash balances generally decrease if DOD operates under a continuing resolution because AFWCF customers’ funding is constrained, which, in turn, suppresses customer demand for AFWCF goods and services. DOD operated under a continuing resolution in each fiscal year from 2010 through 2013 (every year in our review period except fiscal year 2009). For example, our analysis of fiscal year 2012 financial reports showed that for November 2011, December 2011, and January 2012, the AFWCF monthly cash balances were below the minimum cash requirement. Similarly, our analysis of fiscal year 2010 financial reports showed that for the first 3 months of fiscal year 2010, the AFWCF monthly cash balances were lower than the ending cash balance for fiscal year 2009; the October and December 2009 cash balances were below the minimum cash requirement. Beginning in the April and May time frame and continuing through the summer months, the AFWCF monthly cash balances generally increase because the Air Force flies more training missions during these months, when weather conditions are better.
The additional hours flown result in the CSAG and SMAG-R earning more revenue, which, in turn, increases AFWCF collections and its monthly cash balance. For example, our analysis of financial reports showed that in fiscal year 2009, the three highest AFWCF monthly cash balances were in June, July, and August, as shown in figure 1. For these 3 months, the monthly cash balances were above the maximum cash requirement by over $500 million. In another case, our analysis of financial reports showed that in fiscal year 2011, the two highest AFWCF monthly cash balances were in July and August, as shown in figure 1. These were the highest cash balances for the fiscal year and the only months in which the monthly cash balance was above the maximum cash requirement. Air Force headquarters, Air Force Materiel Command, and USTRANSCOM officials stated that fluctuations in the AFWCF monthly cash balances also occurred because of large-dollar transactions, such as appropriations received by the AFWCF or large transfers to and from the AFWCF. These effects occurred in the month a transaction was made and caused large month-to-month swings in the cash balance, at times moving the monthly balance from below the minimum cash requirement to above the maximum, or vice versa. For example, at the end of August 2010, the AFWCF monthly cash balance was $1,395 million (above the maximum cash requirement). During August 2010, the AFWCF received an $847 million appropriation to fund fuel increases. If the AFWCF had not received this appropriation, the August 2010 monthly cash balance would have been $548 million (below the minimum cash requirement). Conversely, the AFWCF cash balance was reduced when transfers were made from the AFWCF to other appropriations.
For example, the Air Force transferred $251 million out of the AFWCF in the last quarter of fiscal year 2009. These transfers ($105 million in July 2009 and $146 million in September 2009) reduced the amount of cash that was over the maximum cash requirement. Specifically, the September 2009 AFWCF cash balance was over the maximum cash requirement by about $249 million. If the $251 million had not been transferred, the AFWCF fiscal year 2009 ending cash balance would have been over the maximum cash requirement by about $500 million. Further, large transactions can affect the monthly cash balances for several months and thus affect the AFWCF’s ability to keep its cash balance within the minimum and maximum cash requirements. In developing the fiscal year 2013 AFWCF budget, the Air Force set its fiscal year 2013 rates to return prior year gains to its customers. Air Force officials informed us that the fiscal year 2013 rates were expected to lower the AFWCF cash balance by about $500 million and that the projected cash balance would be close to the minimum cash requirement. However, the cash balance fell below the minimum cash requirement because of an unplanned $370 million transfer from the AFWCF made in August 2012. The transfer was needed to fund overseas contingency operations requirements, such as aircraft depot maintenance. From September 2012 through May 2013, the monthly cash balance was below the minimum cash requirement for 8 of the 9 months. If the $370 million transfer had not occurred, the monthly cash balance would have been above the maximum cash requirement for 4 months, between the minimum and maximum cash requirements for 3 months, and below the minimum cash requirement for 2 months. From June 2013 through September 2013, the monthly cash balance was between the minimum and maximum cash requirements or above the maximum. For further details on the cash balances for fiscal years 2009 through 2013, see appendix II.
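The treatment of large one-time transactions above amounts to backing the transaction out of the reported month-end balance to see where the fund would otherwise have stood. A minimal sketch of that arithmetic, using the report's own figures (in $ millions):

```python
# Back signed one-time transactions (+ receipts, - transfers out) out of a
# reported balance to obtain the counterfactual balance.

def balance_without(reported_balance, transactions):
    return reported_balance - sum(transactions)

# August 2010: the $1,395M reported balance included an $847M fuel
# appropriation; without it the balance would have been below the minimum.
print(balance_without(1395, [847]))        # 548

# Fiscal year 2009: the September balance was about $249M over the maximum,
# after $105M (July) and $146M (September) had been transferred out.
print(balance_without(249, [-105, -146]))  # 500 over the maximum, absent transfers
```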
The AFWCF’s fiscal year 2015 budget information shows that the fund’s projected monthly cash balances are expected to fall within the minimum and maximum cash requirements only about 25 percent of the time in fiscal years 2014 and 2015. Specifically, our analysis of the AFWCF’s fiscal year 2015 budget information, which contains its cash management plans for fiscal years 2014 and 2015, showed that the projected monthly cash balances are expected to be above the projected maximum cash requirement for the first 17 months, between the projected minimum and maximum cash requirements for the next 6 months, and below the projected minimum cash requirement for the final month. Figure 2 shows the AFWCF projected monthly cash balances compared to the projected minimum and maximum cash requirements for fiscal years 2014 and 2015. Our analysis of AFWCF financial documents and interviews with AFWCF headquarters officials identified two reasons for the fiscal year 2014 projected AFWCF cash balances being above the projected maximum cash requirement: (1) the AFWCF entered fiscal year 2014 with a cash balance ($1,458 million) that was above the projected maximum cash requirement for fiscal year 2014 by about $260 million, or 22 percent, and (2) the AFWCF is building or maintaining a higher cash balance in preparation for the October 1, 2014, implementation of a Treasury initiative to provide visibility over daily cash balances for all appropriations, including the AFWCF. The higher cash balance is needed to cover the volatility in the daily cash balance. By the final month of fiscal year 2015, the projected AFWCF cash balance is expected to decrease below the projected minimum cash requirement.
The projected decline results from an expected shortfall of $927 million in the Airlift Readiness Account (ARA), because the Air Force included only a portion ($150 million) of the total projected fiscal year 2015 ARA requirement ($1,077 million) in the fiscal year 2015 Air Force operation and maintenance appropriated budget request. Further, achieving the AFWCF projected cash balances in fiscal year 2015 depends on the Air Force and USTRANSCOM successfully implementing cost reduction and efficiency initiatives to save $620 million during that fiscal year. Treasury is modernizing and streamlining its reporting processes through its government-wide accounting initiative. One result of this initiative will be for Treasury to provide daily cash balances for all appropriations, including the AFWCF, beginning in fiscal year 2015. Currently, the cash balance is visible in official reports only at the end of each month. In preparation for DOD’s implementation of Treasury’s initiative, the Office of the Under Secretary of Defense (Comptroller) initiated a study in February 2013 to evaluate the impact daily cash reporting will have on working capital fund policies and management. The goal of the study was to assist working capital fund managers in identifying the modifications to current controls, processes, and policies needed to prevent potential Antideficiency Act violations. As part of the study, the Air Force and USTRANSCOM collected daily disbursement and collection data from their systems to determine the impact these transactions had on their daily cash balances. In analyzing these data, they noted the following. The AFWCF’s day-to-day cash balances are more volatile than cash balances measured on a monthly basis and can fluctuate by hundreds of millions of dollars on a given day. These fluctuations increase the risk that the cash balance could become negative on a particular day. DOD systems generally bill customers once or twice a month.
The billing systems run in the middle of the month and at the end of the month. Since disbursements are made daily, the daily cash balance is generally at its lowest level at about the middle of the month, before the first billing cycle occurs for that month. Air Force and USTRANSCOM officials informed us that managing cash daily rather than monthly will require them to take two actions. First, these officials stated that the AFWCF will need to maintain a higher cash balance to offset the day-to-day volatility of cash during the month. The higher cash balance will reduce the risk that (1) the AFWCF daily cash balance will become negative on a particular day during the month and (2) the AFWCF will incur a potential Antideficiency Act violation. However, the Air Force and USTRANSCOM have not determined how much the minimum and maximum cash requirements should be increased to cover the additional cash needed. Second, the Air Force is considering adding another billing cycle for the work performed by depot maintenance to increase collections during the early part of the month and offset disbursements made during the same time period. The implementation of Treasury’s initiative to provide daily cash balances was planned for October 1, 2014 (the beginning of fiscal year 2015). However, according to Office of the Under Secretary of Defense (Comptroller) officials, DOD systems will not be ready to provide the necessary data to meet the implementation date because (1) DOD has multiple disbursing systems and nonintegrated disbursing locations and (2) DOD has a requirement to protect classified or sensitive information. As of May 2014, the officials stated that they had not established a new implementation date. See DOD Financial Management Regulation 7000.14-R, Accounting for Cash and Fund Balances with Treasury, vol. 4, ch. 2, p. 2-14 (December 2009).
Because the DOD Financial Management Regulation addresses cash management on a monthly basis, our analysis showed that this policy will need to be updated to recognize that DOD will have visibility over daily cash balances, especially if the cash balances become negative during the month. For example, if the cash balance becomes negative on a particular day during the month, a reconciliation will need to be performed to determine whether the cash balance is actually negative or whether an error occurred in determining the cash balance. While DOD realizes that the Financial Management Regulation needs to be revised to effectively implement Treasury’s initiative to provide daily cash balances, DOD has not revised it to include guidance on (1) Treasury’s initiative to provide daily cash balances instead of monthly cash balances, (2) maintaining sufficient cash balances on a daily basis to avoid potential Antideficiency Act violations, and (3) the reconciliation of daily cash balances to ensure the integrity and accuracy of the data. Further, DOD has not determined the full impact of Treasury’s initiative to provide daily cash balances for the AFWCF or developed an analytical approach for calculating the minimum and maximum cash requirements to adjust for the day-to-day volatility of cash balances in the AFWCF and reduce the risk that the AFWCF may incur an Antideficiency Act violation. Once the updated cash requirement is determined, it will affect whether the projected AFWCF cash balances for fiscal year 2015 will be above, between, or below the cash requirement. Internal control standards state that policies, procedures, techniques, and mechanisms are needed to enforce management’s directives, such as the process of adhering to requirements for budget development and execution. They help ensure that actions are taken to address risks. However, the DOD Financial Management Regulation does not reflect the effect that daily cash balances will have on the AFWCF, and this gap increases the risk of a cash shortage and an Antideficiency Act violation.
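The within-month pattern the Air Force and USTRANSCOM study describes (disbursements every day against billing runs only at mid-month and month-end, with the trough just before the first billing cycle) can be seen in a toy simulation. All figures below are hypothetical, not AFWCF data (in $ millions):

```python
# Simulate a month of daily cash balances: constant daily disbursements,
# with collections arriving only on the two billing days.

def daily_balances(opening, daily_disbursement, billing_days, billing_amount, days=30):
    balance, path = opening, []
    for day in range(1, days + 1):
        balance -= daily_disbursement   # disbursements go out every day
        if day in billing_days:
            balance += billing_amount   # collections only on billing days
        path.append(balance)
    return path

path = daily_balances(opening=900, daily_disbursement=30,
                      billing_days={15, 30}, billing_amount=450)
trough_day = path.index(min(path)) + 1
print(trough_day, min(path))   # day 14, balance 480: just before mid-month billing
```

Even though the month opens and closes at the same balance in this example, the daily trough sits well below it, which is why a cash level adequate on a monthly view may still be too thin once balances are visible daily.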
The DOD Financial Management Regulation contains a provision that pertains to the airlift services provided by USTRANSCOM and that affects the TWCF as well as the AFWCF cash balances. To compete with commercial providers, USTRANSCOM sets its airlift rates at levels comparable to private sector rates, and these rates do not cover the full cost of the Air Force’s readiness requirements for military airlift operations. The difference between the full cost and the revenue received from airlift customers is to be provided to USTRANSCOM in the ARA through the use of Air Force appropriated funds. This requirement exists in both peacetime and contingency environments. USTRANSCOM officials stated that because its rates are benchmarked to private sector rates, USTRANSCOM does not recover its full cost of operations, and the TWCF incurs losses that in turn result in a lower cash balance. To recover these losses, the TWCF identifies the peacetime and contingency airlift requirements, and the customers include these amounts in their budget requests. During the year of execution, the TWCF bills the services once a month. Specifically, the TWCF receives funding from (1) the Air Force to pay for the peacetime airlift requirement (Air Force Operation and Maintenance appropriation) and (2) the military services for the contingency airlift requirement (overseas contingency operations appropriation). Over the last several years, the TWCF has received hundreds of millions of dollars that have helped it maintain cash solvency. According to the TWCF budget request, USTRANSCOM estimates the fiscal year 2014 funding requirement at $150 million for the ARA plus $691 million for the contingency airlift requirement, for a total of $841 million. However, in the fiscal year 2015 TWCF budget, USTRANSCOM estimates the fiscal year 2015 ARA funding requirement to be $1,077 million, while the fiscal year 2015 Air Force Operation and Maintenance budget provides $150 million for the ARA.
This estimated shortfall of $927 million could negatively affect the TWCF as well as the AFWCF cash balances. Maintaining sufficient cash balances in the TWCF and AFWCF while minimizing the overall ARA bill to the Air Force presents major management challenges for the Air Force and USTRANSCOM. If the ARA is overfunded, the Air Force may not be using its resources efficiently to fund other Air Force program priorities to meet mission requirements. Alternatively, if the ARA is underfunded, reductions to Air Force programs may be required to maintain adequate levels of cash in the AFWCF. The Air Force and USTRANSCOM have a history of problems with ensuring that the Air Force requests full funding in its budget for the USTRANSCOM ARA requirement. For example, according to the TWCF budget, USTRANSCOM included a requirement for $294 million to fund the ARA in fiscal year 2013, but the Air Force did not include funding for the ARA in the fiscal year 2013 Air Force operation and maintenance budget request. Though the Air Force did not request funding for the ARA in the fiscal year 2013 budget, it nevertheless paid its fiscal year 2013 ARA bill. In reviewing past AFWCF budgets, the Office of the Under Secretary of Defense (Comptroller) has also been concerned with ARA funding issues. To address these concerns, in February 2012, the Office of the Under Secretary of Defense (Comptroller) directed (1) the Air Force to ensure that both the AFWCF and TWCF cash levels are adequate to support operational and mobilization requirements in fiscal years 2013 and subsequent years; (2) the Air Force and USTRANSCOM to determine the appropriate methodology for fully funding USTRANSCOM, including the ARA, and estimating the funding sources for fiscal year 2014; and (3) the Air Force to ensure that it funds its responsibilities for USTRANSCOM, including the ARA, updated for more current workload assumptions, in the fiscal year 2014 budget and all future budgets.
Although the Office of the Under Secretary of Defense (Comptroller) has previously raised concerns about ARA funding issues, the funding issues are projected to continue into fiscal year 2015. As stated above, the estimated funding shortfall of $927 million could negatively affect the TWCF and AFWCF cash balances in fiscal year 2015. Air Force officials stated that although the AFWCF projected cash balance is expected to be above the projected maximum cash requirement at the beginning of fiscal year 2015, they expect the cash balance to decrease to $853 million, below the projected minimum cash requirement, by fiscal year-end. This decline is expected to occur because the Air Force did not fully fund the estimated ARA requirement in the fiscal year 2015 budget. Air Force officials informed us that the Air Force did not fully fund the ARA requirement because it funded other, higher-priority requirements. Air Force officials are aware of the estimated funding shortfall and stated that the ARA funding requirement will be best managed in fiscal year 2015, when they can better estimate the actual ARA funding requirement. Air Force officials stated that Air Force and USTRANSCOM senior leadership currently meet monthly to collaborate on AFWCF cash challenges and will be evaluating the ARA funding during fiscal year 2015—the year of execution. They stated that workload and cash levels are monitored closely during the year of execution and that any potential issues involving workload and cash levels, including ARA funding, would be identified several months in advance and brought to senior leadership’s attention.
If a shortfall in the AFWCF cash balance begins to materialize in the year of execution because of the unfunded ARA requirement, Air Force officials stated that they have several options, including (1) reviewing ongoing programs funded by the Air Force operation and maintenance appropriation accounts to identify where requirements have changed in ways that would allow the AFWCF to obtain additional funds for the ARA or (2) reviewing their investment accounts for excess funding to transfer to the AFWCF. Air Force officials believe any potential cash shortfall would be identified in sufficient time to take action. While they understand the Air Force’s position on funding the ARA, USTRANSCOM officials informed us that they believe it is more prudent for the Air Force to request a significant amount of funding for the ARA in the Air Force budget. These officials further stated that if a funding shortfall occurs, the Air Force faces the risk of having to transfer large amounts in the year of execution to cover the unfunded requirement. Because the Air Force plans to address potential ARA cash shortfalls in the year of execution, the AFWCF projected cash balances are currently expected to decline during fiscal year 2015 and be below the minimum cash requirement by the end of fiscal year 2015, which could impair the ability of the AFWCF to maintain adequate cash balances. The ARA funding issue could also affect the Air Force’s efforts to build cash in anticipation of the Treasury initiative to provide daily cash balances. Because daily cash balances are more volatile, the lower projected cash balance caused by the lack of funding for the fiscal year 2015 ARA requirement increases the risk of a cash shortfall and of a potential Antideficiency Act violation.
As stated previously, our analysis of fiscal year 2015 AFWCF budget information, which contains the AFWCF cash management plans for fiscal years 2014 and 2015, showed that the projected monthly cash balances are expected to be above the maximum cash requirement in fiscal year 2014, decline during fiscal year 2015, and fall below the minimum cash requirement in September 2015. According to Air Force and USTRANSCOM officials, achieving the AFWCF projected monthly cash balances for fiscal years 2014 and 2015 depends on the Air Force and USTRANSCOM successfully implementing cost reduction and efficiency initiatives for those years. Specifically, the savings from these initiatives were a factor used to (1) set the prices charged to customers in fiscal years 2014 and 2015 and (2) estimate projected disbursements. Air Force and USTRANSCOM documentation shows that the fiscal year 2015 AFWCF budget includes $114 million and $620 million in budgeted savings for fiscal years 2014 and 2015, respectively. For fiscal year 2014, the $114 million in budgeted savings represents less than 10 percent of the projected cash balance for any month during fiscal year 2014. In contrast, the $620 million in budgeted savings represents about 73 percent of the $853 million projected cash balance at the end of September 2015—the lowest projected balance in fiscal year 2015. If the budgeted savings are not achieved, the AFWCF cash balance would be adversely affected because disbursements would be higher than expected, reducing the cash balance below the level already projected. The Air Force and USTRANSCOM have cost reduction and efficiency initiatives that are designed to reduce costs and related disbursements for fiscal years 2014 and 2015, as discussed below. The fiscal year 2015 AFWCF budget includes $503 million in fiscal year 2015 budgeted savings associated with Air Force initiatives.
Air Force initiatives to reduce costs and related disbursements include (1) reducing the Air Force workforce at CSAG locations by up to 2,000 personnel, (2) refining forecasting models to more accurately reflect future parts requirements, and (3) reducing government travel and contracts where appropriate. The fiscal year 2015 AFWCF budget includes budgeted savings of $114 million and $117 million for fiscal years 2014 and 2015, respectively, for USTRANSCOM initiatives. USTRANSCOM initiatives to reduce costs and related disbursements include (1) reducing management overhead costs by 20 percent by fiscal year 2019, (2) reducing spending on information technology projects, and (3) transferring base operating support costs to Air Force operations and maintenance. Because the $620 million in savings initiatives represents about 73 percent of the September 2015 projected cash balance, it is critical that the Air Force and USTRANSCOM achieve these savings. If these savings are not realized, the AFWCF cash balance, which is already projected to be below the minimum cash requirement at the end of fiscal year 2015, will come under further strain. Several factors may affect the AFWCF projected cash balance in fiscal year 2015 and increase the risk of an Antideficiency Act violation, including (1) the Treasury cash initiative, which will require additional cash to cover the volatility of cash on a day-to-day basis; (2) the lack of funding for the fiscal year 2015 ARA requirement; and (3) the possibility that the budgeted savings will not be achieved. If a cash shortfall occurs, regardless of the reason, the Air Force will have to determine how to fund the shortfall. The work that the AFWCF performs supports military readiness by repairing aircraft and engines; selling inventory items (spare parts); and providing air, land, and sea transportation for DOD in times of peace and war.
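Two of the fiscal year 2015 risk figures cited in this report follow directly from the numbers above; the arithmetic (in $ millions, using the report's own figures) is:

```python
# Fiscal year 2015 ARA shortfall: the estimated requirement less the amount
# in the Air Force operation and maintenance budget request.
ara_shortfall = 1077 - 150
print(ara_shortfall)                        # 927

# Budgeted fiscal year 2015 savings (Air Force $503M plus USTRANSCOM $117M)
# as a share of the lowest projected balance, September 2015.
savings = 503 + 117
print(savings, round(100 * savings / 853))  # 620 73
```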
Maintaining the AFWCF cash balance within the minimum and maximum cash requirements as defined by DOD regulation is critical for the Air Force and USTRANSCOM to continue to provide maintenance, supply, and transportation services for the fund’s customers. Over the past 5 years, the AFWCF has managed to maintain a sufficient cash balance to pay its bills and sustain operations without disruption even though its cash balances were outside the minimum and maximum cash requirements about two-thirds of the time. While the projected cash balances are expected to be above the maximum cash requirement for fiscal year 2014 and the first half of fiscal year 2015, Air Force officials face challenges in managing the AFWCF cash in fiscal year 2015. The challenges include Treasury’s initiative to provide visibility over daily cash balances for all appropriations, including the AFWCF; funding ARA requirements; and meeting established savings goals for AFWCF cost reduction and efficiency initiatives. First, the Air Force has not determined the appropriate minimum and maximum cash requirements that the AFWCF will need to cover the volatility of daily cash balances, which can fluctuate by hundreds of millions of dollars each day, and to reduce the risk that the AFWCF will incur an Antideficiency Act violation; nor has DOD updated its policies to reflect daily, rather than monthly, reporting of working capital fund cash balances. Second, the Air Force did not fully fund the ARA requirement in its operations and maintenance budget for fiscal year 2015, which increases the risk of a cash shortfall in fiscal year 2015. Third, it is critical that the Air Force and USTRANSCOM achieve the estimated $620 million in savings in fiscal year 2015, because this represents a significant share of the fiscal year 2015 year-end cash balance. We are making three recommendations to the Secretary of Defense to improve the management of the AFWCF’s cash balances.
We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to take the following action: Update the DOD Financial Management Regulation to include guidance on (1) maintaining sufficient cash balances on a daily basis to avoid potential Antideficiency Act violations and (2) the reconciliation of daily cash balances to ensure the integrity and accuracy of the data once DOD implements Treasury’s initiative. We recommend that the Secretary of Defense direct the Secretary of the Air Force, in conjunction with the Under Secretary of Defense (Comptroller), to take the following action: Develop an analytical approach for calculating the minimum and maximum cash requirements to take into consideration the additional cash needed to cover the day-to-day volatility in the cash balances once DOD implements Treasury’s initiative. We recommend that the Secretary of Defense direct the Secretary of the Air Force and the Commander of USTRANSCOM to take the following action: Take steps to help ensure that the AFWCF receives the appropriate funding if a cash shortfall occurs because of (1) the implementation of the daily cash requirement, (2) a lack of fiscal year 2015 ARA funding, and (3) fiscal year 2015 budgeted savings not being realized. DOD provided written comments on a draft of this report. In its comments, which are reprinted in appendix III, DOD concurred with the three recommendations and cited actions planned or under way to address them. Specifically, DOD commented that consistent with forthcoming instructions from Treasury regarding daily cash balances, the Under Secretary of Defense (Comptroller) will revise the Financial Management Regulation on or about the implementation date for daily cash reporting. In addition, DOD stated that while the Financial Management Regulation is in draft, guidance consistent with this recommendation will be published for component submissions of each Program and Budget Review and President’s Budget. 
Further, DOD stated that in the upcoming Program and Budget review (fiscal year 2016), the working capital fund activities will submit information that will provide more clarity to minimum and maximum cash requirements. Finally, DOD indicated that cash levels and potential shortfalls will be monitored closely, and if an unfavorable trend develops, appropriate actions will be taken. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Air Force, the Under Secretary of Defense (Comptroller), and the Commander, USTRANSCOM. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine to what extent the Air Force Working Capital Fund (AFWCF) monthly cash balances were within the Department of Defense (DOD) minimum and maximum cash requirements for fiscal years 2009 through 2013, we (1) obtained the DOD regulation on calculating the minimum and maximum cash requirements, (2) calculated the cash requirements for fiscal years 2009 through 2013 based on the regulation, and (3) obtained monthly cash balances for fiscal years 2009 through 2013. We compared the minimum and maximum cash requirements to the end-of-month reported cash balances. If the cash balances were above the maximum amount or were below the minimum amount, we met with Air Force and United States Transportation Command (USTRANSCOM) officials and reviewed AFWCF budgets and other Air Force and USTRANSCOM documentation to ascertain the reasons. 
In addition, we performed a walk-through of the Defense Finance and Accounting Service’s (DFAS) processes for reconciling the Department of the Treasury (Treasury) trial balance monthly cash amounts for the AFWCF to the balances reported on the AFWCF cash management reports. Further, to determine the extent to which cash transfers for fiscal years 2009 through 2013 resulted in the AFWCF cash balances either falling below the minimum cash requirement or rising above the maximum cash requirement, we (1) analyzed DOD budget and accounting reports to determine the dollar amount of transfers made for fiscal years 2009 through 2013 and (2) obtained journal vouchers from DFAS that documented the dollar amount of the cash transfers. We analyzed cash transfers to determine if any of the transfers resulted in the cash balances falling outside the minimum or maximum cash requirements and, if so, the amount outside those requirements. We also obtained and analyzed reprogramming documents and journal vouchers and interviewed key Air Force and USTRANSCOM officials to determine the reasons for the transfers. To determine to what extent the AFWCF projected monthly cash balances were within the minimum and maximum cash requirements for fiscal years 2014 and 2015 and, if not, why, we obtained and analyzed AFWCF budget documents and cash management plans for the 2 fiscal years. We used the DOD regulation to calculate the minimum and maximum cash requirements for each of those years and compared them to the projected cash balances. If the projected cash balances were above or below the cash requirements, we met with Air Force and USTRANSCOM officials to ascertain the reasons. Further, we interviewed Office of the Under Secretary of Defense (Comptroller), Air Force, USTRANSCOM, and DFAS officials on the initiative to begin receiving daily cash balances in fiscal year 2015 and the potential effect on the management of the AFWCF cash. 
We also interviewed Air Force and USTRANSCOM officials to determine what actions the AFWCF plans to take to increase collections or decrease disbursements to avoid potential AFWCF cash shortages. We obtained the AFWCF financial data in this report from official budget documents and accounting reports. To assess the reliability of these data, we (1) reviewed and analyzed the factors used in calculating the minimum and maximum cash requirements for the completeness of the elements included in the calculation; (2) interviewed Air Force, USTRANSCOM, and DFAS officials knowledgeable about the cash data; (3) compared AFWCF cash balance information, including collections and disbursements that were contained in different reports, to ensure that the data reconciled; (4) obtained an understanding of the process used by DFAS to reconcile AFWCF cash balances with Treasury records; and (5) obtained and analyzed documentation supporting the amount of funds transferred in and out of the AFWCF. On the basis of procedures performed, we have concluded that these data were sufficiently reliable for the purposes of this report. We performed our work at the headquarters of the Office of the Under Secretary of Defense (Comptroller) and the Office of the Secretary of Air Force in Washington, D.C.; Air Force Materiel Command at Wright-Patterson Air Force Base, Ohio; USTRANSCOM at Scott Air Force Base, Illinois; and DFAS in Columbus, Ohio. We conducted this performance audit from July 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Over the 5-year period from fiscal years 2009 through 2013, our analysis of Air Force Working Capital Fund (AFWCF) data showed that the number of months each year (1) above the maximum cash requirement generally decreased and (2) below the minimum cash requirement generally increased. AFWCF officials stated that AFWCF monthly cash balances generally decreased at the beginning of the fiscal year when the Department of Defense (DOD) operated under continuing resolutions because AFWCF customers’ funding was constrained, which, in turn, suppressed customer demand for AFWCF goods and services. Beginning in the April and May time frame each year, AFWCF monthly cash balances generally increased because the AFWCF generated more revenue from customer purchases of spare parts, which rise when the Air Force flies more training mission hours during the spring and summer months as weather conditions improve. This appendix contains the year-to-year analysis of the reasons for fluctuations in the AFWCF cash balances from fiscal years 2009 through 2013. Actual AFWCF cash balances fluctuated each year because of the dollar amounts that were collected by, disbursed by, appropriated to, and transferred to or from the AFWCF. The AFWCF ended fiscal year 2009 with a cash balance of $1,409 million—$25 million more than the beginning balance. Our analysis of financial documents and interviews with AFWCF officials identified three reasons for the net increase in the fiscal year 2009 cash balance. First, the AFWCF collected about $200 million more than it disbursed because the rates charged to customers of the Consolidated Sustainment Activity Group (CSAG) and the Supply Management Activity Group-Retail for flying hours were higher than the anticipated costs. 
Second, the AFWCF received about $76 million in direct appropriations to fund the transportation of fallen heroes killed in military operations and to pay for medical and dental war reserve materials. Third, the Air Force transferred $251 million from the AFWCF to other DOD appropriations in the last quarter of fiscal year 2009. If the $251 million had not been transferred, the AFWCF fiscal year 2009 ending cash balance would have been $1,660 million, or $500 million above the maximum cash requirement. The AFWCF ended fiscal year 2010 with a cash balance of $945 million— $464 million less than the fiscal year 2010 beginning balance of $1,409 million. The ending cash balance for fiscal year 2010 was between the fiscal year 2010 minimum and maximum cash requirements. Our analysis of financial documents and interviews with AFWCF officials identified three reasons for the net decrease in the fiscal year 2010 cash balance. First, AFWCF disbursements exceeded collections by $1,054 million primarily because (1) CSAG reduced customer billings associated with the Air Force flying hour program in July and August 2010 to return gains to its customers collected in prior years and (2) the United States Transportation Command reduced its rates for transportation services, relieved the Air Force of its requirement to fund the Airlift Readiness Account, and waived the military services’ overseas contingency operations funding requirement (known as the cash recovery charge) because of high balances in the Transportation Working Capital Fund (TWCF) cash account in fiscal year 2010. Second, the AFWCF transferred $337 million to other DOD appropriations in fiscal year 2010. 
Specifically, (1) $250 million was transferred to the Air Force operation and maintenance appropriation to compensate for an equivalent reduction in that appropriation, which was described as excess AFWCF cash; (2) $47 million was transferred to the Air Force military personnel appropriation to cover shortages in the account; and (3) $40 million was transferred to the Defense Logistics Agency in support of a process improvement initiative. Third, offsetting some of these reductions, the AFWCF received direct appropriations of about $927 million that increased the AFWCF cash balance. The appropriations were for TWCF and CSAG fuel price increases, medical and dental war reserve materiel, and support for the transportation of fallen heroes. The AFWCF ended fiscal year 2011 with a cash balance of $1,026 million—$81 million more than the fiscal year 2011 beginning balance of $945 million. The ending cash balance for fiscal year 2011 was between the fiscal year 2011 minimum and maximum cash requirements. For the fiscal year, financial documents showed that disbursements exceeded collections by about $3 million. The AFWCF ending cash balance was higher because the AFWCF received direct appropriations of about $84 million in fiscal year 2011 for medical and dental war reserve material, support for the transportation of fallen heroes, and a TWCF container deconsolidation project. The AFWCF ended fiscal year 2012 with a cash balance of $811 million— $215 million less than the fiscal year 2012 beginning balance of $1,026 million. The ending cash balance for fiscal year 2012 was below the minimum cash requirement. 
Our analysis of financial documents identified three reasons for the net decrease in the fiscal year 2012 cash balance: (1) AFWCF collections exceeded disbursements by about $78 million; (2) the AFWCF received direct appropriations of $77 million for medical and dental war reserve material, support for the transportation of fallen heroes, and a TWCF container deconsolidation project; and (3) $370 million was transferred from the AFWCF to the Air Force operation and maintenance appropriation in the last quarter of fiscal year 2012 to fund overseas contingency operations requirements. According to a DOD financial document, the transfer would reduce available cash, but available cash would be sufficient to support AFWCF disbursements. The AFWCF ended fiscal year 2013 with a cash balance of $1,458 million—$647 million more than the fiscal year 2013 beginning balance of $811 million or $344 million above the maximum cash requirement. Our analysis of financial documents and interviews with AFWCF officials identified three reasons for the net increase in the fiscal year 2013 AFWCF cash balance. First, collections exceeded disbursements by $250 million. Second, the AFWCF received $56 million in direct appropriations for medical and dental war reserve material and support for the transportation of fallen heroes. Finally, $341 million was transferred from the Air Force aircraft procurement appropriation to the AFWCF in August and September 2013. In addition to the contact named above, Greg Pugnetti (Assistant Director), Steve Donahue, Keith McDaniel, and Hal Santarelli made key contributions to this report.
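The year-to-year changes described in this appendix follow a simple reconciliation identity: the ending cash balance equals the beginning balance, plus collections net of disbursements, plus direct appropriations, plus net transfers. A sketch using the fiscal year 2012 and 2013 figures cited above (amounts in $ millions; the function is ours, not an official DOD formula):

```python
def ending_balance(beginning, net_collections, appropriations, net_transfers):
    """Reconcile a fiscal-year ending cash balance: beginning balance plus
    collections net of disbursements, plus direct appropriations, plus
    transfers in (negative for transfers out). Amounts in $ millions."""
    return beginning + net_collections + appropriations + net_transfers

# Fiscal year 2012: collections exceeded disbursements by $78 million, the
# AFWCF received $77 million in direct appropriations, and $370 million was
# transferred out, yielding the $811 million ending balance.
fy2012 = ending_balance(1026, 78, 77, -370)

# Fiscal year 2013: collections exceeded disbursements by $250 million, the
# AFWCF received $56 million in direct appropriations, and $341 million was
# transferred in, yielding the $1,458 million ending balance.
fy2013 = ending_balance(811, 250, 56, 341)
```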
The AFWCF earned revenue of $21.2 billion in fiscal year 2013 by, among other things, (1) repairing aircraft and engines; (2) selling inventory items (parts); and (3) providing air, land, and sea transportation. Cash generated from the sale of goods and services is used by the AFWCF to cover its expenses, such as paying employees. As requested, GAO reviewed issues related to AFWCF cash management. GAO's objectives were to determine to what extent (1) the AFWCF monthly cash balances were within the DOD minimum and maximum cash requirements for fiscal years 2009 through 2013 and (2) the AFWCF projected monthly cash balances were within the minimum and maximum cash requirements for fiscal years 2014 and 2015 and, if not, why. To address these objectives, GAO reviewed relevant DOD cash management guidance, analyzed AFWCF actual and projected cash balances and related data, and interviewed Air Force and United States Transportation Command officials. GAO's analysis of Air Force Working Capital Fund (AFWCF) cash data showed that monthly cash balances fell within the minimum and maximum cash requirements about one-third of the time in fiscal years 2009 through 2013. GAO identified three reasons why monthly cash balances were above the maximum or below the minimum cash requirements. First, the cash balance began fiscal year 2009 above the maximum requirement and generally remained above it in fiscal years 2009 and 2010 because the AFWCF charged its customers more for spare parts than the parts cost. Second, cash balances fluctuated each year because of the cyclical nature of events. For example, in the spring and summer months, the Air Force flies more training missions, which increases revenue for parts and thus the cash balance. Finally, large-dollar transactions caused cash balances to fluctuate above and below cash requirements. 
These transactions were used to increase cash to pay for costs such as fuel price increases or to reduce cash if it was above the maximum requirement. AFWCF projected monthly cash balances are expected to fall within cash requirements about 25 percent of the time in fiscal years 2014 and 2015. In managing cash for those fiscal years, the AFWCF faces three challenges: The AFWCF plans to implement a Department of the Treasury initiative to provide daily cash balances, instead of monthly balances, in October 2014. Because daily balances are more volatile, the AFWCF faces a greater risk that a cash shortfall would occur. However, the Department of Defense (DOD) has not updated its regulation on receiving daily cash balances. Because airlift rates are set to compete with private sector rates, they do not cover the full cost. The difference between the full cost and revenue received is to be provided by the Airlift Readiness Account (ARA) funded by the Air Force. The projected cash balance declines in fiscal year 2015 because the Air Force underfunded the ARA by $927 million. If a cash shortfall materializes, the Air Force stated its intent to fund the requirement from other programs. Without sufficient ARA funding, the AFWCF cash balance is at risk of falling below the minimum cash requirement in fiscal year 2015. The AFWCF has included $620 million in savings from Air Force and United States Transportation Command initiatives in its fiscal year 2015 projected monthly cash balances. If these savings are not realized, the Air Force may need to take action to reduce the risk of a cash shortfall. GAO is making three recommendations to DOD that are aimed at implementing the Department of the Treasury's daily cash balance initiative and ensuring that the AFWCF receives the appropriate funding if a cash shortfall occurs because of a lack of ARA funding or estimated savings not being realized. 
DOD concurred with GAO's recommendations and cited related actions planned or under way.
Congress created the research tax credit in 1981 to encourage businesses to do more research. The credit has never been a permanent part of the Internal Revenue Code (IRC). Since its enactment on a temporary basis in 1981, the credit had been extended 13 times, often retroactively. There was only a 1-year period (between June 30, 1995, and July 1, 1996) during which the credit was allowed to lapse with no retroactive provision upon reinstatement. Most recently, the credit was extended through December 31, 2009. The basic design of the credit has been modified or supplemented several times since its inception. For tax years ending after December 31, 2006, through December 31, 2008, IRC Section 41 allowed for five different credits. Three of the credits, the regular research credit, the alternative incremental research credit (AIRC), and the alternative simplified credit (ASC), rewarded the same types of qualified research and are simply alternative computational options available to taxpayers. Each taxpayer could claim no more than one of these credits. (For purposes of this report we use the term research credit when referring collectively to these options.) The AIRC option was repealed beginning January 1, 2009, while the ASC and regular research credit are available through the end of 2009. The other two separate credits, the university basic research credit and the energy research credit, are targeted to more specific types of research, and taxpayers that qualified could claim them in addition to the research credit. This report does not address those separate credits. Both the definition of research expenses that qualify for the credit and the incremental nature of the credit’s design are important in targeting the subsidy to increase the social benefit per dollar of revenue cost. In order to earn the research credit a taxpayer has to have qualified research expenses (QREs) in a given year and those expenses have to exceed a threshold or base amount of spending. 
The IRC defines credit eligibility in terms of both qualifying research activities and types of expenses. It specifies the following four criteria that a research activity must meet in order to qualify for purposes of the credit: The activity has to qualify as research under IRC section 174 (which provides a separate expensing allowance for research), which requires that an activity be research in the “experimental or laboratory sense and aimed at the development of a new product.” The research has to be undertaken for the purpose of discovering information that is technological in nature. The objective of discovering the information has to be for use in the development of a new or improved business component of the taxpayer. Substantially all of the research activities have to constitute elements of a process of experimentation for a qualified purpose. The IRC also specifies that only the following types of expenses for in-house research or contract research would qualify: wages paid or incurred to employees for qualified services; amounts paid or incurred for supplies used in the conduct of qualified research; amounts paid or incurred to another person for the right to use computers in the conduct of qualified research; and, in the case of contract research, 65 percent of amounts paid or incurred by the taxpayer to any person, other than an employee, for qualified research. Spending for structures, equipment, and overhead does not qualify. In addition, the IRC identifies certain types of activities for which the credit cannot be claimed, including research that is conducted outside of the United States, Puerto Rico, or any other U.S. possession; 
conducted after the beginning of commercial production of a business component; related to the adaptation of an existing business component to a particular customer’s requirements; related to the duplication of an existing business component; related to certain efficiency surveys, management functions, or market research; in the social sciences, arts, or humanities; or funded by another entity. As will be discussed in a section below, the practical application of the various criteria and restrictions specified in the IRC has been the source of considerable controversy between IRS and taxpayers. The research credit has always been an incremental subsidy, meaning that taxpayers earn the credit only for qualified spending that exceeds a defined base amount of spending. The purpose of this design is to reduce the cost of providing a given amount of incentive. Figure 1 illustrates the difference between an incremental credit and two common alternative designs for a subsidy—a flat credit and a capped flat credit. In the case of the flat credit a taxpayer would earn a fixed rate of credit, 20 percent in this example, for every dollar of qualified spending. The taxpayer’s total qualified spending consists of the amount that it would have spent even if there were no subsidy, plus the additional or “marginal” amount that it spends only because the credit subsidy is available. The subsidy encourages additional spending by reducing the after-tax cost of a qualified research project and, thereby, increasing the project’s expected profitability sufficiently to change the taxpayer’s investment decision from no to yes. The subsidy provided for the marginal spending is the only portion of the credit that affects the taxpayer’s research spending behavior. The remainder of the credit is a windfall to the taxpayer for doing something that it was going to do anyway. 
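The three designs compared in figure 1 can be sketched with hypothetical numbers (the 20 percent rate follows the example in the text; the spending amounts, cap, and function names are illustrative, not taken from the report's data):

```python
def flat_credit(qres, rate=0.20):
    """Flat credit: a fixed rate on every dollar of qualified spending."""
    return rate * qres

def capped_credit(qres, cap, rate=0.20):
    """Capped flat credit: a fixed rate on spending up to a specified limit."""
    return rate * min(qres, cap)

def incremental_credit(qres, base, rate=0.20):
    """Incremental credit: a fixed rate only on spending above a base amount."""
    return rate * max(qres - base, 0.0)

# Hypothetical taxpayer: would spend $100 without any subsidy, and spends
# $120 because the credit is available (so marginal spending is $20).
total_qres = 120
would_have_spent = 100

# Flat credit: $24 total, of which $20 (on the spending that would have
# occurred anyway) is a windfall.
flat = flat_credit(total_qres)

# Capped credit with an $80 cap (below the $100 the taxpayer would have
# spent anyway): the entire $16 is a windfall and no incentive is provided
# at the margin.
capped = capped_credit(total_qres, cap=80)

# Ideal incremental credit (base equals spending that would have occurred
# anyway): only the $20 of marginal spending earns credit, so no windfall.
ideal = incremental_credit(total_qres, base=would_have_spent)
```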
In the case of a capped credit, the taxpayer earns a fixed rate of credit on each dollar of qualified spending up to a specified limit. If, as in the example shown in figure 1, the credit’s limit is less than the amount that the taxpayer would have spent anyway, all of the credit paid is a windfall and no additional spending is stimulated because no incentive is provided at the margin. In contrast, the objective of an incremental credit is to focus as much of the credit as possible on marginal spending while keeping the amount provided as a windfall to a minimum. The last example in figure 1 shows the case of an ideal incremental credit—one for which the base of the credit (the amount of spending that a taxpayer must exceed before it can begin earning any credit) perfectly measures the amount of spending that the taxpayer would have done anyway. This credit maintains an incentive for marginal spending but eliminates windfall credits, substantially reducing the credit’s revenue cost. Alternatively, the savings from the elimination of windfalls could be used to increase the rate of credit on marginal spending. The primary differences across the research credit computation options are in (1) how the base spending is defined and (2) the rate of credit that is then applied to the difference between current-year QREs and the base amounts. The box below shows the detailed computation rules for each option. Alternative Computation Options for the Research Tax Credit (Before Restrictions) Regular research credit: Credit = 20% × [current-year QREs − base QREs], where base QREs equal the greater of [the sum of QREs for 1984 to 1988 / the sum of gross receipts for 1984 to 1988] × average gross receipts for the 4 tax years immediately preceding the current one, or 50% × current-year QREs. [This is known as the minimum base amount.] The ratio of QREs to gross receipts during the historical base period is known as the fixed base percentage (FBP). A maximum value for the FBP is set at 16 percent. 
Also, special “start-up” rules exist for taxpayers whose first tax year with both gross receipts and QREs occurred after 1983, or that had fewer than 3 tax years from 1984 to 1988 with both gross receipts and QREs. The FBP for a start-up firm is set at 3% for a firm’s first 5 tax years after 1993 in which it has both gross receipts and QREs. This percentage is gradually adjusted so that by the 11th tax year it reflects the firm’s actual experience during its 5th through 10th tax years. Alternative simplified credit: Credit = 14% × [current-year QREs − 50% × average QREs in the 3 preceding tax years]. If a taxpayer has no QREs in any of its 3 preceding tax years, then the credit is equal to 6% of its QREs in the current tax year. Alternative incremental research credit: (discontinued as of January 1, 2009) The IRC requires that taxpayers reduce the amount of their deductions for research expenses under section 174 by the amount of research credit that they claim. Alternatively, the taxpayer can elect to claim a reduced credit, equal to 65 percent of the credit that it otherwise would have been able to claim. The research credit is a component of the general business credit and, therefore, is subject to the limitations that apply to the latter credit. Specifically, the general business credit is generally nonrefundable, except for the provisions of section 168(k)(4), so if the taxpayer does not have a sufficient precredit tax liability against which to use the credit in the current tax year, the taxpayer must either carry back some or all of the credit to the preceding tax year (if it had a tax liability that year) or carry the credit forward for use in a future tax year. Unused general business credits may be carried forward up to 20 years. 
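The two computation options still available in 2009 can be sketched as follows, per the rules in the box above. The sketch reads the ASC's no-prior-QREs rule as applying when any one of the three preceding years lacks QREs; that reading, and the function names, are ours:

```python
def regular_credit(qres, fixed_base_pct, avg_gross_receipts_4yr):
    """Regular research credit (before restrictions): 20% of current-year
    QREs above base QREs, where base QREs are the greater of the fixed base
    percentage (capped at 16%) times average gross receipts for the 4
    preceding tax years, or 50% of current-year QREs (the minimum base)."""
    fbp = min(fixed_base_pct, 0.16)
    base = max(fbp * avg_gross_receipts_4yr, 0.50 * qres)
    return 0.20 * max(qres - base, 0.0)

def alternative_simplified_credit(qres, prior_3yr_qres):
    """ASC: 14% of current-year QREs above half the average QREs of the 3
    preceding tax years; 6% of current-year QREs if the taxpayer had no
    QREs in those years (our reading of the rule in the text)."""
    if any(q == 0 for q in prior_3yr_qres):
        return 0.06 * qres
    avg = sum(prior_3yr_qres) / len(prior_3yr_qres)
    return 0.14 * max(qres - 0.50 * avg, 0.0)

# Hypothetical taxpayer with $100 of current-year QREs: an FBP of 3% and
# $1,000 in average gross receipts would put the historical base at $30,
# so the 50% minimum base ($50) binds and the regular credit is $10.
regular = regular_credit(100, 0.03, 1000)

# Under the ASC with $60 of QREs in each of the 3 preceding years, the
# base is $30 and the credit is 14% of $70, or $9.80.
asc = alternative_simplified_credit(100, (60, 60, 60))
```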
When Congress originally enacted the research credit in 1981, it included rules “intended to prevent artificial increases in research expenditures by shifting expenditures among commonly controlled or otherwise related persons.” Without such rules, a corporate group might shift current research expenditures away from members that would not be able to earn the credit due to their high base expenditures to members with lower base expenditures. A group could, thereby, increase the amount of credit it earned without actually increasing its research spending in the aggregate. Under the IRC, for purposes of determining the amount of the research credit, the qualified expenses of members of the same controlled group of corporations are aggregated. The language of the relevant subsection specifically states that: 1. All members of the same controlled group of corporations shall be treated as a single taxpayer, and 2. The credit (if any) allowable under this section to each such member shall be its proportionate share of the qualified research expenses and basic research payments giving rise to the credit. Congress directed that Treasury regulations drafted to implement these aggregation rules be consistent with these stated principles. As discussed in a later section, some tax practitioners say that Treasury’s regulations on this issue are unnecessarily burdensome. 
One of the key measures that we will use to compare credit designs is the marginal effective rate (MER) of the credit, which quantifies the incentive that a credit provides to marginal spending and which can be simply stated as

MER = change in the credit benefit / marginal qualified research expenses (QREs)

The definition of a “controlled group of corporations” for purposes of the credit has the same meaning as used in determining a parent-subsidiary controlled group of corporations for the consolidated return rules, except the aggregation rule is broader, substituting corporations that are greater than 50 percent owned for 80 percent owned corporations. The aggregation rules also apply to trades or businesses under common control. A trade or business is defined as a sole proprietorship, a partnership, a trust or estate, or a corporation that is carrying on a trade or business.

However, one factor that reduces the MER for all credit earners, regardless of the design, is the offset of the credit against the section 174 deduction for research spending (or the alternative election of the reduced credit amount) mentioned earlier. For corporations subject to the top corporate income tax rate of 35 percent, this offset effectively reduces the regular credit’s MER from 20 percent to 13 percent and the ASC’s MER from 14 percent to 9.1 percent. Another factor that reduces the MER of many taxpayers is the fact that they do not have sufficient tax liabilities to use all of the credits they earn in the current year. When a taxpayer cannot use the credit until sometime in the future, the present value of the credit decreases according to the taxpayer’s discount rate. For example, if the taxpayer has a discount rate of 5 percent and must delay the use of $1 million of credit for three years, the present value of that credit is reduced to approximately $864,000. Such a delay, therefore, would reduce the regular credit’s MER from 13 percent to about 11.2 percent. 
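The arithmetic in this paragraph can be reproduced directly (the function names are ours):

```python
def net_mer(statutory_rate, corp_tax_rate=0.35):
    """MER after the section 174 deduction offset: because the claimed
    credit reduces the research deduction, the net benefit per marginal
    dollar is the statutory rate times (1 - tax rate)."""
    return statutory_rate * (1 - corp_tax_rate)

def delayed_mer(mer, discount_rate, years_delayed):
    """Present value of the MER when use of the credit is delayed."""
    return mer / (1 + discount_rate) ** years_delayed

regular = net_mer(0.20)                  # 20% reduced to 13%
asc_rate = net_mer(0.14)                 # 14% reduced to 9.1%
delayed = delayed_mer(regular, 0.05, 3)  # roughly 11.2%

# Present value of $1 million of credit delayed 3 years at a 5% discount
# rate: roughly $864,000, as in the example above.
pv_of_credit = 1_000_000 / 1.05 ** 3
```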
This delay in the use of the credit also reduces the present value of the revenue cost to the government. In the remainder of this report we make a distinction between the amount of net credit (after the section 174 offset) that taxpayers earn for a given tax year and the credit’s discounted revenue cost, which reflects delays in the use of credits. Unless otherwise specified, we use the term revenue cost to refer to the discounted revenue cost. The first step in the bang-per-buck computation is to multiply the credit’s weighted average MER by the price elasticity of research spending, which is defined as the percentage change in total QREs divided by the percentage change in the price of a unit of research. If the average MER were 5 percent and the price elasticity were -1, then the credit would increase total QREs by 5 percent. The next step in the computation is to apply the percentage increase to the amount of aggregate qualified spending that would have been done without the credit in order to determine the total amount of spending stimulated by the credit. Finally, the bang-per-buck can be estimated by dividing the total amount stimulated by the credit’s revenue cost. In this study, we provide some estimates of the credit’s weighted average MER and revenue cost, as well as estimates of the aggregate amount of qualified research spending. We have not estimated the price elasticity of research spending and the available estimates from past empirical research leave considerable uncertainty regarding the size of that elasticity. Nevertheless, as can be seen in figure 2, for any value of the price elasticity, a credit design that provides the same weighted average MER as another design, but at a lower revenue cost, should provide a higher bang-per-buck than that other credit. Therefore, comparing different designs on the basis of their MER and revenue cost should be equivalent to comparing them on the basis of their bang-per-buck. To fully assess the research credit’s value to society, more than just the amount of spending stimulated per dollar of revenue cost would have to be examined. 
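The bang-per-buck steps described above can be sketched as follows (the baseline spending and revenue cost figures are hypothetical round numbers, not estimates from this study):

```python
def bang_per_buck(avg_mer, price_elasticity, baseline_qres, revenue_cost):
    """Spending stimulated per dollar of revenue cost, following the steps
    in the text: the average MER times the size of the price elasticity
    gives the percentage increase in QREs, which is applied to the spending
    that would have occurred without the credit and then divided by the
    credit's discounted revenue cost."""
    pct_increase = avg_mer * abs(price_elasticity)
    stimulated = pct_increase * baseline_qres
    return stimulated / revenue_cost

# Per the text's example, a 5 percent average MER with an elasticity of -1
# increases total QREs by 5 percent. With a hypothetical $100 billion of
# baseline spending and a $5 billion revenue cost, the bang-per-buck is 1.
bpb = bang_per_buck(0.05, -1.0, baseline_qres=100e9, revenue_cost=5e9)
```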
A comparison would have to be made between (1) the total benefits gained by society from the research stimulated by the credit and (2) the estimated costs to society resulting from the collection of taxes required to fund the credit. The social benefits of the research conducted by individual businesses include any new products, productivity increases, or cost reductions that benefit other businesses and consumers throughout the economy. Although most economists agree that research spending can generate social benefits, the effects of the research on other businesses and consumers are difficult to measure. We are not aware of any studies that have empirically estimated the credit’s net benefit to society. Although more than 15,000 corporate taxpayers claimed research credits each year from 2003 through 2005, a significantly smaller population of large corporations (those with business receipts of $1 billion or more) claimed most of the credit during this period. In 2005, 549 such corporations accounted for about 65 percent of the $6 billion of net credit claimed that year (see figure 4 and table 3 in appendix II). Even within the population of large corporations credit use is concentrated among the largest users. The 101 corporations in our panel database in 2004 accounted for about 50 percent of the net credit claimed that year. Corporations with business receipts of $1 billion or more accounted for an even larger share—about 70 percent—of the $131 billion of total QREs reported by credit claimants for 2005. In 2005 approximately 69 percent of QREs were for wages paid to employees engaged in qualified research activities. Almost all of the remaining QREs were for supplies used in research processes (about 16 percent) and for contract research (about 15 percent). Prior to the introduction of the ASC in 2006, taxpayers that used the regular credit accounted for the majority of QREs and an even larger majority of the research credit claimed. 
In 2005, regular credit users reported about 75 percent of all QREs and claimed about 90 percent of total research credits. (See figure 5 in appendix II.) Their share of total credits was larger than their share of total QREs because the regular credit rules were more generous than those of the AIRC for taxpayers who could qualify for the former. Most of the regular credit users were subject to the 50-percent minimum base, which, as we will explain in a later section, had a significant effect on the MER they received from the credit. The lack of current tax liabilities was another factor that affected the MERs of many credit claimants. In 2005, 44 percent of total net credits earned could not be used immediately. (See figure 6 in appendix II.) By taking into account factors, such as which credit a taxpayer selected, whether it was subject to a minimum base, and whether it could use its credit immediately, we were able to estimate MERs for all of the credit claimants represented in SOI’s corporate database (see appendix I for details). These individual estimates allowed us to compute a weighted average MER for all taxpayers. We also estimated the discounted cost to the government of the credits that all taxpayers earned. These estimates, along with data on total QREs, permitted us to estimate the bang-per-buck of the credit for 2003 through 2005 for alternative assumptions about the price elasticity of research spending. (See table 4 in appendix II.) Our estimate of the overall MER in 2005 ranged between 6.4 percent and 7.3 percent, depending on assumptions about discount rates and the length of time before taxpayers could use their credits. Our estimates of the discounted revenue cost were also sensitive to these assumptions and ranged between $4.8 billion and $5.8 billion. The bang-per-buck estimates were not sensitive to these particular assumptions; however, they were quite sensitive to the price elasticity assumptions. 
If the elasticity was -0.5, the bang-per-buck for 2005 would have been about $0.80. If the elasticity was -2, the bang-per-buck would have been about $3.00. Data on amended claims filed by our panel of large corporations indicate that, in the aggregate, these amendments increased the amount of credit claimed by between 1.5 percent and 5.4 percent (relative to the amounts claimed on initial returns) for each tax year from 2000 through 2003. (See tables 5 through 8 in appendix II.) The credit increase through amendments for tax year 2004 was only 0.5 percent. Data from IRS examinations of these large corporations indicate that examiners recommended changes that, in the aggregate, would have decreased credits claimed by between 16.5 and 27.1 percent each tax year from 2000 through 2003. (See tables 9 through 12 in appendix II.) The lower percentage change of 9 percent for 2004 reflects, in part, the fact that audits for that tax year had not progressed as far as those for the earlier years. Changes of these magnitudes raise the question of how much credit taxpayers actually expected to receive when they filed their claims and, more important, when they were making their research spending decisions for the years in question. These expectations are critical because they are what affect the taxpayer’s decisions, not the amounts of credit actually received well after the decisions have been made. For those taxpayers that do not expect to file amendments and do not expect IRS to change their credits, the amounts claimed on their original returns should be the best estimate of their expectations. For taxpayers that know they may be stretching the rules with some of the expenses they are trying to claim as QREs, their post-exam credit amounts may be better estimates of their expectations. 
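The bang-per-buck arithmetic described above can be sketched in a few lines. This is a simplified illustration, not our actual estimation method: it uses the standard linear approximation in which the credit lowers the effective price of research by the MER, and it plugs in rough midpoints of the 2005 figures reported above (an MER near 6.8 percent, total QREs of about $131 billion, and a discounted revenue cost near $5.3 billion); those midpoints are our assumptions for illustration.

```python
def bang_per_buck(elasticity, mer, total_qres, revenue_cost):
    """Induced research spending per dollar of discounted revenue cost.

    Linear approximation: the credit lowers the after-tax price of
    research by the MER, so induced spending is |elasticity| * MER * QREs.
    """
    induced_spending = abs(elasticity) * mer * total_qres
    return induced_spending / revenue_cost

# Illustrative 2005 midpoints (assumptions, not exact GAO estimates):
low = bang_per_buck(-0.5, 0.068, 131e9, 5.3e9)   # roughly $0.8 per dollar
high = bang_per_buck(-2.0, 0.068, 131e9, 5.3e9)  # roughly $3 per dollar
```

Because the MER and the revenue cost move together under different discounting assumptions, the ratio is far more sensitive to the elasticity assumption than to those assumptions, consistent with the estimates above.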
In other cases, given the lack of clarity in certain aspects of the definitions of both QREs and gross receipts, taxpayers may be uncertain whether they will receive any credit for particular research projects. Such uncertainty reduces the credit’s effective incentive. The regular credit provides a higher average MER for a given revenue cost than does the current ASC; however, over time, the historically fixed base of the regular credit becomes a very poor measure of the research spending that taxpayers would have done anyway. As a result, the benefits and incentives provided by the credit become allocated arbitrarily and inequitably across taxpayers, likely causing inefficiencies in resource allocation. As we noted earlier, an ideal incremental credit would reward marginal research spending but not any spending that a taxpayer would have done anyway. In reality, it is impossible for policymakers to know how much research spending taxpayers would have done without the credit. Any practical base that can be designed for the credit will only approximate the ideal base with some degree of inaccuracy. The primary base for the regular credit (except for start-up companies) is determined by a taxpayer’s spending behavior that occurred up to 25 years ago (see the computation rules on page 7). There is little reason to believe that, in most cases, the ratio of research spending to gross receipts from that long ago, when multiplied by the taxpayer’s most recent 4-year average of gross receipts, would accurately approximate the ideal base for that taxpayer. Most credit claimants received substantial windfalls. Regular credit claimants subject to the 50 percent minimum base represented about 71 percent of all claimants in 2005 (see figure 5 in appendix II). More than half of the credit such claimants earned was a windfall. 
Even the highest elasticity estimates and the largest possible MER (which together should produce the largest increase in research spending) indicate that spending increases due to the credit represent less than 15 percent of the total research spending of these claimants. Since regular credit users subject to the 50 percent minimum base receive a credit for half of their research spending, the credit for marginal spending is less than half of the credit they receive.

Inaccuracies in the base also cause disparities across taxpayers in both the marginal incentives and windfall benefits that they receive from the credit. Table 1 shows the extent of the disparities across taxpayers that use different credit options and are subject to different constraints. Taxpayers for which bases exceeded their actual spending received no incentive from the credit. Regular credit users whose primary bases were not so inaccurately low that the minimum base took effect had MERs of 13 percent (if they could use their credits immediately), while those with primary bases so inaccurate that they were subject to the minimum base had their MERs cut to 6.5 percent (again, if they could use their credits immediately). Using the IRS tax data, we estimated that the regular credit users subject to the minimum base received an average effective rate of credit (total credit divided by total spending) more than one and one-half times as large as that received by users who were not subject to the minimum base. The average effective rate includes windfall credits, which the MER does not. This result indicates that, even though the minimum base reduced the credits that taxpayers earned on both their marginal spending and on the spending they would have done anyway, taxpayers subject to the minimum base still received larger windfall credits than those who were not. Meanwhile, AIRC users received significantly lower MERs and average effective credit rates than did either group of regular credit users.
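The 13 percent and 6.5 percent MERs cited above can be reproduced with a short calculation. The sketch below assumes a 35 percent corporate tax rate and the IRC section 280C rule that reduces the research deduction by the amount of the credit, so each dollar of credit is worth 65 cents after tax; those particulars are our assumptions for illustration.

```python
CORPORATE_TAX_RATE = 0.35    # assumed top statutory rate for the period
REGULAR_CREDIT_RATE = 0.20

def regular_credit_mer(minimum_base_binds):
    """After-tax marginal effective rate (MER) of the regular credit.

    Under section 280C the research deduction is reduced by the credit,
    so a dollar of credit is worth (1 - tax rate) after tax. When the
    50-percent minimum base binds, a marginal dollar of QREs also raises
    the base by $0.50, cutting the marginal credit in half.
    """
    after_tax_rate = REGULAR_CREDIT_RATE * (1 - CORPORATE_TAX_RATE)  # 13%
    return after_tax_rate * (0.5 if minimum_base_binds else 1.0)
```

Here `regular_credit_mer(False)` yields 13 percent and `regular_credit_mer(True)` yields 6.5 percent, matching the figures for taxpayers who can use their credits immediately.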
Although data are not yet available on credit use after the ASC was introduced, we applied current credit rules to the historical data from our panel of large credit claimants to estimate how many of them would have chosen the ASC if it had been available in 2003 and 2004. We found that, if taxpayers had selected the option that provided them with the largest credit amount, most of the panel members would have switched to the ASC, but a significant number would still have claimed the regular credit. ASC users would have accounted for about 62 percent of the panel population’s total QREs and between 56 percent and 60 percent of the revenue cost of all panel members in those years. (See table 13.) Some taxpayers still had MERs over 10 percent while others had negative MERs. The disparate distribution of incentives and windfalls is not only inequitable, it can also result in a misallocation of research spending and economic activity in general across competing sectors. These misallocations may reduce economic efficiency and, thereby, diminish any economic benefits of the credit.

An additional significant problem with the regular credit’s base is the difficulty that taxpayers have in substantiating their base computations to the IRS. Many businesses lack the types of records dating to the mid-1980s that are needed to complete these computations with a high degree of accuracy, and the substantiation of base QREs has become a leading issue of contention between regular credit users and the IRS. (This problem will be discussed in more detail in a later section.) The base of the ASC continually updates itself; however, an important disadvantage of this updating is that a taxpayer’s current year research spending will increase its base in future years, thereby reducing the amount of credit it earns in those years. Figure 3 illustrates this problem for the case in which a taxpayer earns a credit each year but is not subject to the minimum base.
For every $1 million of spending increase this year, the taxpayer’s base in each of the next 3 years would increase by $166,667. These base increases reduce the amount of credit that the taxpayer can earn in each of the next 3 years by $15,167, for a combined total of $45,500. As a result, the actual benefit that the taxpayer receives for increasing this year’s spending is cut in half, and the MER is reduced to 4.6 percent. If the taxpayer anticipated that its future spending would decline so much that it would not be able to earn any credit in the next 3 years, then there would be no negative future consequences from increasing this year’s spending and the MER would be 9.1 percent. However, if a taxpayer does not expect to exceed its base in the current year, even after increasing its spending by a marginal amount, but plans to increase its future spending enough to earn credits in the future years, then it would receive no current benefit for that marginal spending. The taxpayer would still suffer the negative effects in the future years, meaning that, in this case, the MER would actually be negative.

Given that the ASC base is only one-half of the taxpayer’s past 3 years’ average spending, most research-performing companies should be able to earn some credit every year, which was an important reason why this option was introduced. However, the low base is likely to be below most taxpayers’ ideal bases and some are likely to earn credit on substantial amounts of research spending that they would have done anyway. There currently is no minimum base for the ASC to limit the amount of windfall credit that taxpayers can earn. Only the lower credit rate (14 percent vs. 20 percent for the regular credit) contains the cost of these windfalls.
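The figure 3 arithmetic can be checked directly. The sketch below assumes the 14 percent ASC rate, a 35 percent corporate tax rate (our assumption, which yields the 9.1 percent after-tax rate), and no discounting of the future credit reductions, as in the illustration above.

```python
ASC_RATE = 0.14
CORPORATE_TAX_RATE = 0.35                  # assumed; yields the 9.1% rate
after_tax_rate = ASC_RATE * (1 - CORPORATE_TAX_RATE)      # ~0.091

extra_spending = 1_000_000
# The ASC base is half of the prior 3-year average of spending, so this
# year's extra $1 million raises each of the next 3 years' bases by:
base_increase = 0.5 * extra_spending / 3                  # ~$166,667

current_credit = after_tax_rate * extra_spending          # ~$91,000
future_credit_loss = 3 * after_tax_rate * base_increase   # ~$45,500

# Net benefit per dollar of extra spending: the MER falls to ~4.6%.
mer = (current_credit - future_credit_loss) / extra_spending
```

Setting `future_credit_loss` to zero (a taxpayer expecting to earn no credit in the next 3 years) recovers the 9.1 percent MER, as described above.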
By applying the credit rules that existed immediately prior to the introduction of the ASC to the historical data for our panel of corporations and, then, applying the rules that existed in 2009, we were able to compare how these taxpayers would have fared under the different sets of options available. If we assumed a relatively low discount rate and short length of carryforward (for those who could not use their credits immediately), then the estimated weighted average MER for our panel prior to the introduction of the ASC ranged between 7.4 percent and 8.3 percent, depending on which years of data we used and whether the data related to before or after amendments and IRS exams. If the ASC option had been available to these corporations and they chose the credit option that provided them the largest amount of credit, we estimate that their weighted average MER would have been between 5.6 percent and 6.3 percent. (See table 14 in appendix II.) This decline in the MER would have been accompanied by an increase in the revenue cost of the credit of between about 17 percent and 29 percent. These results indicate that the introduction of the ASC lowered the bang-per-buck of the credit. The availability of the new option would not have reduced any taxpayer’s windfall credit, but it would likely have increased the windfalls of some. Those taxpayers that would have switched from the regular credit to the ASC are likely to have seen their MERs decline, while those who switched from the AIRC may have seen their MERs increase or decrease. Our estimates are based on an analysis of a fixed population of corporations; they do not reflect the effects of the likely increase in the number of taxpayers claiming the credit thanks to the lower base of the ASC. The addition of these new claimants likely would have reduced the credit’s bang-per-buck further because they would all have the lower MERs provided by the ASC.
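The option-selection step in simulations of this kind can be illustrated with a simplified sketch. The functions below apply the statutory rates and bases described earlier (a 20 percent regular credit with a 50-percent minimum base, and a 14 percent ASC with a base of half the prior 3-year average of QREs), but they ignore carryforwards, the section 280C deduction offset, start-up rules, the cap on the fixed-base percentage, and controlled-group aggregation; the taxpayer in the usage example is hypothetical.

```python
def regular_credit(qre, fixed_base_pct, avg_gross_receipts):
    """Regular credit: 20% of QREs above the base. The base is the
    fixed-base percentage times average gross receipts, but never less
    than 50 percent of current-year QREs (the minimum base)."""
    base = max(fixed_base_pct * avg_gross_receipts, 0.5 * qre)
    return 0.20 * max(qre - base, 0.0)

def asc(qre, avg_prior_3yr_qre):
    """ASC: 14% of QREs above half the prior 3-year average of QREs."""
    return 0.14 * max(qre - 0.5 * avg_prior_3yr_qre, 0.0)

def best_option(qre, fixed_base_pct, avg_gross_receipts, avg_prior_3yr_qre):
    """Pick whichever option yields the larger credit, as assumed in
    the simulations described above."""
    reg = regular_credit(qre, fixed_base_pct, avg_gross_receipts)
    alt = asc(qre, avg_prior_3yr_qre)
    return ("regular", reg) if reg >= alt else ("ASC", alt)

# Hypothetical taxpayer: $10 million of current QREs, a 10 percent
# fixed-base percentage, $30 million of average gross receipts, and an
# $8 million prior 3-year QRE average. The minimum base binds (base =
# $5 million), so the regular credit beats the ASC here.
choice, amount = best_option(10e6, 0.10, 30e6, 8e6)
```

Raising the hypothetical taxpayer’s average gross receipts (and thus its primary base) would eventually make the ASC the larger credit, which is how base inaccuracy drives option choice.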
The MERs of these taxpayers would be higher than the zero MERs they faced before the ASC was available; however, the revenue cost of providing them with the credit, which also was zero previously, would have increased as well. The problems we identified with the base of the regular credit can be addressed by either (1) eliminating the regular credit option or (2) retaining the regular credit but updating its base so that the distribution of credit benefits and incentives across taxpayers would be less uneven and arbitrary. Under either of these approaches the primary bases for all taxpayers would be linked to their recent spending behavior, rather than decades-old behavior. The recent behavior is likely to be more closely correlated with their ideal bases than the older behavior would be. The results of our simulations (summarized in the top portion of table 2) indicate that both of these changes would have approximately the same effect because, in each case, all of the corporations in our panel would use the ASC. (Details of our results are presented in tables 15 and 16 in appendix II.) Under the first change, the ASC would be the only option available; under the second change, all of the taxpayers would receive larger amounts of credits under the ASC than under the regular credit (except for those that could not earn either credit), so they would voluntarily choose the ASC. In both cases, if the rate of the ASC is kept at 14 percent, both the average MER and the revenue cost would decrease, but the percentage decrease in the average MER in most cases would be at least twice as large, meaning that the credit’s bang-per-buck would decrease. If the rate of the ASC were raised to 20 percent, the average MER would increase relative to existing rules under most combinations of assumptions, but the revenue cost would increase to a much larger extent, again, meaning that the bang-per-buck would decrease. 
No clear purpose would be served by retaining both the ASC and a regular credit whose base would be updated almost as frequently as that of the ASC. If the bases for both of the options were linked to recent spending behavior, there would be no rationale for providing taxpayers with different rates of credit under two options. Moreover, once taxpayers began to expect regular updates of the base, the expected negative effects on future credits would lower the MER of the regular credit in the same way that they do for the ASC.

One potential compromise between a frequently updated base that significantly reduces the credit’s bang-per-buck and a fixed base that causes distorting disparities is to have a base that is updated only in those cases where it has clearly become far out of line for individual taxpayers. For example, taxpayers that spend less than 75 percent of their base amount for the regular credit could be given the option of using a more recent period of years for computing their fixed base percentage. Taxpayers at the other extreme—those subject to the current minimum base—could be required to use a more recent base period. Taxpayers between these two extremes would not have their bases updated, which means that, if they are not close to the minimum base, they would not face negative future effects. However, one significant problem with this approach is that it would give taxpayers who are close to being subject to the minimum base an extremely large disincentive to increase their spending. In addition, the taxpayers without updated bases would still face the substantial recordkeeping difficulties that are discussed in a later section.

Results from simulations based on our panel database suggest that adding a minimum base to the ASC is likely to improve its bang-per-buck. The effects of adding a minimum base vary, depending on whether both the ASC and regular option are retained, or only the former.
These variations are summarized in the lower portion of table 2 and further details are provided in tables 17, 18, and 19 in appendix II. Under most combinations of assumptions that we examined, when an ASC is the only option available, an ASC with a 50-percent minimum base could provide the same average MER as an ASC without a minimum base, but at a lower revenue cost. In all but one unlikely case, the reductions in discounted revenue cost ranged between 1.5 percent and 18 percent, with most exceeding 3 percent. Revenue savings would be achieved regardless of whether the rate of the ASC is 14 percent or 20 percent. We also examined the effects of adding a 75-percent minimum base; however, under almost all assumptions we found the revenue savings to be less than or equal to those gained by adding a 50-percent minimum base. If both the ASC with a 14-percent rate and the regular credit with a 20-percent rate and an updated base are available, the addition of a minimum base to the ASC would cause some taxpayers to prefer the regular credit over the ASC. Those regular credit users would have higher MERs than they would have had under the ASC, so the average MER would be higher if both options were available. Those users’ credit amounts would also be higher; however, the percentage differences in their credits would be smaller than the percentage differences in their MERs (see tables 18 and 19), meaning that the credit’s bang-per-buck would be slightly higher. However, this advantage in terms of bang-per-buck would come at the cost of providing unequal incentives across taxpayers without a rationale.

In addition to examining the effects of adding a minimum base to the ASC, we also simulated the effects of increasing the credit’s base rate (i.e., having the base equal to 75 percent or 100 percent of a taxpayer’s 3-year moving average of spending, rather than 50 percent as under current rules).
We found that these changes would significantly increase the percentage of our panel corporations that have negative MERs.

A well-targeted definition of QREs (and IRS’s ability to enforce the definition) can improve the efficiency of the credit to the extent that it directs the subsidy toward research with high external benefits and away from research with low external benefits. By focusing the subsidy in this manner, the definition can increase the amount of social benefit generated per dollar of tax subsidy provided through the credit. Specifying a definition that serves this purpose and that is also readily applied by both IRS and taxpayers has proven to be a challenge for both Congress and the Department of the Treasury. There are numerous areas of disagreement between IRS and taxpayers concerning what types of spending qualify for the research credit. These disputes raise the cost of the credit to both taxpayers and IRS and diminish the credit’s incentive effect by making the ultimate benefit to taxpayers less certain. Many of the tax practitioners we interviewed shared a common complaint: that IRS examiners often demanded that research activities meet a higher standard of innovation than is required by either the IRC or Treasury regulations. The IRS officials we interviewed disagreed with these assertions and referred to language from their Research Credit Audit Technique Guide that instructs examiners on the relevant language from current regulations. Both practitioners and IRS officials acknowledged that some controversies arise because language in the IRC and regulations does not always provide a bright line for identifying qualified activities. For example, one qualification requirement is that the research must be intended to eliminate uncertainty concerning the development or improvement of a business component.
The regulations say that uncertainty exists “if the information available to the taxpayer does not establish the capability or method for developing or improving the business component, or the appropriate design of the business component.” An IRS official said that examiners could use clarification of the meaning of “information available to the taxpayer,” while a practitioner noted that the regulations do not say what degree of improvement in a product is required for the underlying research to be considered qualified. The practitioner said that research for improvements is more difficult to get qualified than research for new products. Several particularly contentious issues relate to specific types of research activities or expenses, including the following: The definition and qualification standards for internal-use software (IUS). Research relating to the development of software for the taxpayer’s own internal use is generally excluded from qualified research, unless it meets an additional set of standards that are not applied to other research activities. The IRC provides Treasury the authority to specify exceptions to this exclusion but Treasury did not address this issue when it published final research credit regulations in 2004. Treasury pointed to the significant changes in computer software and its role in business activity since the mid-1980s (when the IUS exclusion was added to the IRC) as making it difficult to determine how Congress intended the new technology to be treated. Meanwhile, tax practitioners complain that IRS continues to consider most software development expenditures in the services industry to be IUS. Some commentators have questioned whether there is still an economic rationale for distinguishing between IUS and software used for other purposes, given that innovations in software can produce spillover benefits regardless of whether the software is sold to third parties. 
IRS officials say that eliminating the distinction would significantly increase the revenue cost of the credit but they doubt that it would simplify administration. They believe that a bright-line definition of IUS, such as that contained in 2001 proposed regulations, is the only practical approach for dealing with this issue. The development of IUS regulations has been included in all of Treasury’s priority guidance plans since the issue was left out of the final research credit regulations; however, Treasury officials have not indicated when they are likely to be issued or what position they are likely to take.

Late-stage testing of products and production processes. Treasury regulations provide that “the term research or experimental expenditures does not include expenditures for the ordinary testing or inspection of materials or products for quality control (quality control testing).” However, the regulations clarify that “quality control testing does not include testing to determine if the design of the product is appropriate.” Some tax consultants told us that IRS fairly consistently disqualifies research designed to address uncertainty relating to the appropriate design of a product. One of them said that IRS rejected testing activities simply on the basis of whether the testing techniques, themselves, were routine. IRS officials said that they typically reject testing that is done after the taxpayer has proven the acceptability of its production process internally. They noted that there is no bright line between nonqualifying ordinary quality control testing and qualified validation testing. These determinations are made on a case-by-case basis for each activity. The officials also said that they have disagreements with taxpayers over when commercial production begins and suggested that this is one area where some further clarification in regulations might help.
Product testing is a particularly important issue for software development, which in general (not just IUS) is another area of significant contention between IRS and taxpayers. Direct supervisory and support activities. Qualified research expenses include the wages of employees who provide direct supervision or direct support of qualified research activities. The practitioners we interviewed said that it is extremely difficult to get IRS to accept that higher level managers are often involved in research and the direct supervision of research. Many of their clients have flat organizational structures and the best researchers are often given higher titles so that they can be paid more. They say that IRS often rejects wage claims simply on the basis of job titles. IRS officials told us that wages of higher level managers could be eligible for the credit; however, the burden of proof is on the taxpayer to substantiate the amount of time that those managers actually spent directly supervising a qualified activity. Regarding the issue of direct support, some commentators would like IRS’s guidance to more clearly state that activities such as bid and proposal preparation (at the front end of the research process) and development testing and certification testing (at the final stages of the process) are qualified support activities that do not have to meet specific qualification tests themselves, as long as the activities that they support already qualify as eligible research. IRS officials told us that they would like better guidance on this issue and were concerned that some taxpayers want to include the wages of anyone with any connection at all to the research, such as marketing employees who attend meetings to talk about what customers want. Supplies. The IRC specifically excludes expenditures to acquire depreciable property from eligibility for either the deduction of research expenditures under section 174 or for the research credit. 
Taxpayers have attempted to claim the deduction or the credit for expenditures that they have made for labor and supplies to construct tangible property, such as molds or prototypes, that they used in qualified research activities. IRS has taken the position that such claims are not allowed (even though the taxpayers do not, themselves, take depreciation allowances for these properties) because the constructed property is of the type that would be subject to depreciation if a taxpayer had purchased it as a final product. IRS also says that it is improper for taxpayers to include indirect costs in their claims for “self-constructed supplies,” even when the latter are not depreciable property. Taxpayers are challenging IRS’s position in at least one pending court case. Both taxpayers and IRS examiners would like to see clearer guidance in this area. Treasury has had a project to provide further guidance under section 174 in its priority guidance plans since at least 2005, but the guidance has not yet been issued. IRS has also been concerned with the extent to which taxpayers have attempted to recharacterize ineligible foreign research services contracts as supply purchases.

For taxpayers claiming the regular research credit, the definition of gross receipts is important in calculating the “base amount” to which their current-year QREs are compared. The definition also was critical for determining the amount of credit that taxpayers could earn with the AIRC. (Even though this credit option is no longer available, a decision regarding the definition of gross receipts will affect substantial amounts of AIRC claims that remain in contention between taxpayers and IRS for taxable years before 2009.) Gross receipts do not enter into the computation of the ASC or the basic research credit.
If the regular credit is eliminated, this becomes a nonissue for future tax years, but the consequences for taxpayers and the revenue cost to the government from past claims will be substantial (particularly as a result of the extraordinary repatriation of dividends in response to the temporary incentives under IRC section 965). The principal issue of contention between taxpayers and IRS is the extent to which sales and other types of payments among members of a controlled group of corporations should be included in that group’s gross receipts for purposes of computing the credit. Neither the IRC nor regulations are clear on this point and IRS has issued differing legal analyses in specific cases over the years. IRS’s current interpretation of the credit regulations that generally exclude transfers between members of controlled groups is that it applies only to QREs and not to gross receipts; consequently, all intragroup sales should be included when computing a group’s total gross receipts. This option would eliminate any double-counting of QREs but could overstate the resources available to the group by double-counting sales and income payments between group members. However, going to the other extreme and excluding all intragroup transactions from the group’s total gross receipts could exclude a large share of the export sales of U.S. multinational corporations (those made to foreign affiliates for subsequent resale abroad) from gross receipts. This result would favor regular credit users whose export sales have increased as a share of their total sales and disfavor users whose export shares have declined. These disparities in the credit benefits across taxpayers serve no useful purpose. An intermediate alternative would be to exclude all transactions between controlled group members except for intermediate sales by U.S. members to foreign members. 
This approach would not discriminate among taxpayers on the basis of whether they export their products or sell them domestically because it would include all sales that are effectively connected with the conduct of a trade or business within the United States in a group’s gross receipts. This option would also eliminate any double-counting of intragroup transfers in gross receipts, which is important if Congress wishes to continue using gross receipts as a measure of the resources available to corporations.

Neither the IRC nor Treasury regulations contain specific recordkeeping requirements for claimants of the research credit. However, claimants are subject to the general recordkeeping rules of IRC section 6001 and Treasury regulations section 1.6001, applicable to all taxpayers, that require them to keep books of account or records that are sufficient to establish the amount of credit they are claiming. In the case of the research credit, a taxpayer must provide evidence that all of the expenses for which the credit is claimed were devoted to qualified research activities, as defined under IRC section 41. Section 41 requires that the qualification of research activities be determined separately with respect to each business component (e.g., a product, process, or formula), which means that the taxpayer must be able to allocate all of its qualified expenses to specific business components. Moreover, the taxpayer must be able to establish these qualifications and connections to specific components not only for the year in which the credit is being claimed, but also for all of the years in its base period. There were wide differences of opinion between the IRS examiners and the tax practitioners we interviewed regarding what methods are acceptable for allocating wages between qualifying and nonqualifying activities.
Practitioners noted that IRS prefers project accounting but, in its absence, used to accept cost center or hybrid accounting; however, in recent years, IRS has been much less willing to accept claims based on the latter two approaches. They also said that IRS examiners now regularly require contemporaneous documentation of QREs, even though this requirement was dropped from the credit regulations in 2001. Some practitioners suggested that the changes in IRS’s practices came about because examiners were having difficulty determining how much QREs to disallow in audits when they found that a particular activity did not qualify. Others said that IRS does not want to devote the considerable amounts of labor required to review the hybrid documentation. The IRS officials we interviewed said that more taxpayers have or had project accounting than was suggested by the tax practitioners. The officials said that the consultants ignored these accounts because they boxed them in (in terms of identifying qualified research expenses). In their view the high-level surveys and interviews of managers or technical experts from the business, which many taxpayers try to use as evidence, are not a sufficient basis for identifying QREs. The officials noted that sometimes consultants conduct interviews for one tax year and then extrapolate their results to support credit claims for multiple earlier tax years. IRS officials have been particularly concerned with the quality of late or amended filings of credit claims. In April 2007, IRS designated “research credit claims” as a Tier I compliance issue because of the volume and difficulty of auditing these claims. In announcing the designation IRS noted that a growing number of credit claims were based on marketed tax products supported by studies prepared by the major accounting and boutique tax advisory firms. 
IRS officials expressed concern that when taxpayers submit amendments to their IRS Forms 6765, they often do so late in an audit after IRS has already spent significant time reviewing the initial claims. In many cases the taxpayers settle for 50 cents on the dollar as soon as IRS challenges a claim. Although most of the tax practitioners we interviewed acknowledged that there was a proliferation of aggressive and sometimes sloppy research credit claims, they pointed to many legitimate reasons for companies to file claims on amended returns, including long-standing uncertainties and changes in the research tax credit regulations. The practitioners say that IRS’s standards are stricter than Congress intended and than what courts have allowed in recent cases. IRS disagrees and says its administrative practices are consistent with the court rulings. The burden of substantiating research credit claims represents a significant discouragement to potential credit users; however, the flexibility in substantiation methods that many practitioners seek could help some taxpayers claim larger credits than those to which they are entitled. Although some taxpayers, particularly those for which research activities constitute a large proportion of their total operations, are able to meet the recordkeeping standards that IRS is currently enforcing, many taxpayers would find it extremely burdensome to meet these requirements. One consulting firm told us that it recently tried to shift all of its clients to project accounting. The effort was successful; however, it was extremely difficult for the businesses. Other practitioners said that many taxpayers simply would not take on such an effort just to claim the credit. 
Allowing taxpayers to allocate their expenses between qualified and nonqualified activities after the fact and, in part, on the basis of oral testimony of the taxpayers’ experts would be less burdensome for businesses than requiring contemporaneous time accounting by type of activity and by specific project. However, the experts would have an incentive to overstate the proportion of labor costs identified as QREs and IRS would have no way to verify these oral estimates. Treasury and IRS face a difficult trade-off between, on the one hand, increasing taxpayer compliance burdens and deterring some taxpayers from using the credit and, on the other hand, accepting overstated credit claims. All of the difficulties that taxpayers face in substantiating their QREs are magnified when it comes to substantiating QREs for the historical base period (1984 through 1988) of the regular credit. Taxpayers are required to use the same definitions of qualified research and gross receipts for both their base period and their current-year spending and receipts. However, many firms do not have good (if any) expenditure records dating back to the early 1980s base period and are unable to precisely adjust their base period records for the changes in definitions promulgated in subsequent regulations and rulings. Taxpayers also have great difficulty adjusting base period amounts to reflect the disposition or acquisition of research-performing entities within their tax consolidated groups. Some practitioners would like to see some flexibility on IRS’s part in terms of base period documentation. They noted that in cases where a taxpayer’s records are missing or otherwise lacking, courts have permitted taxpayers to prove the existence and amount of expenditure through reasonable estimation techniques. 
The IRS officials we interviewed said that estimates are allowable only if the taxpayer clearly establishes that it has engaged in qualified research and that its estimates have a sufficiently credible evidentiary basis to ensure accuracy. One official noted that IRS is not likely to question a taxpayer’s base amount if the latter uses the maximum fixed base percentage; however, he did not think that IRS would have the authority to say that taxpayers could take that approach without showing any records at all for the base period. Neither IRS nor Treasury officials we interviewed saw any administrative problems arising if the IRC were changed to relieve taxpayers of the requirement to maintain base period records if they used the maximum fixed base percentage. Treasury regulations provide that elections to use the ASC or the AIRC must be made on an original timely filed return for the taxable year and may not be made on a late filed return or an amended return. Some commentators on the regulations have questioned the need for such limitations on taxpayers’ ability to make the elections, which they note the IRC does not specify. These commentators see no reason why taxpayers who do not claim a credit until they file an amended return are permitted to claim the regular credit but not the ASC. They also believe that taxpayers should be allowed to change their election if, as a result of an audit, IRS adjusts the amount of QREs or base QREs in a manner that would make an alternative election more advantageous to the taxpayer. Treasury officials whom we interviewed said the legal “doctrine of election” indicates that taxpayers must remain committed to their choice once they have made their credit election. If taxpayers are unhappy with the form of credit, they can choose another form for the following tax year. 
Allowing taxpayers to elect different forms of the credit on amended returns in response to an audit in order to maximize their credit would create administrative burdens for IRS. IRS officials agreed that permitting changes in credit elections could require examiners to audit some taxpayers’ credits twice; however, they saw no problem with allowing taxpayers to claim either alternative credit on an amended return if the taxpayer had not previously filed a regular credit claim for the same tax year. Taxpayers that fail to claim the research credit on timely filed tax returns are materially disadvantaged by the election limitations that apply to any subsequent claims they file on amended returns. There appears to be no reason to prohibit taxpayers from electing either the ASC or AIRC method of credit computation on an amended return for a given tax year, as long as they have not filed a credit claim using a different method on an earlier return for that same tax year. Under current Treasury regulations, a controlled group of corporations must, first, compute a “group credit” by applying all of the credit computational rules on an aggregate basis. The group must then allocate the group credit amount among members of the controlled group in proportion to each member’s “stand-alone entity credit.” The stand-alone entity credit means the research credit (if any) that would be allowed to each group member if the group credit rules did not apply. Each member must compute its stand-alone credit according to whichever method provides it the largest credit for that year without regard to the method used to compute the group credit. The consultants with whom we discussed this issue agreed that the rules were very burdensome for those groups that are affected because they force all of their members to maintain base period records for the regular credit, even if they would like to use just the ASC. 
Some very large corporate groups must do these computations for all of their subsidiaries, which could number in the hundreds, even though the computations have no effect on the total credit that a group earns. Treasury maintains that a single, prescribed method is necessary to ensure the group’s members collectively do not claim more than 100 percent of the group credit. Treasury also maintains that the stand-alone credit approach is more consistent with Congress’s intent to have an incremental credit than is the gross QRE allocation method that others have recommended. In specifying that controlled groups be treated as single taxpayers for purposes of the credit Congress clearly wanted to ensure that a group, as a whole, exceeded its base spending amount before it could earn the credit. It is not clear that Congress was concerned with whether each member has an incentive to exceed its own base. The reason for having a base amount is to contain the revenue cost of the credit by focusing the incentive on marginal spending. In the case of controlled groups the cost is controlled at the group level; whether individual members exceed their own bases has no bearing on the cost of the credit. If the choice between the two allocation methods does not affect the revenue cost, then the following questions remain: 1. Does one of the methods provide a greater incentive to increase research spending? 2. Is one significantly less burdensome to taxpayers and IRS? For groups in which individual members determine their own research budgets, neither the stand-alone credit allocation method nor the gross QRE allocation method is unequivocally superior in terms of the marginal incentives that they provide to individual members. Each of the two methods performs better than the other in certain situations that are likely to be common among actual taxpayers. Data are not available that would allow us to say whether one of the methods would result in higher overall research spending than the other. 
For those groups in which the aggregate research spending of all members is determined by group-level management, the only way that the allocation rules can affect the credit’s incentive is if they allow the shifting of credits from members without current tax liabilities to those with tax liabilities. If the group credit is computed according to the method that yields the largest credit, then an additional dollar of spending by any group member will increase the group credit by the same amount, regardless of how the group credit total is allocated among members. The gross QRE allocation method is much less burdensome for controlled groups and for IRS than the stand-alone method because it does not require anyone to maintain base-period records for the regular credit, unless they choose to use that credit themselves. If the regular credit were eliminated, the burden associated with the stand-alone method would be reduced considerably; however, it would still require more work on the part of taxpayers and IRS than would the gross QRE method. Two significant concerns arise from the lack of any update of the regular credit’s base since it was introduced in 1989. First, the misallocation of resources that can result from the uneven distribution of both marginal incentives and windfall benefits across taxpayers could lead to missed opportunities for the country to benefit from research projects with higher social rates of return. Second, the requirement to maintain detailed records from the 1980s, updated for subsequent changes in law and regulations, represents a considerable compliance burden for regular credit users (including some that are required to use that option). Regular updates of the base would substantially reduce these problems; however, no clear purpose would be served by retaining both the ASC and a regular credit, the base of which would be updated almost as frequently as that of the ASC. 
Unfortunately, neither of the problems can be avoided without a reduction in the credit’s bang-per-buck. The addition of a minimum base to the ASC would likely improve the bang-per-buck of that credit (the extent would depend on certain estimating assumptions) and also reduce inequities in the distribution of windfall credits. The research credit presents many challenges to both taxpayers and IRS. In a number of areas, current guidance for identifying QREs does not enable claimants or IRS to make bright-line determinations. In some of these areas further clarification is possible; in others ambiguity may be difficult to reduce. In some cases, drawing lines that make the definition of QREs more liberal would likely result in the credit being less well-targeted to research with large spillover benefits to society. Instead, the credit would be shifted toward a broader subsidy for high-tech jobs or manufacturing in general. Documenting and verifying that particular expenses are qualified for the credit involve considerable resource costs on the part of taxpayers and IRS. Moreover, widespread disagreements between IRS and taxpayers over the adequacy of documentation leave many taxpayers uncertain about the amounts of credit they will ultimately receive. Recordkeeping burdens may discourage some taxpayers from using the credit and the uncertainty reduces the credit’s effective incentive. Relaxing recordkeeping requirements would alleviate these problems; however, there remains a risk that such a relaxation could significantly increase the amount of credit provided for spending of questionable merit. Despite the current wide gap between the views of taxpayers and IRS, there may be opportunities to reduce certain burdens without opening the door to abuse. At a minimum, an organized dialogue among Treasury, IRS, and taxpayers should be able to reduce some uncertainty over what types of documentation are acceptable. 
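To make the minimum-base option concrete, the following short sketch (ours, not part of the report’s analysis) computes the ASC with and without a floor on the base. The assumption that the floor would simply replace the standard base (half of the trailing 3-year average of QREs) whenever the floor is larger is ours:

```python
def asc_credit(qre, prior3_avg, min_base_frac=None):
    """Alternative simplified credit: 14 percent of current-year QREs above a
    base equal to half the average QREs of the 3 preceding years. If
    min_base_frac is given (e.g., 0.5 for the option discussed here), the base
    is floored at that fraction of current-year QREs; how such a floor would
    interact with the standard base is our assumption."""
    base = 0.5 * prior3_avg
    if min_base_frac is not None:
        base = max(base, min_base_frac * qre)
    return 0.14 * max(0.0, qre - base)

# A firm whose spending grew sharply relative to its history: the floor trims
# the windfall credit earned on spending above a low historical base.
plain = asc_credit(100.0, 20.0)         # base 10, credit 0.14 * 90 = 12.6
floored = asc_credit(100.0, 20.0, 0.5)  # base floored at 50, credit 7.0
```

For a firm with flat spending (current QREs equal to the trailing average), the floor never binds, so steady-state incentives would be unchanged while windfalls are reduced.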
In order to reduce economic inefficiencies and excessive revenue costs resulting from inaccuracies in the base of the research tax credit, Congress should consider the following two actions: Eliminating the regular credit option for computing the research credit. Adding a minimum base to the ASC that equals 50 percent of the taxpayer’s current-year qualified research expenses. If Congress nevertheless wishes to continue offering the regular research credit to taxpayers, it may wish to consider the following three actions to reduce inaccuracies in the credit’s base and to reduce taxpayers’ uncertainty and compliance costs and IRS’s administrative costs: Updating the historical base period that regular credit claimants use to compute their fixed base percentages. Eliminating base period recordkeeping requirements for taxpayers that elect to use a fixed base percentage of 16 percent in their computation of the credit. Clarifying for Treasury its intent regarding the definition of gross receipts for purposes of computing the research credit for controlled groups of corporations. In particular it may want to consider clarifying that the regulations generally excluding transfers between members of controlled groups apply to both gross receipts and QREs and specifically clarifying how it intended sales by domestic members to foreign members to be treated. Such clarification would help to resolve open controversies relating to past claims, even if the regular credit were discontinued for future years. In order to allow more taxpayers to benefit from the reduced recordkeeping requirements offered by the ASC option, the Secretary of the Treasury should take the following two actions: Modify credit regulations to permit taxpayers to elect any of the computational methods prescribed in the IRC in the first credit claim that they make for a given tax year, regardless of whether that claim is made on an original or amended tax return. 
Modify credit regulations to allow controlled groups to allocate their group credits in proportion to each member’s share of total group QREs, provided that all group members agree to this allocation method. In order to significantly reduce the uncertainty that some taxpayers have about their ability to earn credits for their research activities, the Secretary of the Treasury should take the following six actions: Issue regulations clarifying the definition of internal-use software. Issue regulations clarifying the definition of gross receipts for purposes of computing the research credit for controlled groups of corporations. Issue regulations regarding the treatment of inventory property under section 174 (specifically relating to the exclusion of depreciable property and indirect costs of self-produced supplies). Provide additional guidance to more clearly identify what types of activities are considered to be qualified support activities. Provide additional guidance to more clearly identify when commercial production of a qualified product is deemed to begin. Organize a working group that includes IRS and taxpayer representatives to develop standards for the substantiation of QREs that can be built upon taxpayers’ normal accounting approaches, but also exclude practices IRS finds of greatest threat to compliance, such as high-level surveys and claims filed long after the end of the tax year in which the research was performed. We provided a draft of this report to the Secretary of Treasury and the Commissioner of IRS in September 2009. In written comments the Acting Assistant Secretary (Tax Policy) agreed that the credit’s structure could be simplified or updated in certain respects to improve its effectiveness. 
He also agreed that the issuance of guidance relating to the definition of gross receipts, the treatment of inventory property under section 174, and the definition of internal-use software will enhance the administration of the credit and that Treasury plans to provide additional guidance in the next few months. The Acting Assistant Secretary said that the Administration’s priority is to make the credit permanent. His letter is reprinted in appendix VIII. Treasury and IRS officials also provided technical comments that we have addressed as appropriate. As we agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution of it until 30 days from the date of this letter. This report is available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff have any questions on this report, please call me at (202) 512-9110 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. If a taxpayer’s marginal spending in the current tax year leaves its total qualified spending above its base spending (but not equal to two or more times the base amount determined by its fixed base percentage), the marginal benefit the taxpayer receives from the regular credit equals: 0.2 × 0.65 × marginal spending. The factor of 0.65 reflects the fact that the taxpayer must either elect to reduce its credit by 35 percent or reduce the size of its section 174 deduction for research spending by the amount of the credit. In either case, for taxpayers subject to the typical 35 percent corporate income tax rate, the benefit of the credit is reduced by 35 percent. 
In addition, if the taxpayer cannot use all of its credit in the current tax year or carry it back to use against last year’s taxes, then the net present value of the benefit is reduced according to the following formula: Discounted benefit = (0.2 × 0.65 × marginal spending) × (1 + r)^-y0, where r is the taxpayer’s discount rate and y0 is the number of years before the taxpayer is able to use the credit. If a taxpayer’s marginal spending in the current tax year leaves its total qualified spending equal to two or more times the base amount determined by its fixed base percentage, the discounted marginal benefit the taxpayer receives from the regular credit equals: (0.1 × 0.65 × marginal spending) × (1 + r)^-y0, because each additional dollar of spending raises the taxpayer’s base by 50 cents. Consequently, the taxpayer’s benefit is effectively cut in half. If the taxpayer’s total current-year spending is less than its base spending (even after the marginal spending), then Current benefit = 0. Under the alternative simplified credit (ASC) a taxpayer may receive a benefit in the current tax year by spending additional (also known as marginal) amounts on qualified research in that year. However, this additional spending also reduces the potential tax benefits that the taxpayer can earn in the 3 succeeding years. The marginal effective rate (MER) measures the net present value of the current tax benefit and the reductions in future tax benefits resulting from the firm’s additional spending on research, all as a percentage of the additional spending. If the taxpayer’s total current-year spending is greater than its base spending, then Current benefit = 0.14 × 0.65 × marginal spending × (1 + r)^-y0. If the taxpayer’s total current-year spending is less than its base spending (even after the marginal spending), then Current benefit = 0. 
Given that the base spending amount for the next tax year equals half of the taxpayer’s average research spending in the current year and the 2 immediately preceding years, the marginal spending in the current year can reduce the value of the credit benefit the taxpayer can earn next year as follows: Benefit reduction next year = -(1/3) × 0.5 × 0.65 × 0.14 × current-year marginal spending × (1 + r)^-y1. The value of y1 equals 1 if the credit that the taxpayer loses in the next year could have been used that year. If that lost credit could not have been used until a later year anyway, then y1 equals the number of years between the current tax year and the year in which the lost credit could have been used. If the taxpayer’s total qualified spending next year is less than its base spending (even after the marginal spending), then Benefit reduction next year = 0. Benefit reductions in the second and third years into the future are computed in a similar manner. Combining all of the effects described above yields the following formula for a taxpayer that exceeds its base spending every year: MER = {0.091 × marginal spending × [(1 + r)^-y0 - (1/6) × (1 + r)^-y1 - (1/6) × (1 + r)^-y2 - (1/6) × (1 + r)^-y3]} / marginal spending. If a taxpayer’s total qualified spending is less than its base spending in any of the four years covered by this formula, then the “(1 + r)” term associated with that year would be set equal to zero. To compute the discounted revenue cost we first compute the net credit (after the offset against the section 174 deduction or the election of a reduced credit) that each taxpayer would earn under existing or hypothetical credit rules, based on their current qualified research expenses (QREs), base QREs, and if relevant, gross receipts. We then use data from each taxpayer’s Form 3800 to estimate the amount, if any, of research credit that the taxpayer could use immediately and the amount, if any, that it had to carry forward to future years. 
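The benefit and MER formulas above can be collected into a short computational sketch (ours, for illustration only); the parameter names follow the report’s notation:

```python
def regular_credit_benefit(marginal, r, y0, spend_ratio):
    """Discounted marginal benefit of the regular credit. spend_ratio is total
    qualified spending divided by the base amount; at 2 or more, the
    50-percent base limitation halves the effective rate, and below 1 no
    marginal benefit is earned."""
    if spend_ratio < 1:
        return 0.0
    rate = 0.1 if spend_ratio >= 2 else 0.2
    return rate * 0.65 * marginal * (1 + r) ** -y0

def asc_mer(r, y0, y1, y2, y3, above_base=(True, True, True, True)):
    """Marginal effective rate of the ASC: a current benefit at the net rate
    0.091 (0.14 x 0.65), less one-sixth of that rate for each of the next 3
    years in which the taxpayer exceeds its base, all discounted. y0..y3 are
    the years until the credit earned (or lost) in each year can be used;
    above_base flags zero out the terms for below-base years."""
    terms = (1 + r) ** -y0 if above_base[0] else 0.0
    for flag, y in zip(above_base[1:], (y1, y2, y3)):
        if flag:
            terms -= (1 / 6) * (1 + r) ** -y
    return 0.14 * 0.65 * terms
```

With immediate use of all credits and a zero discount rate, the ASC’s MER is 0.091 × (1 - 3/6) = 0.0455; that is, future base increases claw back half of the current benefit. A positive discount rate raises the MER because the future benefit reductions are discounted.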
In cases where the credit had to be carried forward, we used ranges of assumptions for both discount rates and number of years carried forward (see sensitivity discussion below) to discount the value of credit amounts used in future years. We based our estimates of credit use by the full population of corporate taxpayers on the Statistics of Income (SOI) Division’s sample of corporate tax returns for 2003, 2004, and 2005. For 2003 and 2004 we were able to fill in some data that were missing for a few large credit claimants by using data we obtained from Internal Revenue Service (IRS) examiners for our panel database. For all 3 years we adjusted the data for members of controlled groups to avoid the double counting of QREs and gross receipts (see discussion below for further detail). We began the construction of our panels by selecting all corporations that met either of the following criteria: The corporation’s total QREs had to account for at least 0.2 percent of aggregate QREs for all firms in SOI’s annual samples for either 2003 or 2004; or The corporation’s total grossed-up credit (meaning prior to any reduction under section 280C) had to account for at least 0.2 percent of aggregate grossed-up credits for all firms in SOI’s annual samples for either 2003 or 2004. We attempted to obtain a complete set of tax returns from 2000 through 2004 for each corporate taxpayer that met our panel criteria for either 2003 or 2004. In addition, we tried to keep the scope of each corporate taxpayer over the 5 years to be as consistent as possible with that taxpayer’s scope as of 2003 and 2004. (This consistency is important because we wanted the 5-year history of QREs for each taxpayer to closely represent the spending histories that they would actually have used for computing their moving-average base expenditures if the ASC had been in place for 2003 and 2004.) 
We constructed time series records for each taxpayer by linking the data from the taxpayer’s returns from 2000 through 2004 by the Employer Identification Number (EIN) that SOI included in each year’s tax return record. In some cases a taxpayer’s time series was reported under more than one EIN over the period. This discontinuity usually occurred in cases of a corporate reorganization, such as a merger or spin-off. In cases where we did not find a complete 5-year set of tax returns for one of the EINs selected into our panel, we searched to see if we could find the missing returns under a different EIN. We focused our search on cases where taxpayers had reported substantial amounts of research credits or QREs for tax years early in our period and then they stopped appearing in SOI’s corporate sample (because they stopped filing a return under their initial EIN). For example, we examined the cases of taxpayers that filed returns in 2000 and 2001 and then stopped filing returns to see if they were related to cases in our panel for which we were missing tax returns for those 2 years. If the companies that stopped filing returns were not related to any companies for which we were missing returns, we then checked to see if they were related to any other members of our panel (because they might have been merged into an ongoing corporation that kept the same EIN before and after the merger). Conversely, if the panel member for which we were missing early-year tax returns did not match up with any cases that had stopped filing after those years, we checked to see if that panel member had been spun off of any other panel member (meaning that it was once included in the consolidated tax return of the other panel member and then was either sold off or became deconsolidated and filed its own return). We did a similar examination for companies that showed dramatic changes in the level of their QREs from one year to the next. 
We extended our search for potential merger and spin-off candidates to any companies in the annual SOI samples that accounted for at least 0.1 percent of either QREs or grossed-up credit in any year from 2000 through 2004. In this manner we identified a number of pairs of taxpayers that combined with or split off from one another during our panel period. We could usually confirm these corporate changes from publicly available information on the Internet, but we also had the IRS examiners review our linkages. In order to ensure that we did not miss any significant mergers or splits among our panel members, we asked the Large and Mid-Sized Business (LMSB) Division examiners that reviewed each case to identify any that we may have missed. We made the following adjustments to ensure the consistency of spending histories in cases where we had identified significant corporate reorganizations within our panel members: In cases in which one of our panel members in 2003 or 2004 encompassed an entity that had filed its own tax return in an earlier year during the panel period, we added the QREs that the former return filer had reported for that year to the QREs that our panel member had reported in the same year (because those QREs of the formerly separate entity would be included in the panel member’s moving average base amount under the ASC). In cases in which one of our panel members in 2003 or 2004 had sold a subsidiary or spun off some other entity that had been included in its consolidated tax return in an earlier year of our panel period, we subtracted the estimated QREs of that spun-off entity from the panel member’s QREs for that earlier year. (We assumed that the spun-off entity’s share of total QREs in the earlier year was the same proportion as the following ratio: the spun-off entity’s QREs in the first year that it filed its own return, divided by the sum of the spun-off company’s QREs plus the QREs of the corporation from which it had been spun off.) 
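The spin-off adjustment described above amounts to a simple proration, sketched below (our own illustration; the variable names are ours):

```python
def spun_off_share(spun_qre_first_own_year, parent_qre_same_year):
    """Estimated share of the earlier combined QREs attributable to the
    spun-off entity: its QREs in the first year it filed its own return,
    divided by the sum of its QREs and those of its former parent."""
    return spun_qre_first_own_year / (spun_qre_first_own_year + parent_qre_same_year)

def adjust_panel_member_qre(combined_qre_earlier_year,
                            spun_qre_first_own_year, parent_qre_same_year):
    """Subtract the spun-off entity's estimated QREs from the panel member's
    QREs for the earlier (pre-spin-off) year."""
    share = spun_off_share(spun_qre_first_own_year, parent_qre_same_year)
    return combined_qre_earlier_year * (1.0 - share)
```

For example, if the spun-off entity reported 20 and its former parent 80 in the first post-spin-off year, the entity is assumed to account for 20 percent of the earlier combined QREs, so a combined 100 of earlier QREs would be reduced to 80 for the panel member.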
By making these adjustments, we were able to create reasonably consistent spending histories for those cases where we had identified (on our own or with the assistance of IRS examiners) significant corporate reorganizations in our panel population. In a number of cases we concluded that we did not have sufficient information to construct reliably consistent time series and we, therefore, dropped those cases from our panel. Although we believe that we have accounted for all major mergers and splits within our panel members, we cannot be sure that we have accounted for all smaller acquisitions or dispositions that may have affected the consistency of the individual spending histories within the panel. For this reason, we ran a sensitivity analysis in which we examined the effects on our results of altering the relationship between current and base QREs for each taxpayer (see below). Taxpayers that are subject to the group credit rules are required to file their own Form 6765 on which they report their group’s aggregate values for QREs, base QREs, and gross receipts; however, the credit amount reported on each member’s form is that member’s share of the total group’s credit. (See appendix VII for an explanation of how these shares are computed.) Whether or not a member can actually use a group credit depends on its own tax position for the year, not on an aggregated group tax position. We used several indicators to identify potential group credit claimants, based on the reporting requirements described above. First, for claimants of the regular credit we computed the ratio of the amount of credit they claimed, divided by the difference between their current QREs and their base QREs. If this ratio was a value other than 0.13 or 0.2, we flagged the case as a potential group member. 
Second, for claimants of the alternative incremental research credit (AIRC), we computed the ratio of the credit they actually claimed over the amount of credit that they could have claimed if all of the QREs and gross receipts reported on their Form 6765 were their own. If this ratio was other than 1 or 0.65, we flagged the case as a potential group claimant. Third, we also searched the SOI databases for groups of cases that reported the same exact amounts of QREs in a given year. For the purpose of calculating the ASC for group members we gave each member of a group the group’s aggregate spending history and gross receipts history; however, each member had its own amount of research credit claimed and its own values for the variables taken from the general business credit form. In order to avoid counting the QREs of the groups two or more times, or giving them too much weight when computing our weighted average effective rates of credit, we created a variable named CREDSHR, which we then used to assign each group member only a fraction of the group’s total QREs or weighting in the effective rate calculation. The value of CREDSHR for each group member is equal to the ratio of the amount of research credit that the member claimed over the aggregated amount of credit that the group would be able to claim, based on the group’s aggregated QREs and base QREs or gross receipts. In other words, we gave each member a share of the group’s QREs that was proportionate to its share of the group’s total credit. Although this allocation method is not precisely derived from the group credit allocation regulations, it should yield a close approximation of the true distribution of QREs across group members. We do not have the detailed attachments to Form 6765 that show exactly what each group member’s QREs and gross receipts were. In most cases the sum of CREDSHR for all members of a group in our panel population was approximately equal to 100 percent. 
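The CREDSHR weighting can be expressed in a few lines of code (our sketch; the member data shown are hypothetical):

```python
def credshr_weights(member_credits, group_credit):
    """CREDSHR for each member: the credit that member claimed divided by the
    credit the group as a whole could claim on its aggregated QREs and base
    amounts (or gross receipts)."""
    return {member: credit / group_credit
            for member, credit in member_credits.items()}

def allocate_group_qres(group_qres, member_credits, group_credit):
    """Assign each member a share of the group's QREs in proportion to its
    share of the group's total credit."""
    weights = credshr_weights(member_credits, group_credit)
    return {member: group_qres * w for member, w in weights.items()}

# Hypothetical group: a total credit of 10 claimed 6/3/1 by three members,
# so 1,000 of group QREs are allocated 600/300/100.
shares = allocate_group_qres(1000.0, {"A": 6.0, "B": 3.0, "C": 1.0}, 10.0)
```

If some group members are missing from the SOI sample, the weights sum to less than 100 percent and the allocated QREs fall correspondingly short of the group total, matching the treatment of missing members described above.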
When the sum did not reach 100 percent we assumed that there are other members who were not represented in the SOI sample for a given year. The absence of these missing members does not affect the validity of the computations for the group members we had; it simply means that the missing members were treated as any other company that did not meet the criteria for inclusion in our panel. Because some taxpayers in the panel belonged to controlled groups that together determined the amount of qualified spending in 2003 or 2004, we adjusted for the composition of these groups when we assembled the panel. In particular, spending and other variables were adjusted to hold constant the group’s composition in 2003 or 2004, the 2 years for which credit was computed. This was accomplished in several ways. First, the SOI data allowed us to identify certain controlled groups from duplications in the amount of reported spending. Second, we researched mergers, acquisitions and dispositions for these firms from 2000 through 2004, or the years for which we constructed the panel. Third, we requested confirmation of our knowledge about these controlled groups from LMSB, in addition to any other information about the groups’ composition that LMSB might have had. Clearly, constructing the panel involved balancing trade-offs between the number of users and the availability of data. We tested the sensitivity of our results to variations in assumptions or observations concerning the following factors: Future credit status—The MER for the ASC depends, in part, on whether the taxpayer anticipates being able to earn the credit in each of the next 3 years and, if so, whether that taxpayer would be subject to a minimum base constraint. In order to predict the status for a given taxpayer in a given future year, we needed to predict, within a certain range, the ratio of spending in that year to the average of spending for the 3 years preceding that year. 
Our baseline prediction was that the probability of a taxpayer moving from one particular ratio range into another specific ratio range was equal to the probability of such a move that we observed in our historical data. We used Markov chains of probabilities to predict changes in status two and three years into the future. In our sensitivity analysis, we examined 12 alternative sets of probabilities. For example, in one alternative all taxpayers were less likely to move into a higher range of ratios than they would have been with the observed probabilities. Switching probabilities—In choice scenarios, we were required to estimate the probability of switching from one credit to another in future years, which has the potential to influence the effect of research spending in 2003 or 2004 on the amount of credit earned in subsequent years for which data are not available. In our sensitivity analysis, we allowed the probability of switching from the ASC to the regular credit from one year to the next to be higher or lower than our baseline estimate (which was based on simulated behavior from 2003 to 2004). We did the same for the probability of switching from the regular credit to the ASC from one year to the next, and we incorporated all four possible combinations of deviations from the baseline. Discount rate—At higher rates of discount, credit that is carried forward to be claimed in subsequent years is worth less in present value terms in 2003 or 2004. Additionally, at higher discount rates, the effect of spending in 2003 or 2004 on the amount of credit earned in subsequent years is mitigated, since credit earned in those later years is worth less in present value terms. In our sensitivity analysis, we allowed the discount rate to vary between 4 percent and 8 percent. Carryforward length—The model required an assumption about the number of years that credit would be carried forward.
(The Research Tax Credit stipulates that credit that cannot be claimed in the year in which it is earned may be carried forward for up to 20 years.) Lacking data on carryforward patterns, we based our assumption about the length of the carryforward period on behavior that was "observed" as part of the simulation. For example, in some cases we could simulate the taxpayer's carryforward status in both 2003 and 2004. If this taxpayer were observed to carry forward credit in both years as part of this simulation, it would have a longer carryforward period than if it were observed to carry forward credit in one year or the other, or if it were observed not to carry credit forward at all. In our sensitivity analysis, we allowed the longest carryforward period to vary between 2 and 10 years in length. The relationship between current and base QREs—We tested how our estimates for the ASC would differ if the spending histories for our panel corporations were significantly different from what we observed. To do this, we estimated what the MERs and discounted revenue costs would be if the ratio of each taxpayer's current QREs to base QREs were 10 percent higher and 10 percent lower than the observed amounts. Another aspect of our sensitivity analysis involved using data from different stages in the taxpaying process. We used data from original returns, and from amended and audited returns, where applicable.
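The Markov-chain step used to predict future credit status can be illustrated with a small transition matrix. The three ratio ranges and all of the probabilities below are hypothetical placeholders, not the probabilities we estimated from the historical data.

```python
# Hypothetical transition probabilities between three ranges ("low",
# "middle", "high") of the ratio of current-year spending to the
# average of the preceding 3 years. Row i holds the probabilities of
# moving from range i this year to each range next year; rows sum to 1.
P = [
    [0.6, 0.3, 0.1],  # from "low"
    [0.2, 0.5, 0.3],  # from "middle"
    [0.1, 0.3, 0.6],  # from "high"
]

def mat_mul(a, b):
    """Multiply two square matrices represented as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Chaining the one-year transitions yields the two- and three-year-ahead
# probabilities used to predict future credit status.
P2 = mat_mul(P, P)   # probabilities of each status two years out
P3 = mat_mul(P2, P)  # probabilities of each status three years out

# P2[0][2], for example, is the probability that a taxpayer now in the
# "low" range will be in the "high" range two years from now.
```

A sensitivity alternative in which taxpayers are less likely to move into a higher range would simply shift probability mass toward the lower-range columns in each row of P.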
[Tables omitted: for the amount of credit, current QREs, base QREs (for regular credit claimants not subject to the 50 percent base limit), and average gross receipts (for those claiming the AIRC), the tables report net differences between initial claims, final amendments, and the latest IRS examination positions, in both dollar amounts and percentage changes.]

Figure 7 presents five examples that illustrate how inaccuracies in the credit's base cause disparities across taxpayers in both the marginal incentives and windfall benefits that they receive from the credit. In each example the taxpayer would have spent $10 million on qualified research in the current year, even without the credit.
Also in each example, the taxpayer is contemplating doing an additional $1 million in spending, but wants to estimate how much of a credit benefit it will receive for that marginal spending before deciding whether to undertake it. What differs across each example is the size of the taxpayer's base for the regular credit. In the first example the taxpayer's spending and gross receipts history result in a primary base that is relatively close to its ideal base, being only $1 million below the latter. The taxpayer receives a windfall credit of $130,000 for the $1 million worth of spending that it would have done anyway in excess of its base. The taxpayer would receive an additional $130,000 worth of credit if it increased its spending by $1 million, which represents a marginal effective rate (MER) of 13 percent, the maximum MER available under the regular credit. The taxpayer's total credit ($260,000) divided by its total spending ($11 million) equals its average effective rate of credit (about 2.4 percent). In the second example the taxpayer's primary base exceeds the ideal base by $600,000, which prevents the taxpayer from receiving any windfall credit; however, it also reduces the incentive that the taxpayer has to spend another $1 million on research by cutting the credit on that marginal spending from $130,000 to $52,000, for an MER of 5.2 percent. In the third example the taxpayer's primary base is well above all of the spending that the taxpayer was contemplating for the year, so the credit provides no incentive for the taxpayer to increase its spending beyond what it would have done anyway. The MER is zero. The fourth example shows what could happen when a taxpayer's primary base was much too low and if there were no minimum base for the credit. The credit would provide the taxpayer with the same marginal incentive as in the first example; however, the taxpayer's windfall credit would be nine times larger than in that first case.
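The arithmetic in these examples can be reproduced with a short computation. The 13 percent rate and the bases for examples 1 and 2 are as described above; the $1 million base in example 4 is inferred from the statement that its windfall is nine times larger, and the function names are our own illustrative choices.

```python
RATE = 0.13  # credit rate used throughout the figure 7 examples

def credit(qre, base):
    """Regular credit with no minimum base (examples 1 through 4)."""
    return RATE * max(0.0, qre - base)

def mer(qre, base, extra=1_000_000):
    """Marginal effective rate on an extra $1 million of spending."""
    return (credit(qre + extra, base) - credit(qre, base)) / extra

# Example 1: primary base of $9 million, $1 million below the ideal
# base of $10 million (the spending the taxpayer would do anyway).
windfall = credit(10_000_000, 9_000_000)               # $130,000
mer_1 = mer(10_000_000, 9_000_000)                     # 0.13
avg_rate = credit(11_000_000, 9_000_000) / 11_000_000  # about 2.4 percent

# Example 2: primary base of $10.6 million. No windfall, and only
# $400,000 of the marginal $1 million exceeds the base, so the MER
# falls to 5.2 percent.
mer_2 = mer(10_000_000, 10_600_000)

# Example 3: any base at or above the $11 million of contemplated
# spending yields an MER of zero (the figure gives no specific number).

# Example 4: a base of $1 million (far too low, with no minimum base)
# leaves the MER at 13 percent but multiplies the windfall ninefold.
windfall_4 = credit(10_000_000, 1_000_000)

def credit_min_base(qre, base):
    """Variant with a minimum base of 50 percent of current-year
    spending, as in the figure's final example."""
    return RATE * max(0.0, qre - max(base, 0.5 * qre))
```

With the minimum base binding, each marginal $1 million of spending raises the base by $500,000, so the marginal credit is 0.13 x $500,000 = $65,000, an MER of 6.5 percent.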
Finally, the last example shows how the minimum base can reduce the cost of the credit by significantly reducing windfalls in some cases. Unfortunately, this windfall cannot be reduced without also cutting the marginal incentive. Given that the minimum base is 50 percent of current spending, every $1 million of marginal spending increases the base by $500,000, so the taxpayer can earn only $65,000 of credit on that marginal spending, representing an MER of 6.5 percent. ASC users currently are not subject to a minimum base. If they were, then the final example in figure 7 shows how that minimum base could affect their current year credits. The minimum base could also affect the negative future-year effects arising from current-year marginal spending (which were illustrated in figure 3). If a taxpayer's primary base for the ASC would be less than the minimum base in future years, even after accounting for the increase due to current-year marginal spending, then current spending would not cause any reduction in future credits. If the primary base exceeded the minimum base in future years, then the negative future effects would occur, just as they did in the case without a minimum base. In 1986, Congress narrowed the definition of qualified research out of a concern that many taxpayers claiming the credit did not engage in high technology activities and some claimed the credit for virtually any expenditures relating to product development. Currently, research activities must satisfy four tests in order to qualify for the credit: 1. Expenditures connected with the research must be eligible for treatment as expenses under section 174. 2. The research must be undertaken for the purpose of discovering information that is technological in nature. 3. The taxpayer must intend that the information to be discovered will be useful in the development of a new or improved business component of the taxpayer. 4.
Substantially all of the research activities must constitute elements of a process of experimentation for a purpose relating to a new or improved function, performance, reliability, or quality. These four eligibility criteria are known as the section 174 test, discovering technological information test, business component test, and process of experimentation test. Treasury regulations elaborate on these requirements as follows: Research is undertaken for the purpose of discovering information if it is intended to eliminate uncertainty concerning the development or improvement of a business component. Uncertainty exists if the information available to the taxpayer does not establish the capability or method for developing or improving the business component, or the appropriate design of the business component. A determination that research is undertaken for the purpose of discovering information that is technological in nature does not require that the taxpayer be seeking to obtain information that exceeds, expands or refines the common knowledge of skilled professionals in the particular field of science or engineering in which the taxpayer is performing the research; nor does it require that the taxpayer succeed in developing a new or improved business component. (The requirement that the taxpayer seek information exceeding, expanding, or refining the common knowledge of skilled professionals, which TD 9104 explicitly rejected, is commonly referred to as "the discovery test" from TD 8930, which many commenters contended was an overly stringent interpretation of the discovering technological information test.) Generally, the issuance of a U.S. patent is conclusive evidence that the research meets the "discovering information" test. However, the issuance of a patent is not a precondition for credit availability. A process of experimentation is designed to evaluate one or more alternatives to achieve a result where the capability or method of achieving that result, or the appropriate design of that result, is uncertain as of the beginning of the taxpayer's research activities.
The process must fundamentally rely on the principles of the physical or biological sciences, engineering or computer science. A process of experimentation is undertaken for a qualified purpose if it relates to a new or improved function, performance, reliability or quality of the business component. Research relating to style, taste, cosmetic, or seasonal design factors does not qualify. The Internal Revenue Code (IRC) identifies the following types of activities that do not qualify as research for purposes of the credit: Any research conducted after the beginning of commercial production of the business component. Any research related to the adaptation of an existing business component to a particular customer’s requirement or related to the reproduction of an existing business component. Efficiency surveys; activity relating to management function; market research, testing or development; routine data collection; routine or ordinary testing or inspection for quality control; or any research in the social sciences, arts or humanities. Except to the extent provided in regulations, any research with respect to computer software which is developed by (or for the benefit of) the taxpayer primarily for internal use by the taxpayer, other than for use in: an activity which constitutes qualified research, or a production process that meets the requirements of the credit. Research conducted outside the United States, the Commonwealth of Puerto Rico, or any possession of the United States. Any research to the extent funded by any grant, contract, or otherwise by another person (or government entity). There are numerous areas of disagreement between IRS and taxpayers concerning what types of spending qualify for the research credit. These disputes raise the cost of the credit to both taxpayers and IRS and diminish the credit’s incentive effect by making the ultimate benefit to taxpayers less certain. 
General Qualification Tests The tax practitioners we interviewed almost universally told us that Internal Revenue Service (IRS) auditors are still applying the discovery test from Department of the Treasury regulations that were explicitly rejected in subsequent regulations. Some of the tax consultants pointed to language in the regulations saying that the section 174 and process of experimentation tests are met as long as the experimentation addresses uncertainty relating to either the capability or method for developing or improving the product, or the appropriate design of the product. One consultant said IRS examiners have disqualified design and development activities that address these uncertainties because they considered the activities to be “routine development” or “routine engineering.” Officials from IRS’s Large and Mid-Size Business (LMSB) Division whom we interviewed denied that examiners are inappropriately applying the old discovery test and referred to language from their Research Credit Audit Technique Guide that instructs examiners on the relevant language from current regulations. One of the practitioners that complained about the standards used by examiners acknowledged that, if they call in IRS’s Research Credit Technical Advisors, they can get the correct rules applied. Both practitioners and IRS officials acknowledged that some controversies arise because language in the IRC and regulations does not always provide a bright line for identifying qualified activities. For example, one qualification requirement is that the research must be intended to eliminate uncertainty concerning the development or improvement of a business component. 
The regulations say that uncertainty exists “if the information available to the taxpayer does not establish the capability or method for developing or improving the business component, or the appropriate design of the business component.” An IRS official said that examiners could use clarification of the meaning of “information available to the taxpayer,” while a practitioner noted that the regulations do not say what degree of improvement in a product is required for the underlying research to be considered qualified. The practitioner said that research for improvements is more difficult to get approved as QREs than research for new products. Product testing around the end of the development process is a particularly contentious issue under the section 174 and process of experimentation tests. Treasury regulations provide that “the term research or experimental expenditures does not include expenditures for the ordinary testing or inspection of materials or products for quality control (quality control testing).” However, the regulations clarify that “quality control testing does not include testing to determine if the design of the product is appropriate.” Some tax consultants told us that IRS fairly consistently disqualifies research designed to address uncertainty relating to the appropriate design of a product. One of them said that IRS rejected testing activities simply on the basis of whether the testing techniques, themselves, were routine. IRS officials said that they typically reject testing that is done after the taxpayer has proven the acceptability of its production process internally. They have disagreements with taxpayers over when commercial production begins and suggested that this is one area where some further clarification in regulations might help. Officials from IRS Appeals told us that they could benefit from additional guidance (including industry-specific guidance) in the regulations relating to the process of experimentation test. 
Product testing is a particularly important issue for software development, which is another area of significant contention between IRS and taxpayers. Many tax consultants and industry groups that we spoke with believe that IRS has a general bias against software development activities qualifying for the credit. For their part, IRS officials believe that the true cause of controversy is taxpayers’ belief in the so-called “per se rule,” which considers all software development to inherently entail a qualifying process of experimentation. The officials note that IRS and the courts have uniformly rejected this notion. IRS’s Audit Guidelines on the Application of the Process of Experimentation for All Software state that, in order for a software development activity to meet the experimentation test, as specified in Treasury regulations, it must do all of the following: address one of the qualified uncertainties; evaluate alternatives; and rely on the principles of computer science. The guidelines identify numerous activities, including the detection of flaws and bugs in software, as “high risk categories of software development,” which usually fail to constitute qualified research. A special subset of controversies relate to software that is considered to have been developed for a taxpayer’s own use. When Congress narrowed the definition of the term “qualified research” in the Tax Reform Act of 1986, it specifically excluded several activities, one of them being the development of computer software for the taxpayer’s own internal use (other than for use in an activity which constitutes qualified research, or a production process that meets the requirements of the credit). 
The act provided Treasury the authority to specify exceptions to this exclusion; however, the legislative history to the Act states that Congress intended that regulations would make the costs of new or improved internal-use software (IUS) eligible for the credit only if the research satisfies, in addition to the general requirements for credit eligibility, the following three-part test that 1. the software was innovative; 2. the software development involved significant economic risk; and 3. the software was not commercially available for use by the taxpayer. The statutory exclusion for internal-use software and the regulatory exceptions to this exclusion have been the subject of a series of proposed and final regulations (and also considerable controversy). On January 3, 2001, Treasury published final regulations ruling that “software is developed primarily for the taxpayer’s internal use if the software is to be used internally, for example, in general administrative functions of the taxpayer (such as payroll, bookkeeping, or personnel management) or in providing noncomputer services (such as accounting, consulting, or banking services).” If the software was developed primarily for those purposes, it was deemed to be IUS, even if it is subsequently sold, leased or licensed to third parties. This regulation did not provide a specific definition but instead identified two general categories of software as examples of IUS. In response to further taxpayer concerns Treasury reconsidered the positions it took in TD 8930 and issued proposed regulations on December 26, 2001, which stated, among other things, that, unless computer software is developed to be commercially sold, leased, licensed or otherwise marketed, for separately stated consideration to unrelated third parties, it is presumed to be IUS. 
In publishing both TD 8930 and the proposed regulations Treasury declined to adopt the recommendation of commentators that the definition of IUS should not include software used to deliver a service to customers or software that includes an interface with customers or the public. Financial services and telecommunications companies are among those particularly concerned with this issue. They note that their software systems are integrally related to the provision of services to their customers, yet expenditures to develop those systems would not qualify for the credit (unless they met the additional set of standards) under the “separately stated consideration” standard because they do not charge customers specifically for the use of the software. Several commentators noted that the original treatment of IUS introduced by the 1986 act predated the occurrence of a dramatic shift in computer usage that transformed the US economy from one based on production of tangible goods to one based on services and information. They question whether there is still an economic rationale for making a distinction between IUS and software used for other purposes, given that innovations in software can produce spillover benefits regardless of whether the software is sold to third parties. Some commentators supported their recommendations for a narrower definition of IUS by referring to the legislative history included in the Conference Report accompanying the Tax Relief Extension Act of 1999, which included the following language: The conferees further note the rapid pace of technological advance, especially in service-related industries, and urge the Secretary to consider carefully the comments he has and may receive in promulgating regulations in connection with what constitutes “internal use” with respect to software expenditures. 
The conferees also wish to observe that software research that otherwise satisfies the requirements of section 41, which is undertaken to support the provision of service, should not be deemed to be "internal use" solely because the business component involves the provision of a service. Tax consultants complain that IRS continues to consider software development expenditures in the services industry to be IUS, despite the guidance Congress provided in the 1999 conference report. Some also say that the lack of clarity in current guidance regarding the characteristics of innovative software has permitted IRS examiners to apply an overly restrictive interpretation of this eligibility requirement. IRS officials told us that some exceptions were added to both TD 8930 and the proposed regulations in response to the conference report. They also note that the report did not suggest that all software providing a service should be excepted from IUS treatment; rather, it suggested that such software not be automatically classified as IUS. Treasury itself acknowledged the changes in computer software and its role in business activity since the mid-1980s in an Advance Notice of Proposed Rulemaking, which explained why the department was not ready to address the issue of IUS in the final regulations on the research credit that it published in 2004. Treasury said it was concerned about the difficulty of effecting congressional intent behind the exclusion for internal-use software with respect to software being developed today. As an example, it was concerned that the tendency toward the integration of software across many functions of a taxpayer's business activities may make it difficult for both taxpayers and the IRS to separate internal-use software from non-internal-use software under any particular definition of internal-use software.
Even with Congress’s broad grant of regulatory authority to Treasury on this issue, Treasury believed that this authority may not be broad enough to resolve those difficulties. Treasury has not yet been able to publish final regulations relating to IUS; the issue remains on the department’s latest priority guidance plan. In the meantime, for tax years beginning after December 31, 1985, Treasury has allowed taxpayers to rely upon all of the provisions relating to IUS in the proposed regulations or, alternatively, on all of the provisions relating to IUS in TD 8930. However, if taxpayers choose to rely on TD 8930, Treasury required that they also apply the “discovery test” contained in that document. Nonetheless, a recent court decision allowed a taxpayer to rely on TD 8930 for IUS guidance and TD 9104 regarding the discovering technological information test. The Department of Justice has filed a motion for reconsideration on the grounds that the court’s holding is based on a mistake in law. Qualified research expenses include the wages of employees who provide direct supervision or direct support of qualified research activities. Treasury regulations define direct supervision as “the immediate supervision (first-line management) of qualified research.” Direct supervision does not include supervision by a higher level manager. The same section of the regulations provides the following examples of activities that qualify as direct support: the typing of a report describing laboratory results derived from qualified research, the machining of a part of an experimental model, and the cleaning of equipment used in qualified research. The section also provides the following examples of activities that do not qualify: payroll, accounting and general janitorial services. Some practitioners told us that IRS is very stringent with respect to allowing the wages of supervisors higher in the chain of command to be included in QREs. 
Many of their clients have flat organizational structures and the best researchers are often given higher titles so that they can be paid more. They say that IRS often rejects wage claims simply on the basis of job titles. IRS officials told us that wages of higher-level managers could be eligible for the credit; however, the burden of proof is on the taxpayer to substantiate the amount of time that those managers actually spent directly supervising a qualified activity. They note that some taxpayers try to include unallowable costs relating to production labor, sales and marketing, information technology personnel, and legal personnel. Some commentators would like IRS's guidance to more clearly state that activities such as bid and proposal preparation (at the front end of the research process) and development testing and certification testing (at the final stages of the process) are qualified support activities that do not have to meet specific qualification tests themselves, as long as the activities that they support already qualify as eligible research. IRS officials told us that they would like better guidance on this issue and were concerned that some taxpayers want to include the wages of anyone with any connection at all to the research, such as marketing employees who attend meetings to talk about what customers want. According to existing Treasury regulations, activities are conducted after the beginning of commercial production of a business component if such activities are conducted after the component is developed to the point where it is ready for commercial sale or use, or meets the basic functional and economic requirements of the taxpayer for the component's sale or use. The regulations specifically identify the following activities as being deemed to occur after the beginning of commercial production of a business component: A. Preproduction planning for a finished business component; B. Tooling-up for production; C. Trial production runs; D.
Trouble shooting involving detecting faults in production equipment; E. Accumulating data relating to production processes; and F. Debugging flaws in a business component. The exclusions relating to postcommencement activities apply separately for the activities relating to the development of the product and the activities relating to the development of the process for commercially manufacturing that product. For example, even after a product meets the taxpayer's basic functional and economic requirements, activities relating to the development of the manufacturing process still may constitute qualified research, provided that the development of the process itself separately satisfies the standard eligibility requirements and the activities are conducted before the process meets the taxpayer's basic functional and economic requirements or is ready for commercial use. Some commentators requested clarification of these regulations, suggesting a need for greater flexibility in defining the commencement of commercial production. In particular, they objected to Treasury deeming certain activities, such as preproduction planning, tooling, trial production runs, and debugging flaws, to occur after commencement of production when they often actually occur before the manufacturing process is ready for commercial use. Treasury, as stated in the preamble to the final regulations, believes that "the multitude of factual situations to which these exclusions might apply make it impractical to provide additional clarification that is both meaningful and of broad application." It also stated that the specific exclusions do not apply to research activities that otherwise satisfy the requirements for qualified research. Some tax consultants claim that IRS disallows research relating to the development of manufacturing processes that should qualify (according to the consultants' interpretation of those regulations).
IRS officials acknowledged that they do have disputes with taxpayers regarding when commercial production of a particular product has begun and that their determinations must be based on the facts and circumstances of the particular cases. There is no "bright line" test for when a product is ready for commercial production or when a manufacturing process is no longer being improved. The Internal Revenue Code specifically excludes expenditures to acquire "property of a character subject to the allowance for depreciation" from eligibility for either the deduction of research expenditures under section 174 or for the research credit. Taxpayers have attempted to claim the deduction or the credit for expenditures that they have made for labor and supplies to construct tangible property, such as molds or prototypes, that they used in qualified research activities. IRS has taken the position that such claims are not allowed (even though the taxpayers do not, themselves, take depreciation allowances for these properties) because the constructed property is of the type that would be subject to depreciation if a taxpayer had purchased it as a final product. IRS also says that it is improper for taxpayers to include indirect costs in their claims for "self-constructed supplies," even when the latter are not depreciable property. Taxpayers are challenging IRS's position in at least one pending court case because, among other reasons, they believe the agency's position is inconsistent with both Treasury regulations under section 174, which allow the deductibility of expenditures for pilot models, and the legislative history of section 41, which, they say, implies that such expenditures could qualify for the credit. IRS says that some taxpayers have labeled custom-designed property intended to be held for sale in their ordinary course of business as prototypes, solely for the purpose of claiming the research credit.
Consequently, IRS considers the costs associated with the manufacture of such products to be “inventory costs” and not QREs. Both taxpayers and IRS examiners would like to see clearer guidance in this area, and Treasury has a project to provide further guidance under section 174 in its most recent priority guidance plan. IRS has also been concerned with the extent to which taxpayers have attempted to recharacterize ineligible foreign research services contracts as supply purchases.

For taxpayers claiming the regular research credit, the definition of gross receipts is important in calculating the base amount to which their current-year qualified research expenses (QRE) are compared. The definition also was critical for determining the amount of credit that taxpayers could earn with the alternative incremental research credit (AIRC). (Even though this credit option is no longer available, a decision regarding the definition of gross receipts will affect substantial amounts of AIRC claims that remain in contention between taxpayers and IRS for taxable years before 2009.) Gross receipts do not enter into the computation of the alternative simplified credit (ASC) or the basic research credit.

The House Budget Report accompanying the Omnibus Budget Reconciliation Act of 1989 that introduced the current form of the regular credit provided two rationales for indexing a taxpayer’s base spending amount to the growth in its gross receipts:
1. Businesses often determine their research budgets as a fixed percentage of their gross receipts; therefore, the revised computation of the base amount would better achieve the intended objective of approximating the amount of research the taxpayer would have done in any case.
2. Indexing the base to gross receipts would effectively index the credit for inflation.
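The base-amount mechanics behind these rationales can be sketched as follows. This is an illustrative sketch, not guidance: the 20 percent credit rate, the 50 percent base limitation, and the 16 percent cap on the fixed-base percentage are the statutory parameters of IRC section 41, while the function name and all dollar amounts are hypothetical.

```python
def regular_credit(qre, fixed_base_pct, prior_4yr_gross_receipts):
    """Illustrative sketch of the regular research credit computation.

    fixed_base_pct is the taxpayer's base-period ratio of QREs to gross
    receipts, capped by statute at 16 percent.
    """
    fixed_base_pct = min(fixed_base_pct, 0.16)
    avg_receipts = sum(prior_4yr_gross_receipts) / len(prior_4yr_gross_receipts)
    base_amount = fixed_base_pct * avg_receipts
    # The base amount may not be less than 50 percent of current-year QREs.
    base_amount = max(base_amount, 0.5 * qre)
    return 0.20 * max(qre - base_amount, 0.0)

# Hypothetical taxpayer: $10M of current QREs, a 3 percent fixed-base
# percentage, and $200M average gross receipts over the prior 4 years.
credit = regular_credit(10e6, 0.03, [180e6, 190e6, 210e6, 220e6])
# Base amount = 0.03 x $200M = $6M, so credit = 0.20 x ($10M - $6M) = $800,000.
```

Because the base amount scales with the taxpayer's gross receipts, any change in what counts as gross receipts flows directly into the amount of credit earned, which is why the definitional disputes described in this section matter.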
Neither the House, Senate, nor Conference reports accompanying the Small Business Job Protection Act of 1996 provided any rationale for the design of the AIRC. Neither the statute nor the legislative histories for either of these Acts defined the term gross receipts in detail. Section 41(c)(7) of the IRC simply provides that, for purposes of the credit, gross receipts for any taxable year are reduced by returns and allowances made during the tax year, and, in the case of a foreign corporation, that only gross receipts effectively connected with the conduct of a trade or business within the United States, Puerto Rico, or any U.S. possession are taken into account. Department of the Treasury regulations for the credit generally define gross receipts as the total amount, as determined under the taxpayer’s method of accounting, derived by a taxpayer from all its activities and all sources. However, “in recognition of the fact that certain extraordinary gross receipts might not be taken into account when a business determines its research budget,” the regulations provide, among other things, that certain extraordinary items (such as receipts from the sale or exchange of capital assets) are excluded from the computation of gross receipts.

The principal issue of contention between taxpayers and IRS is the extent to which sales and other types of payments among members of a controlled group of corporations should be included in that group’s gross receipts for purposes of computing the credit. Neither the IRC nor Treasury regulations are clear on this point, and IRS has issued differing legal analyses in specific cases over the years. Several of the tax practitioners that we interviewed emphasized the importance of this issue, particularly as a consequence of the extraordinary repatriation of dividends in response to the temporary incentives under section 965. One noted that it is the most significant FIN 48 issue for them.
Others noted that it is a $100 million issue for some taxpayers and will determine whether other taxpayers will earn any credit or not in given years. Uncertainty surrounding the definition of gross receipts makes it difficult for some regular credit users to know how much credit they would receive for spending more on research and, thereby, reduces the effectiveness of the credit. Several private sector commentators and tax professionals we interviewed have taken the position that all transfers within a controlled group of corporations, including those between foreign subsidiaries and U.S. parent corporations, should be excluded from gross receipts.

In 2002 IRS issued a Chief Counsel Advice memorandum that supported this interpretation on behalf of a particular taxpayer, noting that the decision was based on the particular facts and circumstances of the case and should not be cited as precedent for other cases. A subsequent 2006 IRS Chief Counsel memorandum came to the opposite conclusion, again based on the specific facts and circumstances of the case. The uncertainty for taxpayers results from the fact that neither memorandum identified which particular circumstances in each case were decisive, and the descriptions provided of each case were very similar. Moreover, the two IRS memorandums applied differing interpretations of congressional intent. The critical disagreement between IRS and the taxpayer representatives is whether the disregarding of intragroup transfers under the group credit rules applies to gross receipts as well as to qualified research expenses.
The current position taken by IRS is that the section of the credit regulations stating that transfers between members of a controlled group are generally disregarded applies only to QREs and not to gross receipts. IRS reasons that those rules were in place prior to 1989, when gross receipts first became a factor in the computation of the credit, and that neither Congress (with respect to the IRC) nor Treasury (with respect to its regulations) modified the rules to specifically indicate that they apply to gross receipts. Some tax professionals counter this reasoning by saying that the specific language in the IRC states that the rules apply for purposes of “determining the amount of the credit”; consequently, there was no need for Congress to explicitly link the rules to gross receipts because the latter obviously play a critical role in determining the amount of the credit. Treasury has yet to address the treatment of gross receipts under the group credit rules, even though the issue has been in Treasury’s priority guidance plans since 2004. A Treasury official told us that one issue the department would need to decide, even if it accepts that Congress intended for the rules to apply to gross receipts, is whether Congress intended such a broad exclusion or, instead, wanted to generally exclude intragroup transactions, except for sales by a domestic member to a foreign affiliate that are subsequently passed through as sales to foreign third parties.

Changing the scope of gross receipts would not affect the amount of regular credit earned by a regular credit user (and, therefore, the revenue cost) if the relative sizes of the various components of that taxpayer’s gross receipts remained the same as they were during the base period.
For example, if dividends from foreign members accounted for 10 percent of the group’s gross receipts during the base period and 10 percent of the gross receipts over the past four years, then the taxpayer’s regular credit would be the same regardless of whether such dividends were counted in gross receipts. However, if the share of such dividends in gross receipts had grown over time, the taxpayer’s credit would be smaller if those dividends were included in the definition of gross receipts than if they were excluded. Conversely, if the dividend share declined over time the inclusion of the dividends in gross receipts would give the taxpayer a larger credit.

The effect that changes in the scope of gross receipts would have on the marginal incentive that the regular credit provides to a particular taxpayer would depend on whether the changes affect the credit constraints that the taxpayer faces. Specifically:
• The inclusion of a component that has increased its relative share since the base period would eliminate the marginal incentive for a taxpayer who had been able to earn the credit if the inclusion caused that taxpayer’s base amount to exceed current-year QREs;
• the inclusion of a component that has increased its relative share would increase the marginal incentive if it increased the taxpayer’s base amount from being less than half of its current-year QREs to more than half (because this would remove the taxpayer from being subject to the 50-percent base constraint);
• the inclusion of a component that has decreased its relative share since the base period would have effects opposite to those described in the first two bullets; and
• if any potential component of gross receipts accounts for the same proportion of the taxpayer’s total gross receipts in the base period and over the last 4 years, then the marginal incentive would not be affected by the inclusion or exclusion of that component.
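The dividend-share logic in the paragraphs above can be worked through numerically. This is an illustrative sketch under the standard section 41 mechanics (base amount equals the fixed-base percentage times average prior-4-year gross receipts, floored at half of current QREs); every dollar figure is hypothetical.

```python
def base_amount(fixed_base_pct, avg_gross_receipts, qre):
    # Fixed-base percentage is capped at 16 percent; the base amount is
    # floored at 50 percent of current-year QREs.
    return max(min(fixed_base_pct, 0.16) * avg_gross_receipts, 0.5 * qre)

def regular_credit(qre, base_period_qre, base_period_receipts, avg_receipts):
    fbp = base_period_qre / base_period_receipts  # fixed-base percentage
    return 0.20 * max(qre - base_amount(fbp, avg_receipts, qre), 0.0)

qre = 12e6                                 # current-year QREs
base_other, base_divs = 180e6, 20e6        # base-period receipts components
now_other, now_divs = 225e6, 75e6          # prior-4-year average components

# Dividends were 10% of base-period receipts but are 25% of current receipts.
with_divs = regular_credit(qre, 6e6, base_other + base_divs, now_other + now_divs)
without_divs = regular_credit(qre, 6e6, base_other, now_other)

# Because the dividend share grew, including dividends raises the base
# amount (0.03 x $300M = $9M vs. 0.0333 x $225M = $7.5M) and shrinks
# the credit ($600,000 vs. $900,000).
```

Rerunning the sketch with dividends held at the same 10 percent share in both periods produces identical credits under either definition, which is the point made in the example above.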
The broader the definition of gross receipts, the less credit taxpayers would earn under the AIRC (for a given set of credit rates). This would reduce the revenue cost of the AIRC and it may reduce the marginal incentive provided to some taxpayers, depending on where their resultant ratio of QREs to gross receipts leaves them in the credit’s graduated rate structure. Unless Congress reverses its decision and reinstitutes the AIRC for tax years after 2008, the amount of research spending will not be affected by any reduction in that credit’s marginal incentive resulting from a broader interpretation of gross receipts. Under this option, gross receipts would consist of all payments received from parties outside of the group by any member of the group that are derived from the member’s trade or business within the United States, except for those extraordinary items currently excluded by Treasury regulations. Sales of products by a U.S. member to a foreign member that are subsequently sold to a foreign third party would be excluded, as would be any dividend or royalty payments that are derived from such sales. Any amounts that a foreign member receives from third parties that are derived from that member’s trade or business within the United States would be included in the group’s total gross receipts on a current basis (not just when such amounts are repatriated to the United States). Also, any sales that a domestic member makes to third parties within the United States of products imported from a foreign member (even when the latter has no trade or business within the United States) would be included in the group’s gross receipts. If Section 41(c)(7) of the IRC reflects an expectation by Congress that taxpayers would not fund research within the United States out of sales made by foreign members, this option would meet that expectation. 
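The AIRC's sensitivity to the breadth of gross receipts can be illustrated in the same way. The 1, 1.5, and 2 percent thresholds come from former section 41(c)(4); the 3, 4, and 5 percent tier rates are the final (post-2006) rates as we understand them; and all dollar amounts are hypothetical.

```python
def airc(qre, avg_gross_receipts):
    """Sketch of the alternative incremental research credit (AIRC):
    graduated rates apply to QREs above 1%, 1.5%, and 2% of average
    gross receipts. The 3/4/5 percent rates shown are illustrative."""
    t1 = 0.010 * avg_gross_receipts
    t2 = 0.015 * avg_gross_receipts
    t3 = 0.020 * avg_gross_receipts
    credit = 0.0
    credit += 0.03 * max(min(qre, t2) - t1, 0.0)  # first tier
    credit += 0.04 * max(min(qre, t3) - t2, 0.0)  # second tier
    credit += 0.05 * max(qre - t3, 0.0)           # third tier
    return credit

# A broader gross-receipts definition raises the tier thresholds and
# lowers the credit for the same QREs.
narrow = airc(5e6, 150e6)   # intragroup transfers excluded
broad = airc(5e6, 200e6)    # intragroup transfers included
```

In this hypothetical, the narrow definition yields $152,500 of credit and the broad definition $120,000, showing why the scope of gross receipts directly determined the amount of AIRC earned.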
It would be consistent with the view that foreign members should be allowed to use their resources for the research they perform abroad and, given that the foreign research does not qualify for the credit, the foreign resources should not enter into the credit computation either. In addition, this option would provide symmetry between the treatment of sales by U.S. members of products imported from foreign affiliates and sales by foreign members of products that they purchase from U.S. members. However, this option would provide disparate treatment between foreign sales that a U.S. member makes directly to a foreign third party (which would be included in the group’s gross receipts) and foreign sales that a U.S. member passes through a foreign member (which would be excluded). This disparate treatment would give regular credit users some incentive to pass their sales through foreign members rather than to sell directly to foreign third parties. It also would provide some advantage for regular credit users to manufacture and sell products overseas, rather than to manufacture them in the United States and sell them directly to third parties overseas; however, it would not give those users any advantage to manufacture overseas, rather than to manufacture in the United States and pass their sales through foreign members. It is not clear that any of these incentive effects that would result from this option would be significant relative to the many other tax and nontax factors that businesses consider when deciding where to locate their activities and how to route products and transfers through their affiliates. Perhaps most importantly, this option could exclude a substantial amount of export sales of U.S. multinational corporations from gross receipts. This result would favor regular credit users whose export sales have increased as a share of their total sales and disfavor users whose export shares have declined. 
It would also provide more generous AIRC benefits to users that export relatively large shares of their products than to users whose export shares are smaller. These disparities in the credit benefits across taxpayers serve no useful purpose. This option, which would be consistent with IRS’s current interpretation that the aggregation rules for computing the group credit apply only to QREs and not to gross receipts, appears to be inconsistent with Congress’s intent of using the ratio of QREs to gross receipts as a measure of a taxpayer’s research effort in the base period and in the current year. This option would eliminate any double-counting of QREs but would overstate the resources available to the group by double-counting sales and income payments between group members. One consequence of this approach would be to encourage regular credit users to reduce the volume of intragroup transfers as a share of total gross receipts relative to what that share was during the base period. Distorting business practices in this manner would serve no purpose and could reduce efficiency. For AIRC users this option would reduce the amount of credit they could earn and would put taxpayers with relatively high volumes of intragroup transactions at an unjustified disadvantage. This option is preferable to option 1 because it would not discriminate among taxpayers on the basis of whether they export their products or sell them domestically because it would include all sales that are effectively connected with the conduct of a trade or business within the United States in a group’s gross receipts. This option is preferable to option 2 because it would eliminate any double-counting of intragroup transfers in gross receipts, which is important if Congress wishes to continue using gross receipts as a measure of the resources available to corporations. 
Relative to option 1, this option would give corporate groups that use the regular credit some incentive to produce goods abroad that they intend to sell abroad, rather than produce them in the United States; however, it is not clear that this incentive is significant relative to other factors that influence the location of production. Option 3 would be less costly than option 1 and more costly than option 2 in terms of historic claims by users of the AIRC. In terms of future claims by users of the regular credit, the relative costs of the three options are difficult to determine because they depend on how the proportionate shares of certain types of intragroup transfers in the future will compare to what they were during taxpayers’ base periods.

Substantiating the validity of a research credit claim is a demanding task for both taxpayers and the Internal Revenue Service (IRS), particularly in cases where research is not a primary function of the business in question. Several factors have led to a considerable degree of controversy between IRS and taxpayers over the types of evidence that are sufficient to support a claim for the credit:
• Most taxpayers do not maintain project-based accounts for normal business purposes (and even those that do must collect additional details solely for purposes of claiming the credit);
• there has been an increase in the number of taxpayers filing claims on amended returns, based on studies prepared by consultants; and
• there is no specific guidance in law, regulations, or from IRS examiners as to what constitutes sufficient substantiation.

Neither the Internal Revenue Code nor Department of the Treasury regulations contain specific recordkeeping requirements for claimants of the research credit.
However, claimants are subject to the general recordkeeping rules of the IRC and Treasury regulations, applicable to all taxpayers, that require them to keep books of account or records that are sufficient to establish the amount of credit they are claiming. In the case of the research credit, a taxpayer must provide evidence that all of the expenses for which the credit is claimed were devoted to qualified research activities, as defined under IRC section 41. Under that section the qualification of research activities is determined separately with respect to each business component (e.g., a product, process, or formula), which means that the taxpayer must be able to allocate all of its qualified expenses to specific business components. Moreover, the taxpayer must be able to establish these qualifications and connections to specific components not only for the year in which the credit is being claimed, but also for all of the years in its base period.

The tax practitioners we interviewed recognize that a nexus needs to be shown between expenses and business components or projects; however, they noted that documenting this connection requires considerable effort for businesses that use cost center accounting, rather than project accounting, to track their expenses. Standard business accounting typically focuses on the financial status of organizational units, such as geographical or functional departments. Large businesses often have cost centers, which are separately identified units (such as research, engineering, manufacturing, and marketing departments) in which costs can be segregated and the manager of the center is responsible for all of its expenses. Project accounting is the practice of creating reports that track the financial status of specific projects, the costs of which are often incurred across multiple organizational units.
Practitioners that work with both large multinational corporations and small family-owned businesses told us that most of their clients claiming the research credit do not use project accounting. Project accounting is typically used by government contractors, which are usually required to account for their costs on a contract-by-contract basis, and in certain industries, such as pharmaceuticals and software development. However, even those firms that use project accounting need to collect additional details that are required only for purposes of claiming the credit. Consequently, many firms rely on third-party consultants (with expertise in the complexities of research credit rules) to conduct studies that bridge their cost-center accounting of research expenditures to project-based accounting that is acceptable to IRS. IRS and practitioners often refer to this attempt to bridge the two accounting approaches as the “hybrid” approach.

A key component of the documentation needed to support a credit claim, regardless of which accounting approach a taxpayer uses, is the allocation of wage expenses between qualifying and nonqualifying activities. In the case of a taxpayer using project accounting, those accounts make it easier to demonstrate that an employee worked on a project to develop a new or improved business component; however, even then, additional support is needed to show how much of the employee’s time was spent on activities that qualify as a process of experimentation intended to eliminate uncertainty (or on a qualifying support activity). In the case of a taxpayer using cost-center accounting, documentation also needs to be generated to show the amount of wages devoted to each qualifying project. Wage allocations made by consultants are typically based on after-the-fact surveys or interviews of managers who are asked to estimate the percentage of time that their employees spent on different projects and activities.
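A minimal sketch of that survey-based “hybrid” bridge from cost-center wages to project-level QREs might look like the following; the cost centers, projects, time shares, and qualified-activity fractions are all hypothetical.

```python
# Hypothetical cost-center payroll totals.
cost_center_wages = {"engineering": 4_000_000, "quality": 1_000_000}

# After-the-fact manager survey: for each center, the estimated share of
# time spent on each project and the fraction of that time devoted to
# qualified (experimentation or support) activity.
survey = {
    "engineering": [("widget_redesign", 0.40, 0.75),
                    ("routine_support", 0.60, 0.00)],
    "quality":     [("widget_redesign", 0.20, 0.50),
                    ("routine_support", 0.80, 0.00)],
}

# Bridge cost-center wages to project-level qualified wage expenses.
project_qre = {}
for center, rows in survey.items():
    for project, time_share, qualified_fraction in rows:
        wages = cost_center_wages[center] * time_share * qualified_fraction
        project_qre[project] = project_qre.get(project, 0.0) + wages

# widget_redesign: $4M x 0.40 x 0.75 + $1M x 0.20 x 0.50 = $1.3M of QREs.
```

The dispute described in this section is essentially about whether the survey-derived time shares and qualified fractions in such a bridge have a sufficient evidentiary basis.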
In addition, subject matter experts (SME), such as a firm’s managers, scientists, and engineers, are often interviewed to gain explanations of how particular activities meet the standards of qualifying research. Some of the consultants also told us that they try to gather whatever relevant technical documentation may exist to support this testimonial evidence. In the case of large corporations with numerous research projects, detailed allocation estimates may be made for only a representative sample of projects and then extrapolated across the population of all projects.

There were wide differences of opinion between the IRS examiners and the tax practitioners we interviewed regarding what methods are acceptable for allocating wages between qualifying and nonqualifying activities. Practitioners noted that IRS used to accept cost center or hybrid accounting in the absence of project accounting; however, in recent years IRS has been much less willing to accept claims based on the first two approaches. They also said that IRS examiners now regularly require contemporaneous documentation of qualified research expenses (QRE), even though this requirement was dropped from the credit regulations in 2001. Some practitioners suggested that the changes in IRS’s practices came about because examiners were having difficulty determining how much QREs to disallow in audits when they found that a particular activity did not qualify. Others said that IRS does not want to devote the considerable amounts of labor required to review the hybrid documentation.

The IRS officials we interviewed said that many more taxpayers have or had project accounting than was suggested by the tax practitioners. The officials said that the consultants ignored these accounts because they boxed them in (in terms of identifying qualified research expenses).
They noted that, before the surge in new claims by firms that had never claimed the credit previously, taxpayers used to supply more documentary evidence, such as budgets and e-mails. In their view, the use of high-level surveys and uncorroborated testimony of SMEs is not a sufficient basis for identifying QREs. The officials noted that sometimes consultants conduct interviews for one tax year and then extrapolate their results to support credit claims for multiple earlier tax years. In their view, these are the types of claims that the new penalty on erroneous claims will combat. These officials would also like to see a new line item added to tax returns on which taxpayers would be required to show the amount of the research deduction they were claiming under IRC section 174. They would like to make taxpayers go on record as having considered the expenses to be research when they first incurred them, rather than after the fact on an amended return.

A common complaint among the practitioners we interviewed is that IRS examiners routinely reject their credit studies but will not say what would be acceptable, short of contemporaneous project-based accounts. They also say that IRS conflates a taxpayer’s general requirement to keep records with what is required to substantiate credit claims. The taxpayers do have records of all their expenses, but not of which ones are tied to qualified activities. Supplemental records and narratives are needed to explain how the expenses qualify. The practitioners said that it is unreasonable to expect that many businesses will maintain contemporaneous records of how much time each of their employees spends on qualified activities simply for purposes of claiming the credit; therefore, after-the-fact estimated allocations should be allowed.
Some observed that when Congress renewed the credit in 1999, it expressed concern about unnecessary and costly taxpayer recordkeeping burdens and reaffirmed that “eligibility for the credit is not intended to be contingent on meeting unreasonable recordkeeping requirements.” They also note that in two recently decided research tax credit cases, the courts allowed the credit in the absence of contemporaneous allocations because the evidence provided by the taxpayer was convincing. IRS officials told us that their current practices are consistent with these recent decisions, which, they emphasize, require estimates to have a credible evidentiary basis. The key issue is not the contemporaneity of the evidence, but its quality (e.g., time survey estimates made by employees who actually performed or supervised the research, rather than estimates made by someone in the firm’s tax department who had no first-hand knowledge of the research).

Some practitioners doubted the usefulness of specific recordkeeping guidelines, given the wide range of practice across industries. Others would greatly welcome additional guidance and thought that the separate audit technique guides that IRS developed for the pharmaceuticals and aerospace industries, which several practitioners commended, could serve as models. IRS officials say that they do not require project-based accounting records, and they disagree with taxpayer assertions that they routinely deny credit claims for lack of such accounting or lack of contemporaneous records. Examiners consider these types of records to be the most reliable and relevant form of substantiation; however, in the absence of project-based accounts, the examiners are instructed to consider and verify all credible evidence.
The officials note that two audit technique guides (ATG) they have published—one (issued in June 2005) covering research credit issues in general and the other (issued in May 2008) covering issues relating to amended claims—provide general descriptions of necessary documentation and list specific types of documentation that would be acceptable for addressing particular issues. The latter states that IRS does not have to accept either estimates or extrapolations because IRC section 6001 requires taxpayers to keep records to support their claims. It instructs examiners to consider the extent to which taxpayers rely on oral testimony and/or estimations, rather than documentation, when deciding whether to reject a claim, and it states that information to support the claim should be contemporaneous. Examiners are also directed to consider whether oral testimony was from employees who actually performed the qualified research and how much time elapsed between the research and the testimony.

To enable examiners to make such determinations without having to go through often voluminous amounts of documentation, IRS is now requiring examiners to issue a standardized information document request (IDR) questionnaire to all taxpayers with amended claims for the research tax credit that are in the early stages of examination. This IDR asks taxpayers for complete answers (not just references to other documentation) to questions concerning key aspects of the support for their credit claims. For example, the IDR asks what percentage of QREs are based on oral testimony or employee surveys, who was interviewed or surveyed, and how much time elapsed between the claim year and the time of the interview or survey. If some of the support for the answers is contained in other records, the taxpayer must supply specific location references.
The ATG advises examiners that, in some cases, they can use the responses to the IDR alone to determine that the amount claimed is not adequately supported and should be disallowed without further examination. The IRS officials we interviewed pointed to the research credit recordkeeping agreements (RCRA) as examples of the recordkeeping that they would accept, and some practitioners said that IRS could use the knowledge it gained through RCRAs about industry-specific recordkeeping practices to develop more industry-specific recordkeeping guidance. The officials said a contemporaneous allocation was not an absolute requirement, but timeliness is a major factor in improving the credibility of any evidence.

In designating research credit claims (i.e., claims made after the initial filing of a tax return) as a Tier I compliance issue, IRS noted that a growing number of the credit claims were based on marketed tax products supported by studies prepared by the major accounting and boutique firms. It further noted that these studies were typically marketed on a contingency fee basis and exhibited one or more of the following characteristics: high-level estimates; biased judgment samples; lack of nexus between the business component and QREs; and inadequate contemporaneous documentation. IRS’s concern is focused on credit claims that were not taken into account on a taxpayer’s original return, and the Tier I coverage is limited to that type of claim. Most of these claims are made on amended returns, which generally must be filed within 3 years after the date the corporation filed its original return or within 2 years after the date the corporation paid the tax (if filing a claim for a refund), whichever is later. The period may be longer for taxpayers that file for extensions.
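The general limitations rule just described reduces to a simple “later of” computation; this sketch ignores extensions and the other special rules mentioned above, and the dates are hypothetical.

```python
from datetime import date

def refund_claim_deadline(return_filed, tax_paid):
    """Later of 3 years after the return was filed or 2 years after the
    tax was paid (the general rule described above; extensions and other
    special rules are ignored in this sketch)."""
    three_years_after_filing = return_filed.replace(year=return_filed.year + 3)
    two_years_after_payment = tax_paid.replace(year=tax_paid.year + 2)
    return max(three_years_after_filing, two_years_after_payment)

# Hypothetical: return filed March 15, 2008; tax paid June 1, 2010.
deadline = refund_claim_deadline(date(2008, 3, 15), date(2010, 6, 1))
# The 2-years-from-payment date (June 1, 2012) controls because it is later.
```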
IRS officials have noted a particular concern with new or expanded credit claims that can be made for tax years up to 20 years earlier than the current tax year, provided that the taxpayer still has unused tax credits or net operating loss carryforwards from that earlier year. These long-delayed credit changes are especially troublesome for IRS examiners because many taxpayers do not file an amended Form 6765 or specifically indicate anywhere on their current year returns that they have changed the amounts of credit claimed for earlier years. Consequently, the adjusted claims are not likely to be detected unless IRS is already auditing the taxpayer’s current return. IRS officials said that this practice has gone from rare to common in recent years and is being used by both large and mid-size firms.

IRS officials expressed concern that when taxpayers do submit amendments to their Forms 6765, they often do so late in an audit, after IRS has already spent significant time reviewing the initial claims. In many cases the taxpayers settle for 50 cents on the dollar as soon as IRS challenges a claim. In other cases, taxpayers make claims based on studies that consultants have sold to them on a contingency-fee basis. Treasury Circular No. 230 now prohibits those who practice before IRS from collecting contingency fees for these types of studies; however, some studies may be prepared by consultants who do not practice before IRS. IRS officials said one reason that led the agency to designate the credit as a Tier I issue was to push taxpayers to make better initial credit claims before IRS spends substantial time on audits. As a result of the Tier I designation, the research credit has been assigned an issue management team to ensure that the issue is fully developed with appropriate direction and a compliance resolution strategy.
Three requirements that currently form part of this strategy are that:
1. all claims for the credit that are not made on or before the due date of the taxpayer’s Form 1120 for a given tax year must be filed at IRS’s Ogden Service Center;
2. examiners must issue a standardized information document request (IDR) to taxpayers at the outset of all new examinations of the credit; and
3. in all cases where any amount of a research credit claim is disallowed by IRS, the examiners must determine whether the recently enacted penalty for filing erroneous claims for refund or credit should be applied.

The examiners must obtain and document the concurrence of a technical advisor in all such cases where they decide not to impose the penalty.

Although most of the tax practitioners we interviewed acknowledged that there was a proliferation of aggressive and sometimes sloppy research credit claims, they pointed to many legitimate reasons for companies to file claims on amended returns, including the following:
• Substantiating and documenting research expenses in a manner that is acceptable to IRS is time-consuming and labor intensive, making it difficult to file for the credit on a timely basis on an original return. The firms’ tax preparers need the assistance of the firms’ scientists, engineers, and technicians, who cannot be made available in time for a current-year filing. Pulling these technical experts away from their research represents a significant financial burden for taxpayers. Consequently, when taxpayers go through this effort it makes sense for them to cover multiple tax years at a time on amended returns.
• The prevalence of amended returns in recent years also can be attributed to long-standing uncertainties in credit regulations. The definition of qualified research expenses was only resolved in final regulations in 2003, and the “discovery test” was also abandoned in the final regulations by Treasury and IRS.
This clarification of the rules prompted taxpayers to file claims for the credit for past tax years on amended returns. Similarly, changes in regulations relating to the definition of gross receipts also prompted many taxpayers to file amended claims. Start-up companies often don’t consider it worthwhile to file credit claims until they turn profitable. Once they decide to make the effort, they also submit their claims for earlier years through amended returns. The long-term nature of research projects is another reason why taxpayers submit claims on amended returns. Taxpayers must often know the end result of a process/project to establish the eligibility of research expenses as part of a “process of experimentation,” which is part of the statutory definition of qualified expenses. Many firms, large and small, don’t realize that they actually do things that qualify for the credit. Once outside consultants make them aware of this fact, it makes sense for them to want to go back and claim the credit for earlier years as well. Many large practitioners we interviewed said that aggressive and poorly documented research credit claims are largely generated by “boutique” research credit consultants who aggressively market their services. The larger practitioners feel that IRS has taken things too far by presuming that all amended claims are abusive. They said the larger accounting firms are governed by strict professional standards and the new penalties will not have much effect on their behavior, but the penalties should help to reduce abuses by the boutique firms. Practitioners did express concern that the new penalties would make the audit and appeals processes even more contentious and they questioned the appropriateness of imposing penalties in areas where Treasury guidance is limited and problematic. 
The only practitioner we interviewed that had actual experience with the new penalties said that the penalties were typically applied in all cases where claims were reduced; however, after taxpayers had spent the time and money to make legal cases against them, all of the penalties were rescinded. The IRS officials we interviewed expressed strong disagreement with the view of the large accounting firms that the abusive amended returns problem is primarily a “boutique” practitioner problem. They said that problematic practices can be found at every level of practitioner. However, the officials did note that the use of the credit has expanded downward in terms of the size of the claimants and that the expansion has been driven by the growth of boutique research credit consultant shops. All of the difficulties that taxpayers face in substantiating their QREs are magnified when it comes to substantiating QREs for the historical base period (1984 through 1988) of the regular credit. Taxpayers are required to use the same definitions of qualified research and gross receipts for both their base period and their current-year spending and receipts. However, given that few firms have good (if any) expenditure records dating back to the 1980s base period, most firms are unable to precisely adjust their base period records for the changes in definitions promulgated in subsequent regulations and rulings. Taxpayers also have great difficulty adjusting base period amounts to reflect the disposition or acquisition of research-performing entities within their tax consolidated groups. Some practitioners would like to see some flexibility on IRS’s part in terms of the use of estimates and employee testimony to substantiate QREs in accordance with the Cohan rule; other practitioners simply suggested doing away with the regular credit. They believe that some taxpayers will choose to use the new ASC simply to avoid the burden of base period documentation. 
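To see why the base period matters, the regular credit computation can be sketched in a few lines. This is an illustrative sketch with made-up figures, not IRS guidance: under the regular credit, the base amount is the taxpayer's fixed-base percentage (derived from 1984 through 1988 spending and capped at 16 percent) times its average gross receipts for the 4 preceding years, but never less than 50 percent of current-year QREs.

```python
def regular_credit(qre, avg_gross_receipts, fixed_base_pct, rate=0.20):
    """Regular research credit: 20% of QREs above the base amount.

    The fixed-base percentage is capped at 16 percent, and the base
    amount can never be less than 50 percent of current-year QREs
    (the minimum base constraint discussed in the text).
    """
    pct = min(fixed_base_pct, 0.16)
    base = max(pct * avg_gross_receipts, 0.50 * qre)
    return rate * max(qre - base, 0.0)

# A taxpayer bound by the 50-percent minimum base: even with a low
# fixed-base percentage, the base is half of current QREs, so the
# marginal credit is effectively 20% x 50% = 10 cents per dollar.
print(regular_credit(qre=1_000_000, avg_gross_receipts=2_000_000,
                     fixed_base_pct=0.03))  # base = 500,000 -> credit 100,000
```

The sketch also shows why a statutory change allowing taxpayers to elect the maximum 16-percent fixed-base percentage would relieve them of base period recordkeeping: the base computation would no longer depend on any 1984 through 1988 data.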
One IRS official noted that IRS is not likely to challenge a taxpayer’s base amount if the taxpayer uses the maximum fixed base percentage; however, he did not think that IRS would have the authority to say that taxpayers could take that approach without showing any records at all for the base period. Neither the IRS nor Treasury officials we interviewed saw any administrative problems arising if the IRC were changed to relieve taxpayers of the requirement to maintain base period records if they used the maximum fixed base percentage. Our analysis of taxpayer data from SOI for 2005 suggests that about 25 percent of all regular credit users had fixed base percentages of 16 percent or were subject to the minimum base constraint and would remain subject to that constraint even if they elected to use a fixed base percentage of 16 percent. Many taxpayers use statistical sampling to estimate their QREs, and IRS frequently uses sampling when auditing taxpayers’ records supporting research credit claims. Several practitioners we interviewed had specific concerns with IRS’s guidance and audit practices relating to sampling; however, some noted that they have seen improvements in recent months. The practitioners’ biggest concern is that, unless taxpayers can achieve a 10 percent relative precision in their estimates, IRS makes them use the lower limit of the confidence interval for their estimates of QREs, which is the least advantageous to the taxpayer. Practitioners say this standard is too difficult to meet, even in cases where taxpayers use large samples, and that IRS should have a less demanding threshold for allowing taxpayers to use point estimates. Moreover, they objected to IRS’s requirement that they exclude the “certainty stratum” when calculating relative precision, which they considered to be simply bad statistics. 
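The relative-precision test at issue can be sketched as follows. This is our own simplified illustration (it assumes a normal-approximation confidence interval and ignores stratification and the certainty-stratum question; the function name is ours): relative precision is the confidence interval's half-width divided by the point estimate, and if it exceeds 10 percent, only the lower confidence bound is allowed.

```python
def allowed_qre_estimate(point_estimate, std_error, z=1.96, threshold=0.10):
    """Illustrate the 10-percent relative-precision test described above.

    If the confidence interval's half-width is within the threshold
    (as a share of the point estimate), the point estimate is allowed;
    otherwise the lower limit of the interval, which is least
    advantageous to the taxpayer, is used instead.
    """
    half_width = z * std_error
    relative_precision = half_width / point_estimate
    if relative_precision <= threshold:
        return point_estimate
    return point_estimate - half_width

# A tight sample (half-width about 4.9% of the estimate) keeps its
# point estimate of QREs.
print(allowed_qre_estimate(4_000_000, 100_000))
# A loose sample (about 24.5%) is pushed down to the lower bound.
print(allowed_qre_estimate(4_000_000, 500_000))
```

The sketch also makes the practitioners' complaint concrete: shrinking the standard error enough to pass the threshold generally requires larger samples or better stratification, which is what drives their compliance costs.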
IRS officials responded that having a precision threshold encourages taxpayers to do a quality sample and that 10 percent precision is a good indicator of a high quality sample. They said that without some control standards taxpayers could try to make do with very small samples. The officials also noted that there are methods other than increasing sample sizes, such as improving sample design, population definition, and stratification techniques, by which taxpayers can reduce their sampling errors. With respect to the exclusion of the certainty stratum, IRS acknowledged that this requirement was not justified on statistical grounds; however, officials believe it is needed to prevent potential abuses. They are concerned that taxpayers would include extraneous accounts in their 100 percent stratum for the sole purpose of reducing their relative precision. IRS officials said that they are in the final stages of releasing guidance on sampling that addresses practitioners’ concerns regarding the certainty stratum and the 10-percent precision test. Practitioners also expressed concerns that IRS was hardening its position against accepting multi-year samples. They said it is more cost-effective to take one sample that covers multiple years and has a reasonable overall accuracy for the entire time period than to take several single-year samples that have narrow confidence intervals each year. IRS acknowledged that the practitioners’ point was correct from a statistical point of view; however, officials noted that, given the incremental nature of the credit, it is important for estimates of QREs to be accurate for each specific year, not just over the multi-year period as a whole. In addition, IRS does not want to encourage taxpayers to hold off filing their claims for several years and then do a multi-year sample. Practitioners and taxpayer representatives differed on the usefulness of IRS’s RCRA and prefiling agreement (PFA) programs. 
The RCRA program was a pilot effort intended to let IRS develop and evaluate procedures that would reduce costs for both taxpayers and IRS by resolving issues concerning the type and amount of documents that a taxpayer must maintain and produce to support research credit claims. Taxpayers that complied with the terms of the agreements worked out with IRS are deemed to have satisfied their recordkeeping requirements for the tax years covered by the agreement. Five taxpayers participated in the pilot program. The PFA program is an ongoing effort by IRS designed to permit taxpayers, before filing their returns, to resolve the treatment of an issue that would likely be disputed in an examination. Some of the practitioners had had good experiences with PFAs for particular clients, but they noted that the $50,000 fee was too expensive and that IRS has been less willing to enter into PFAs because it did not have sufficient staff resources. Other practitioners said that RCRAs and PFAs are not likely to be much help, given the animosity and distrust between taxpayers and IRS. They think that IRS is asking for too much in these agreements. One practitioner’s firm had five recent experiences with PFAs, all of them bad, so the firm no longer recommends them to clients. In the current environment taxpayers are unwilling to invite IRS in for a look at their records, and taxpayers do not believe that an RCRA ensures that IRS will not ask for additional documents during an exam. In addition, the practitioners said that RCRAs are unlikely to be helpful in the long term, given the variable nature of research projects. Agreements made in an RCRA may not be applicable to other research projects in future tax years, or even to the same project in future tax years as the project evolves. 
When Congress originally enacted the research credit in 1981 it included rules “intended to prevent artificial increases in research expenditures by shifting expenditures among commonly controlled or otherwise related persons.” Without such rules a corporate group might shift current research expenditures away from members that would not be able to earn the credit due to their high base expenditures to members with lower base expenditures. A group could, thereby, increase the amount of credit it earned without actually increasing its research spending in the aggregate. Department of the Treasury and Internal Revenue Service (IRS) officials told us that the rules also guard against manipulation within a group that would shift credits from members with tax losses to those with tax liabilities. Under the Internal Revenue Code (IRC), for purposes of determining the amount of the research credit, the qualified expenses of the same controlled groups of corporations are aggregated together. The language of the relevant subsection specifically states that: A. all members of the same controlled group of corporations shall be treated as a single taxpayer, and B. the credit (if any) allowable under this section to each such member shall be its proportionate share of the qualified research expenses and basic research payments giving rise to the credit. Congress directed that Treasury regulations drafted to implement these aggregation rules be consistent with these stated principles. Under current Treasury regulations the controlled group of corporations must, first, compute a “group credit” by applying all of the credit computational rules on an aggregate basis. The group must then allocate the group credit amount among members of the controlled group in proportion to each member’s “stand-alone entity credit” (as long as the group credit amount does not exceed the sum of the stand-alone entity credits of all members). 
If the group credit does exceed the sum of the stand-alone credits, then the excess amount is allocated among the members in proportion to their share of the group’s aggregate qualified research expenses (QRE). The stand-alone entity credit means the research credit (if any) that would be allowed to each group member if the group credit rules did not apply. Each member must compute its stand-alone credit according to whichever method provides it the largest credit for that year without regard to the method used to compute the group credit. The group credit may be computed using either the rules for the regular credit or the rules for the alternative simplified credit (ASC) (or, until the end of tax year 2008, the rules for the alternative incremental research credit (AIRC)). The group credit computation is the same for all members of the group. For purposes of the initial allocation of the group credit among members that file their own federal income tax returns, consolidated groups of corporations are treated as single members. However, once a consolidated member receives its allocation of the group credit, that allocation must be further allocated among the individual members of the consolidated group in a manner similar to the one used for the initial allocation. Although some private sector research credit consultants told us that the group credit rules do not affect large numbers of taxpayers, several others said that the opposite was true, with one pointing out that the rules affect all groups that have any of the following: members that are between 50 percent and 80 percent owned; noncorporate members; members departing in a given year; or U.S. subsidiaries that are owned by foreign parents and are members of different U.S. consolidated groups. One consultant that works primarily with mid-sized businesses, including many S corporations, noted that such corporations are heavily affected by these rules. 
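The two-step allocation just described can be expressed in a few lines. This is a simplified sketch of our own (it takes each member's stand-alone credit as given and ignores the further allocation within consolidated members):

```python
def allocate_group_credit(group_credit, stand_alone, qres):
    """Allocate a controlled group's credit among its members.

    stand_alone: each member's stand-alone entity credit.
    qres: each member's current-year qualified research expenses.
    Per the regulations described in the text, the group credit is
    allocated in proportion to stand-alone credits up to their sum;
    any excess is allocated in proportion to QRE shares.
    """
    sum_sac = sum(stand_alone)
    sum_qre = sum(qres)
    if group_credit <= sum_sac:
        return [group_credit * sac / sum_sac for sac in stand_alone]
    excess = group_credit - sum_sac
    return [sac + excess * q / sum_qre
            for sac, q in zip(stand_alone, qres)]

# Group credit 90 is below the sum of stand-alone credits (100), so
# each member receives 90% of its stand-alone amount.
print(allocate_group_credit(90, [60, 40], [500, 500]))   # [54.0, 36.0]
# Group credit 120 exceeds the sum (100); the excess 20 is split by
# QRE shares (75%/25%).
print(allocate_group_credit(120, [60, 40], [750, 250]))  # [75.0, 45.0]
```

The burden the consultants describe sits in the inputs, not this arithmetic: producing the `stand_alone` figures requires computing the credit under every available method for every member.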
A second consultant that also works primarily with S corporations said that between 10 and 15 percent of its clients are affected by these rules. The consultants with whom we discussed this issue agreed that the rules were very burdensome for those groups that are affected. Some very large corporate groups must do these computations for all of their subsidiaries, which could number in the hundreds, even though the computations have no effect on the total credit that a group earns. None of these affected groups can benefit from the simplified recordkeeping that the ASC offers to other taxpayers because they must be able to show which stand-alone credit method provides the highest credit for each member, which can only be done by computing the credit under both the ASC and regular credit rules (and AIRC rules in the years for which it was available) for each member. Some consultants expressed concern that IRS could reject credit claims completely even if the only deficiency is in the allocation computation. The primary objection that taxpayer representatives have raised with respect to the group credit regulations is that all affected groups are required to use the same burdensome allocation procedures even though there is no clear basis for them in the IRC, which they say only requires that the allocation be in proportion to the QREs “giving rise to the credit.” Some commentators contend that the stand-alone credit method does not satisfy the principle set out in the IRC any better than would a simpler allocation based on each member’s share of current QREs. If a group, as a whole, is above its base spending amount, then an additional dollar of spending by any group member will increase the group credit by the same amount, regardless of how the group credit total is allocated among members. Some would say that, in this sense, all QREs give rise to the credit to the same extent. 
Several public commentators and consultants we interviewed recommend that groups be allowed to allocate their group credits by any reasonable means, as long as the sum of the credits that each member receives does not exceed the group credit amount. Treasury maintains that a single, prescribed method is necessary to ensure the group’s members collectively do not claim more than 100 percent of the group credit. An official explained that if two members of a group each used a different method that maximized their share of the group credit, this would result in the members claiming in aggregate more than the group credit amount. If taxpayers could use any reasonable method of allocation and group members used different methods, then IRS would have no basis for saying whose individual credit had to be reduced in order that the aggregate claims by members did not exceed the group credit amount. While acknowledging that disagreements within groups are likely to be rare, the official noted a case where representatives of two members of the same group separately argued in favor of differing allocation rules. Treasury also maintains that the stand-alone credit approach is more consistent with Congress’s intent to have an incremental credit than is the gross QRE approach. According to Treasury, the former approach appears to be the only one that would provide each member some incentive to exceed their base spending amount, given that each member may not know the tax positions of other group members (i.e., current-year and base QREs) until the end of the tax year. The individual member may not know the extent to which one more dollar of its own spending will increase the group credit amount, but it does know that by maximizing its stand-alone credit amount, it will maximize its share of whatever amount the group earns as a credit in the aggregate. 
An IRS official added that requiring everyone to use the stand-alone method would ensure a fairer distribution of the credit within groups. Otherwise, a parent corporation may discriminate in favor of 100-percent-owned members and against 50-percent-owned members in the allocation of credits because some of the benefit given to the latter would go to unrelated parties. Allowing controlled groups to use an alternative allocation method could significantly reduce both the compliance burden on the affected groups and IRS’s cost of verifying their compliance. If a controlled group agrees to use the ASC computation for its group credit and allocates that credit among its members on the basis of either each member’s current QREs or each member’s stand-alone ASC, then no member would have to maintain and update records from the base period for the regular credit, nor would IRS have to review those records. Under the current regulations every member’s credit claim would be open to revision if IRS found that any of their base period spending records are deficient. This alternative approach should not impose any other types of costs on IRS beyond what it faces under the current regulations. Under either of these approaches the only way that IRS can confirm that the group credit has not been exceeded is to add up all of the credits claimed by individual members and compare that to the group credit amount. In specifying that controlled groups be treated as single taxpayers for purposes of the credit, Congress clearly wanted to ensure that a group, as a whole, exceeded its base spending amount before it could earn the credit. It is not clear that Congress was concerned that each member have an incentive to exceed its own base. For groups in which individual members determine their own research budgets, the allocation rules can affect aggregate group research spending because they affect the incentives that each member faces. 
Therefore, if one of the allocation methods on average provides higher marginal incentives to individual group members, then applying that method could result in higher overall research spending. However, neither the stand-alone credit allocation method nor the gross QRE allocation method is unequivocally superior in terms of the marginal incentives that they provide to individual members. Each of the two methods performs better than the other in certain situations that are likely to be common among actual taxpayers. Data are not available that would allow us to say whether one of the methods would result in higher overall research spending than the other. For those groups in which the aggregate research spending of all members is determined by group-level management, the only way that the allocation rules can affect the credit’s incentive is if they allow the shifting of credits from members without current tax liabilities to those with tax liabilities. If the group credit is computed according to the method that yields the largest credit, then an additional dollar of spending by any group member will increase the group credit by the same amount, regardless of how the group credit total is allocated among members. However, if group management were able to shift credits from tax loss members to those with positive liabilities, the group would be able to use more of its aggregate credit immediately, rather than carrying it forward to future years. The effect of this type of shifting on the efficiency of the credit should be relatively minor because, when a credit is carried forward, the benefit to the taxpayer and the cost to the government are both discounted to the same degree. In any case, a controlled group’s ability to target credit shares to members with positive tax liabilities should not be greater under the gross QRE allocation method than under the stand-alone credit allocation method. 
The marginal incentive that a particular member of a controlled group would face under alternative group credit allocation methods depends on multiple factors, including:

1. Which credit method (regular or alternative simplified credit (ASC)) is used to compute the group credit;
2. Which credit method yields the highest stand-alone credit for the member;
3. What, if any, base constraints apply to whichever credit is used;
4. Whether or not the member is allowed to use its highest stand-alone credit method;
5. How the size of the member’s stand-alone credit compares to its current-year qualified research expenses (QRE); and
6. How the member’s share of the group’s total QREs compares to its share of the sum of all members’ stand-alone credits.

In the case where a controlled group uses the regular credit method to compute its group credit and an individual member earns its highest stand-alone credit under the regular credit method and the group credit is less than or equal to the sum of the members’ stand-alone credits, the marginal incentive for that member to spend an additional dollar on research under the current rules (MERSA) can be computed as:

MERSA = [(ISAC + mrm) / (ISUMSAC + mrm)] × (IGC + mrg) - (ISAC / ISUMSAC) × IGC

where ISAC is the member’s initial stand-alone credit before making its additional expenditure; ISUMSAC is the initial sum of the stand-alone credits of all group members before the one member spends its additional dollar; IGC is the initial group credit before the member spends the additional dollar; mrm is the applicable marginal rate of credit for the member’s stand-alone credit; and mrg is the applicable marginal rate of credit for the group credit. The bracketed part of this formula shows the member’s share of the group credit after spending an additional dollar on research; the final part of the formula shows the member’s share before the additional expenditure. The difference between the two parts equals the marginal benefit that the member receives for spending the additional dollar. 
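The MERSA calculation for this case can be checked numerically. A minimal sketch with illustrative numbers of our own (variable names follow the text; the 20-cent rates reflect the regular credit's statutory rate):

```python
def mersa(isac, isumsac, igc, mrm, mrg):
    """MERSA when the group credit does not exceed the sum of the
    members' stand-alone credits: the member's share of the increased
    group credit after spending one more dollar, minus its share before.

    isac / isumsac: member's and all members' initial stand-alone credits;
    igc: initial group credit;
    mrm / mrg: marginal credit rates for the member's stand-alone
    credit and for the group credit.
    """
    after = (isac + mrm) / (isumsac + mrm) * (igc + mrg)
    before = (isac / isumsac) * igc
    return after - before

# A member holding 25% of the stand-alone total, with the regular
# credit's 20-cent marginal rate at both the member and group level.
print(mersa(isac=25.0, isumsac=100.0, igc=80.0, mrm=0.20, mrg=0.20))
```

In this example the member keeps most, but not all, of the 20-cent marginal group credit its extra dollar generates; the remainder leaks to the other members through the reallocation of shares.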
If the group credit exceeds the sum of the stand-alone credits, then the formula for MERSA becomes:

MERSA = mrm + [(IQRE + 1) / (ISUMQRE + 1)] × (IGC + mrg - (ISUMSAC + mrm)) - (IQRE / ISUMQRE) × (IGC - ISUMSAC)

The first term on the right-hand side of the formula, “mrm,” represents the member’s share of that portion of the group credit that equals the sum of the stand-alone credits. The remainder of the formula shows the member’s share of the excess of the group credit over the sum of the stand-alone credits. The bracketed term shows the member’s share of the excess portion of the credit after spending an additional dollar on research; the final term shows the member’s share before the additional expenditure. The marginal incentive that this same member would face if the entire group credit were allocated according to each member’s share of the group’s gross QREs (MERQ) can be computed as follows:

MERQ = [(IQRE + 1) / (ISUMQRE + 1)] × (IGC + mrg) - (IQRE / ISUMQRE) × IGC

where IQRE is the member’s initial QREs before making its additional expenditure; ISUMQRE is the initial sum of the QREs of all group members before the one member spends its additional dollar; and IGC is, again, the initial group credit before the member spends the additional dollar. This formula is the same regardless of whether IGC is less than, equal to, or greater than ISUMSAC. The computation of MERs for group members when either the group or the member uses the ASC is more complex than in the case of the regular credit because each dollar a firm spends in the current year will affect its current-year credit as well as its credits in the next three years. The MER is the present value sum of these four separate effects. 
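The gross-QRE counterpart can be sketched the same way, which makes it easy to compare the two allocation methods for a given member. An illustrative sketch of our own (the numbers are not from the report's simulations):

```python
def merq(iqre, isumqre, igc, mrg):
    """MERQ: the member's share of the increased group credit after
    one more dollar of QREs, minus its share before, when the entire
    group credit is allocated by gross QRE shares.

    iqre / isumqre: member's and all members' initial QREs;
    igc: initial group credit; mrg: marginal rate of the group credit.
    """
    after = (iqre + 1.0) / (isumqre + 1.0) * (igc + mrg)
    before = (iqre / isumqre) * igc
    return after - before

# A member with 60% of group QREs under a gross-QRE allocation.
print(merq(iqre=600.0, isumqre=1000.0, igc=80.0, mrg=0.20))
```

Evaluating both functions for the same member under different assumed shares is one way to reproduce the kind of case-by-case comparison the text discusses: whichever method gives the member the larger value provides the stronger marginal incentive in that situation.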
In the case where a controlled group uses the ASC method to compute its group credit and an individual member earns its highest stand-alone credit under the ASC method and the group credit is less than or equal to the sum of the members’ stand-alone credits, the current-year effect when that member spends an additional dollar on research under the current rules can be computed as:

CY Effect = [(ISAC + mrm) / (ISUMSAC + mrm)] × (IGC + mrg) - (ISAC / ISUMSAC) × IGC,

which is similar to the first MERSA formula introduced above, except in this case both mrm and mrg will equal 0.14. The marginal incentive effect in the following year can be computed as:

Next Year Effect = [(ISAC1 – (1/6) × mrm) / (ISUMSAC1 – (1/6) × mrm)] × (IGC1 – (1/6) × mrg) – (ISAC1 / ISUMSAC1) × IGC1

The “1” at the end of the variable names indicates that they represent the values of those variables in the first year into the future. The bracketed portion of the formula shows how the member’s share of the sum of all group members’ stand-alone credits for the next year would change if the member increased its spending by $1 this year. The factor (IGC1 – (1/6) × mrg) shows that the member’s spending also reduces next year’s group credit that is allocated among the members. The final portion, (ISAC1 / ISUMSAC1) × IGC1, is the amount of the group credit that the member would have received next year without the additional spending this year. Similar effects would occur in the 2 subsequent years. The net incentive provided to the member is obtained by discounting the three future effects and adding them to the current-year effect. The current-year incentive effect that this same member would face if the entire group credit were allocated according to each member’s share of the group’s gross QREs can be computed as follows:

CY Effect = [(IQRE + 1) / (ISUMQRE + 1)] × (IGC + mrg) - (IQRE / ISUMQRE) × IGC,

which is the same as for the regular credit, except for the value of mrg. 
The effect in the following year would be:

Next Year Effect = (IQRE1 / ISUMQRE1) × (IGC1 – (1/6) × mrg) – (IQRE1 / ISUMQRE1) × IGC1.

The member’s additional spending this year does not affect its share of the group’s total spending next year, but it does increase the base for next year’s group credit and, thereby, reduces the amount of credit that gets allocated to members. Again, this latter effect would be repeated in the subsequent 2 years. The formulas for the marginal incentives when the ASC is used and the group credit exceeds the sum of the stand-alone credits are more complicated than those above and are not needed to make the basic point that there are common situations in which each credit allocation method provides a higher incentive than the other. One can run numerical simulations with the various formulas for MERSA and MERQ to identify common situations in which each allocation method provides a higher marginal incentive to a member than the other method. The cases identified in table 20 are simply broad examples and do not cover all situations in which one or the other allocation method is superior; however, they are sufficient to demonstrate that each of the allocation methods performs better than the other in different situations that are likely to be common to actual taxpayers. For example, when a member of a group is subject to the 50-percent base constraint, the stand-alone credit method provides that member a larger incentive when the member’s share of the sum of all members’ stand-alone credits is greater than the member’s share of the group’s gross QREs; the gross QRE method provides a greater incentive when the converse is true. In 2004, approximately 75 percent of all regular credit users were subject to the 50 percent minimum base constraint. 
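The present-value combination of the four yearly ASC effects can be sketched for the gross-QRE case. This is an illustrative calculation under simplifying assumptions of our own (a 5 percent discount rate and a member whose QRE share and group credit stay constant in future years):

```python
def asc_mer_gross_qre(iqre, isumqre, igc, rate=0.14, discount=0.05):
    """Present-value marginal incentive under the ASC with a gross-QRE
    allocation.  One extra dollar of QREs raises this year's group
    credit by the 14-percent ASC rate, but raises the base (half the
    average of the prior 3 years' QREs) in each of the next 3 years,
    cutting each of those years' group credit by rate/6.  Assumes the
    member's QRE share is constant over the period.
    """
    share = iqre / isumqre
    cy_effect = (iqre + 1.0) / (isumqre + 1.0) * (igc + rate) - share * igc
    future = sum(-share * rate / 6.0 / (1.0 + discount) ** t
                 for t in (1, 2, 3))
    return cy_effect + future

# The base clawback in the 3 later years leaves the net incentive
# well below the 14-cent current-year rate.
print(asc_mer_gross_qre(iqre=400.0, isumqre=1000.0, igc=70.0))
```

Varying the assumed shares and rates in sketches like this is essentially what the numerical simulations behind table 20 do: for some inputs this figure exceeds the stand-alone method's, and for others it falls short.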
In addition to the contact named above, James Wozny, Assistant Director, and Ardith Spence, Susan Baker, Sara Daleski, Kevin Daly, Mitch Karpman, Donna Miller, Cheryl Peterson, and Steven Ray made key contributions to this report.
The tax credit for qualified research expenses provides significant subsidies to encourage business investment in research intended to foster innovation and promote long-term economic growth. Generally, the credit provides a subsidy for research spending in excess of a base amount, but concerns have been raised about its design and administrability. The Government Accountability Office (GAO) was asked to describe the credit's use, determine whether it could be redesigned to improve the incentive to do new research, and assess whether recordkeeping and other compliance costs could be reduced. GAO analyzed alternative credit designs using a panel of corporate tax returns and assessed administrability by interviewing Internal Revenue Service (IRS) and taxpayer representatives. Large corporations have dominated the use of the research credit, with 549 corporations with receipts of $1 billion or more claiming over half of the $6 billion of net credit in 2005 (the latest year available). In 2005, the credit reduced the after-tax price of additional qualified research by an estimated 6.4 to 7.3 percent. This percentage measures the incentive intended to stimulate additional research. The incentive to do new research (the marginal incentive) provided by the credit could be improved. Based on analysis of historical data and simulations using the corporate panel, GAO identified significant disparities in the incentives provided to different taxpayers, with some taxpayers receiving no credit and others eligible for credits of up to 13 percent of their incremental spending. Further, a substantial portion of credit dollars is a windfall for taxpayers, earned for spending they would have done anyway, instead of being used to support potentially beneficial new research. An important cause of this problem is that the base for the regular version of the credit is determined by research spending dating back to the 1980s. 
Taxpayers now have an "alternative simplified credit" option, but it provides larger windfalls to some taxpayers and lower incentives for new research. Problems with the credit's design could be reduced by eliminating the regular credit and modifying the base of the alternative simplified credit to reduce windfalls. Credit claims have been contentious, with disputes between IRS and taxpayers over what qualifies as research expenses and how to document expenses. Insufficient guidance has led to disputes over the definitions of internal use software, depreciable property, indirect supervision, and the start of commercial production. Also disputed is the documentation needed to support a claim, especially in cases affected by changes in the law years after expenses were recorded. Such disputes leave taxpayers uncertain about the amount of credit to be received, reducing the incentive.
The Randolph-Sheppard Act created a vending facility program in 1936 to provide blind individuals with more job opportunities and to encourage their self-support. The program trains and employs blind individuals to operate vending facilities on federal property. While Randolph-Sheppard is under the authority of the Department of Education, the states participating in this program are primarily responsible for program operations. State licensing agencies, under the auspices of the state vocational rehabilitation programs, operate the programs in each state. Federal law gives blind vendors under the program a priority to operate cafeterias on federal property. Current DOD guidance implementing this priority directs that a state licensing agency be awarded a contract if its contract proposal is in the competitive range. In fiscal year 2006, all of the activities of the Randolph-Sheppard program generated $692.2 million in total gross income and had a total of 2,575 vendors operating in every state except for Wyoming. In 1938 the Wagner-O’Day Act established a program designed to increase employment opportunities for persons who are blind so they could manufacture and sell certain goods to the federal government. In 1971, the Javits-Wagner-O’Day Act amended the program to include people with other severe disabilities and allowed the program to provide services as well as goods. The JWOD Act established the Committee for Purchase, which administers the program. The Committee for Purchase is required by law to designate one or more national nonprofit agencies to facilitate the distribution of federal contracts among qualified local nonprofit agencies. The designated national agencies are the National Industries for the Blind and NISH, which represent local nonprofit agencies employing individuals who are blind or have severe disabilities. These designated national agencies charge fees for the services provided to local nonprofit agencies. 
Effective October 1, 2006, the maximum fee is 3.83 percent of the revenue of the contract for the National Industries for the Blind, and 3.75 percent for NISH. The purpose of these fees is to provide operating funds for these two agencies. In fiscal year 2006, more than 600 JWOD nonprofit agencies provided the federal government with goods and services worth about $2.3 billion. The JWOD program provided employment for about 48,000 people who are blind or have severe disabilities. Military dining contracts under the Randolph-Sheppard and JWOD programs provide varying levels of service, ranging from support services to full-food services. Support services include activities such as food preparation and food serving. Full-food service contracts provide for the complete operation of facilities, including day-to-day decision making for the operation of the facility. As of October 17, 2006, DOD had 39 Randolph-Sheppard contracts in 24 different states. These contracts had an annual value of approximately $253 million and were all for full-food services. At the same time, DOD had 53 JWOD contracts valued at $212 million annually. Of these, 39 contracts were for support services and 15 were for full-food service. Figure 1 shows the distribution of Randolph-Sheppard and JWOD contracts with DOD dining facilities across the country. In 1974, amendments to the Randolph-Sheppard Act expanded the scope of the program to include cafeterias on federal property. According to a DOD official, when DOD began turning increasingly to private contractors rather than using its own military staff to fulfill food service functions in the 1990s, state licensing agencies under the Randolph-Sheppard program began to compete for the same full-food services contracts for which JWOD traditionally qualified. This development led to litigation, brought by NISH, over whether the Randolph-Sheppard Act applied to DOD dining facilities. 
Two decisions by federal appeals courts held that the Randolph-Sheppard Act applied because the term “cafeteria” included DOD dining facilities. The courts also decided that if both programs pursued the full-food service contracts for DOD dining facilities, Randolph-Sheppard had priority. Congress enacted section 848 of the National Defense Authorization Act for Fiscal Year 2006 requiring the key players involved in each program to issue a joint policy statement about how DOD food services contracts were to be allocated between the two programs. In August 2006, DOD, Education, and the Committee for Purchase issued a policy statement that established certain guidelines, including the following:

- The Randolph-Sheppard program will not seek contracts for dining support services that are on the JWOD procurement list, and Randolph-Sheppard will not seek contracts for operation of a dining facility if the work is currently being performed under the JWOD program; JWOD will not pursue prime contracts for operation of dining facilities at locations where an existing contract was awarded under the Randolph-Sheppard program (commonly known as the “no-poaching” provision).
- For contracts not covered under the no-poaching provision, the Randolph-Sheppard program may compete for contracts from DOD for full-food services, and the JWOD program will receive contracts for support services. If the needed support services are on the JWOD procurement list, the Randolph-Sheppard contractor is obligated to subcontract for those services from JWOD.
- In affording a priority to a state licensing agency when contracts are competed and the Randolph-Sheppard Act applies, the price of the state licensing agency’s offer will be considered to be fair and reasonable if it does not exceed the best value offer from other competitors by more than 5 percent or $1 million, whichever is less. 
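The price-reasonableness guideline just described reduces to a simple calculation. The sketch below is only illustrative; the function and variable names are ours, not drawn from either program's regulations:

```python
def sla_price_is_fair_and_reasonable(sla_offer, best_value_offer):
    """Apply the pricing guideline described above: the state licensing
    agency's (SLA) offer is considered fair and reasonable if it exceeds
    the best value offer from other competitors by no more than
    5 percent or $1 million, whichever is less.

    Illustrative only -- names and structure are our own.
    """
    allowance = min(0.05 * best_value_offer, 1_000_000)
    return sla_offer <= best_value_offer + allowance
```

For example, against a $30 million best value offer, 5 percent would be $1.5 million, so the $1 million cap governs and an SLA offer of up to $31 million would still be considered fair and reasonable.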
Congress enacted the no-poaching provision in section 856 of the National Defense Authorization Act for Fiscal Year 2007. A recent GAO bid protest decision determined that adherence to the other provisions of the policy statement was not mandatory until DOD and the Department of Education change their existing regulations. As of July 2007, neither agency had completed updating its regulations. The Randolph-Sheppard and JWOD programs use different operating procedures to provide dining services to DOD. For the Randolph-Sheppard program, state licensing agencies act as prime contractors and train and license blind vendors to operate dining facilities. For the JWOD program, the Committee for Purchase uses NISH to act as a central nonprofit agency and match DOD needs for dining services with local nonprofit agencies able to provide the service. JWOD employees generally fill less skilled jobs such as cleaning dining facilities or serving food. Education is responsible for overseeing the Randolph-Sheppard program but relies on state licensing agencies to place blind vendors as dining facility managers. The Department of Education certifies state licensing agencies and is responsible for ensuring that their procedures are consistent with Randolph-Sheppard regulations. According to our survey, state licensing agencies act as prime contractors on Randolph-Sheppard contracts, meaning that they hold the actual contract with DOD. The state licensing agencies are responsible for training blind vendors to serve as dining facility managers and placing them in facilities as new contracting opportunities become available. According to our survey, the state issues the vendor a license to operate the facility upon the successful completion of the training program. Furthermore, many states said this process often includes both classroom training and on-the-job training at a facility. Figure 2 depicts how the Randolph-Sheppard program is generally structured. 
Responding to our survey, state licensing agencies reported that all blind vendors have some level of managerial responsibility for each of the 39 Randolph-Sheppard contracts. Specific responsibilities may include managing personnel, coordinating with military officials, budgeting and accounting, and managing inventory. An official representing state licensing agencies likened the vendor’s role to that of an executive and said the vendor is responsible for meeting the needs of his or her military customer. At one facility we visited, the vendor was responsible for general operations, ensured the quality of food, and helped develop new menu selections. Of the 37 contracts for which the state licensing agencies provided information on whether the blind vendor visits his or her facility, all indicated that the blind vendors visit their facilities, in most cases every day. Additionally, most state licensing agencies told us that they have an agreement with the blind vendor that lays out the state licensing agency’s expectations of the blind vendor and defines the vendor’s job responsibilities. Most state licensing agencies rely on private food service companies to provide the expertise to help operate dining facilities. According to our survey, 33 of the 39 Randolph-Sheppard contracts relied on a food service company—known as a teaming partner—to provide assistance in operating dining facilities. The survey showed that in many cases, the blind vendor and teaming partner form a joint venture company to operate the facility with the vendor as the head of the company. The teaming partner can provide technical expertise and ongoing training, and often extends the vendor a line of credit and insurance for the operation of the facility. 
Officials representing state licensing agencies told us that states are often unable to provide these resources, and for large contracts these start-up costs may be beyond the means of the blind vendor and the state licensing agency. According to our survey, the teaming partner may assist the state in negotiating and administering the contract with DOD. Additionally, state licensing agencies told us that they often enter into a teaming agreement that defines the responsibilities of the teaming partner. For 6 of the 39 contracts, the state licensing agencies reported that the blind vendor operates the dining facility without a teaming partner. We visited one of these locations and learned that the vendor has his own business that he uses to operate the facility. This particular vendor had participated in the Randolph-Sheppard program for almost 20 years and operated various other dining facilities. In our survey, state licensing agencies reported that vendors in about half (20 of 39) of the contracts are required to employ individuals who are blind or have other disabilities, while others have self-imposed goals. In other cases there may be no formal hiring requirements, but the state licensing agency encourages the blind vendor to hire individuals with disabilities. Based on survey responses we received for 30 contracts, we calculated that the percentage of persons with disabilities working at Randolph-Sheppard dining facilities ranged from 3 percent to 72 percent, with an average of 18 percent. The Committee for Purchase works with NISH to match DOD’s need for services with nonprofit agencies able to provide food services. For military food service contracts, NISH acts as a central nonprofit agency and administers the program on behalf of the Committee for Purchase. In this role, NISH works with DOD to determine if it has any new requirements for dining services. 
When it identifies a need, NISH will search for a nonprofit agency that is able to perform the required service. NISH then facilitates negotiations between DOD and the nonprofit agency, and submits a proposal to the Committee for Purchase requesting that the specific service be added to the JWOD procurement list. If the Committee for Purchase approves the addition, DOD is required by the Federal Acquisition Regulation (FAR) to obtain the food service from the entity on the procurement list. In some instances, a private food service company is awarded a military dining facility contract and then subcontracts with a JWOD nonprofit agency to provide either full or support food services. For example, the Marine Corps awarded two regional contracts to Sodexho—a large food service company—to operate its dining facilities on the East and West Coasts. Sodexho is required by its contracts to utilize JWOD nonprofit agencies and uses these nonprofit agencies to provide food services and/or support services at selected Marine Corps bases. Figure 3 depicts the JWOD program structure. Most JWOD employees at military dining facilities perform less skilled jobs as opposed to having managerial roles. At the facilities we visited, we observed that employees with disabilities (both mental and physical) performed tasks such as mopping floors, serving food, and cleaning pots and pans after meals. Officials from NISH said this is generally true at JWOD dining facilities, including facilities where the nonprofit agency provides full-food service. Additionally, we observed—and NISH confirmed—that most supervisors are persons without disabilities. At one facility we visited, for example, the nonprofit supervisor oversees employees with disabilities who are responsible for keeping the facility clean and serving food. The Committee for Purchase requires that agencies associated with NISH perform at least 75 percent of their direct labor hours with people who have severe disabilities. 
For nonprofit agencies with multiple JWOD contracts, the 75 percent direct labor requirement is based on the total for all of these contracts. Thus, the share of direct labor hours on one contract may fall below 75 percent as long as it exceeds 75 percent on other contracts by enough that the total across all contracts meets the requirement. NISH is responsible for ensuring that nonprofit agencies comply with this requirement, and we previously reported that it performs site visits to all local nonprofit agencies every three years in order to ensure compliance with relevant JWOD regulations. At the three JWOD facilities we visited, officials reported that the actual percentage of disabled individuals employed was 80 percent or higher. Table 1 provides a comparison of the Randolph-Sheppard and JWOD programs’ operating procedures. The Randolph-Sheppard and JWOD programs have significant differences in terms of how contracts are awarded and priced, and in the compensation provided to beneficiaries who are blind or have other disabilities. Under the Randolph-Sheppard program, federal law provides for priority for blind vendors and state licensing agencies in the operation of a cafeteria. This priority may come into play when contracts are awarded either by direct noncompetitive negotiations or through competition with other food service companies. Regardless of how the contract is awarded, the prices are negotiated between the state licensing agency and DOD. Under the JWOD program, competition is not a factor because DOD is required to purchase food services from a list maintained by the Committee for Purchase. Contracts are awarded at fair market prices established by the Committee for Purchase. The two programs also differ in terms of how program beneficiaries are compensated. 
Under the Randolph-Sheppard program, blind vendors generally receive a share of the profits, while JWOD beneficiaries receive hourly wages and fringe benefits under federal law or any applicable collective bargaining agreement. Randolph-Sheppard blind vendors received, on average, pretax compensation of about $276,500 annually, while JWOD workers at the three sites visited earned on average $13.15 per hour, including fringe benefits. Although contracts for food services awarded under the Randolph-Sheppard and JWOD programs use the terms and conditions generally required for contracts by the FAR, the procedures for awarding and pricing contracts under the two programs differ considerably. Under the Randolph-Sheppard program, Education’s regulations provide for giving priority to blind vendors in the operation of cafeterias on federal property, provided that the costs are reasonable and the quality of the food is comparable to that currently provided. The regulations provide for two procedures to implement this priority. First, federal agencies, such as the military departments, may engage in direct, noncompetitive negotiations with a state licensing agency. Of the eight Randolph-Sheppard contracts we reviewed in detail, six had been awarded through direct negotiations with the state licensing agency. In most of the eight cases, the contract was a follow-on to an expiring food service contract. The second award procedure involves the issuance of a competitive solicitation inviting proposals from all potential food service providers, including the relevant state licensing agency. The solicitation will specify the criteria for evaluating proposals, such as management capability, past performance, and price, and DOD will use these criteria to evaluate the proposals received. 
When the competitive process is used, DOD policy provides for selecting the state licensing agency for award if its proposal is in the “competitive range.” Of the eight Randolph-Sheppard contracts we reviewed, only two involved a solicitation open to other food service providers, and there was no case in which more than one acceptable proposal was received such that DOD was required to determine a competitive range. The prices of contracts under the Randolph-Sheppard program are negotiated between DOD and the state licensing agency, regardless of whether DOD uses direct negotiations or seeks competitive proposals. Negotiations in either case typically begin with a pricing proposal submitted by the state licensing agency and then involve a comparison of the proposed price with the prices in previous contracts, an independent government estimate, or the prices offered by other competitors, if any. In some cases, DOD will seek the assistance of the Defense Contract Audit Agency (DCAA) in assessing various cost aspects of a proposal. The Randolph-Sheppard contracts we reviewed were generally firm fixed-price. Some had individual line items that provided for reimbursing the food service provider for certain costs incurred, such as equipment maintenance or item replacement. In most cases, the contract was for a base year and provided for annual options (usually four) that may be exercised at the discretion of DOD. Of the 39 Randolph-Sheppard contracts within the scope of our review, the average price for the current year of the contract was about $6.5 million. Table 2 shows the 8 Randolph-Sheppard contracts in our sample with selected contract information. Under Part 8 of the FAR, the JWOD program is a mandatory source of supply, requiring DOD to award contracts to the listed nonprofit entity at fair market prices established by the Committee for Purchase. There is no further competition. 
Table 3 shows the 6 JWOD contracts in our sample with selected contract information. Compensation for Randolph-Sheppard blind vendors is computed differently from compensation paid to JWOD disabled workers. For the Randolph-Sheppard program, blind vendors’ compensation is generally based on a percentage of the profits generated by the dining facilities’ operations. Of the 37 survey responses for which we could determine how blind vendors’ compensation was computed, 34 reported that the vendor’s compensation was computed either entirely, or in part, based on the profits generated by the dining facility contract. For compensation based entirely on the facilities’ profits, the blind vendor received from 51 to 65 percent of the profits. For those blind vendors who were compensated partially based on profits, their compensation was based on fixed fees, administrative fees or salaries, and a percentage of the profits. For the three contracts where compensation was not based on profits, the blind vendors received either a percentage of the contract value or a fixed base fee. Figure 4 shows the annual compensation received by blind vendors for military food services contracts, within specified ranges, and the average compensation for each range. As shown in figure 4, 15 of 38 Randolph-Sheppard blind vendors’ annual compensation was between $100,000 and $200,000. Overall, blind vendors working at DOD dining facilities received average annual compensation of about $276,500 per vendor. These figures are based on pretax earnings. We did not collect compensation information for employees of the blind vendors or employees of the teaming partners. For the JWOD program, for most workers—including those with and without a disability—compensation is determined by either federal law or collective bargaining agreements. 
The Service Contract Act (SCA) was enacted to give employees of contractors and subcontractors labor standards protection when providing services to federal agencies. The SCA requires that, for contracts exceeding $2,500, contractors pay their employees, at a minimum, the wage rates and fringe benefits that have been determined by the Department of Labor to be prevailing in the locality where the contracted work is performed. However, the SCA hourly rate would not be used if there is a collective bargaining agreement that sets a higher hourly wage for selected workers. According to NISH, the collective bargaining hourly rates are, in general, 5 to 10 percent higher than the SCA’s wage rates. Of the six JWOD contracts in our sample, Holloman Air Force Base and the Marine Corps’ eastern and western regional contracts had collective bargaining agreements. For the three JWOD sites visited, we obtained an estimate of the average hourly wages, average hourly fringe benefit rates, and average number of hours worked and computed their annual wages. The average hourly wage for the three JWOD sites was $13.15, including fringe benefits. Table 4 shows the average annual wages that an employee earned. Another law that can affect the disabled worker’s wages is section 14(c) of the Fair Labor Standards Act, which allows employers to pay individuals less than the minimum wage (called special minimum wage rates) if they have a physical or mental disability that impairs their earning or productive capacity. For example, if a 14(c) worker’s productivity for a specific job is 50 percent of that of experienced workers who do not have disabilities that affect their work, and the prevailing wage paid for that job is $10 per hour, the special minimum wage rate for the 14(c) worker would be $5 per hour. None of the three JWOD sites we visited applied the special minimum wage for any of their disabled workers. 
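The 14(c) computation in the example above is a single proportional scaling of the prevailing wage. The following sketch is only illustrative; the names are ours, not from the statute or Department of Labor guidance:

```python
def special_minimum_wage(prevailing_wage, productivity_ratio):
    """Compute a section 14(c) special minimum wage as described above:
    the prevailing wage for the job, scaled by the worker's measured
    productivity relative to experienced workers whose disabilities do
    not affect their work. Illustrative only -- names are our own."""
    return prevailing_wage * productivity_ratio

# The example from the text: 50 percent productivity at a
# $10-per-hour prevailing wage yields $5 per hour.
rate = special_minimum_wage(10.00, 0.50)
```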
The Randolph-Sheppard and JWOD programs have a common goal of serving individuals who are blind or have severe disabilities, and who are generally underrepresented in the workforce. However, these programs operate differently regarding how contracts are awarded and priced, and are designed to serve distinct populations through different means—particularly with respect to compensation for program participants. This is true for contracts with military dining facilities. The blind vendors who participate in the Randolph-Sheppard program seek to become entrepreneurs by gaining experience managing DOD dining facilities. In this respect, although most of these vendors require the assistance of a private food service teaming partner, they are compensated for managing what can be large, complicated food service operations. By contrast, participants in the JWOD program perform work activities that require less skill and experience and might otherwise not be able to secure competitive employment; accordingly, they are compensated at a much lower rate than the Randolph-Sheppard vendors. In this regard, it is apparent that the two programs are designed to provide very different populations with different types of assistance, and thus it is difficult to directly compare them, particularly with respect to compensation. We provided a draft of this report to the Committee for Purchase, the Department of Defense, and the Department of Education for review and comment. The Committee for Purchase had no comments. DOD concurred with the draft and also provided technical comments for our consideration. We considered all of DOD’s technical comments and revised the draft as appropriate. The DOD comment letter is attached as appendix II. The Department of Education provided clarifications and suggestions in a number of areas. 
First, Education was concerned about comparing the earnings of the blind vendors under the Randolph-Sheppard program and the compensation provided to the food service workers under the JWOD program. The agency suggested we compare the earnings of the blind vendors with the earnings of employees of the JWOD nonprofit agencies who perform similar management functions. We agree that there are significant differences in their responsibilities, but we were required to report on the compensation of the “beneficiaries” of the two programs, which are blind managers for the Randolph-Sheppard program and hourly workers for the JWOD program. Our report highlights these differences. Our report also highlights in a number of places the difficulty in comparing the compensation of the two groups of beneficiaries. We were not required to report on the earnings of the management personnel of the nonprofit agencies, and we did not collect this information. Second, Education urged that we fully describe the permitted uses of the set-aside fees charged by the state licensing agencies, and that we recognize that there is a similar assessment under the JWOD program. We have revised the report to point out that the Randolph-Sheppard set-aside may be used to fund the operation of the state licensing agencies. We also added language to a footnote to table 3 to recognize that the JWOD contract amounts include a fee that is used to fund the operations of the central nonprofit agency. Third, Education questioned our description of the price negotiations that occur between DOD and the state licensing agencies. We believe our report is both clear and accurate on this point as written. In addition, DOD did not have any comments or questions about how we described price negotiations for the Randolph-Sheppard program. Fourth, Education questioned our discussion of the numbers of persons with disabilities employed under the two programs. 
Specifically, Education pointed out that the requirement under the JWOD program that at least 75 percent of the direct labor hours be performed by persons with disabilities applies in the aggregate to all work performed by a nonprofit entity, not at the contract level. We have revised the report to reflect this. And finally, Education sought clarification concerning the extent to which commercial food service companies are used as teaming partners under the Randolph-Sheppard program or as subcontractors under the JWOD program. We have revised figures 2 and 3 of the report to more accurately reflect the use of these companies. The comment letter from Education is attached as appendix III. We will send copies of this report to interested congressional committees, the Secretary of Defense, the Secretary of Education, and the Chairperson of the Committee for Purchase, as well as other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact George Scott at (202) 512-7215 or [email protected] or William Woods at (202) 512-8214 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To accomplish our research objectives, we interviewed officials from the Department of Defense (DOD), the Department of Education, the Committee for Purchase, and organizations representing both the Randolph-Sheppard and Javits-Wagner-O’Day (JWOD) programs. We also reviewed pertinent documents and regulations governing both programs. We reviewed a sample of 14 contracts—8 Randolph-Sheppard contracts and 6 JWOD contracts. For these contracts, we requested the source selection memorandum, the acquisition plan, the basic contract, and the statement of work. 
For two of these contracts, the Randolph-Sheppard prime contractor for full-food services subcontracted with a JWOD nonprofit agency for support services. We determined that it was not feasible to review a representative sample of contracts based on our preliminary work, which indicated wide variations in how the two programs are structured and how the Randolph-Sheppard program is administered from state to state. For these reasons, we selected a number of contracts to review in order to ensure representation of both programs, as well as ensure a balance of contracts based on dollar value, size of military facility, branch of the military, and geographic location. As the sample was not representative, results of our review cannot be projected to the entire universe of contracts. In addition, we visited the military installation for 5 of the 14 contracts in our sample in order to observe dining facilities and their operations, as well as interview pertinent officials and staff, including the blind vendor or JWOD agency management whenever possible. Again, these five locations were selected to ensure representation of both programs, as well as variation in geographic location, contract size, and military branch. In terms of beneficiary compensation, we limited our review to Randolph-Sheppard blind vendors and JWOD workers. For the JWOD program, we obtained average hourly wages, average hourly fringe benefits, and average total hours worked during the year for JWOD employees at selected sites. We did not obtain compensation amounts for the managerial employees for any JWOD nonprofit agencies. To obtain information on the relationships between state licensing agencies and blind vendors, we conducted a survey of the 24 state licensing agencies we determined to have Randolph-Sheppard military dining contracts. 
We asked questions regarding the roles and responsibilities of blind vendors, the vendor’s relationship with the state licensing agencies, and the role played by teaming partners. We administered this survey between April and July 2007. We pretested this survey with program directors and modified the survey to take their comments into account. All 24 state licensing agencies responded to our survey, for a response rate of 100 percent, and provided information for 39 military dining facility contracts. Additionally, we requested information for the 40 blind vendors with military dining contracts to determine their annual compensation. For the 39 contracts, there were 40 blind vendors because one contract used two vendors. We received compensation information for 38 of the 40 blind vendors. Jeremy D. Cox (Assistant Director), Richard Harada (Analyst-in-Charge), Daniel Concepcion, Rosa Johnson, and Sigurd Nilsen made significant contributions to all aspects of this report. In addition, Susannah Compton and Lily Chin assisted in writing the report and developing graphics. John Mingus provided additional assistance with graphics. Walter Vance assisted in all aspects of our survey of state licensing agencies as well as providing methodological support. Doreen Feldman, Daniel Schwimer, and Alyssa Weir provided legal support. Federal Disability Assistance: Stronger Federal Oversight Could Help Assure Multiple Programs’ Accountability. GAO-07-236. Washington, D.C.: January 26, 2007.
Randolph-Sheppard and Javits-Wagner-O'Day (JWOD) are two federal programs that provide employment for persons with disabilities through federal contracts. In 2006, participants in the two programs had contracts with the Department of Defense (DOD) worth $465 million annually to provide dining services at military dining facilities. The 2007 National Defense Authorization Act directed GAO to study the two programs. This report examines (1) differences in how the Randolph-Sheppard and JWOD programs provide food services for DOD and (2) differences in how contracts are awarded, prices are set, and program beneficiaries (i.e. persons with disabilities) are compensated. GAO interviewed program officials, conducted a survey of states with Randolph-Sheppard programs, and reviewed eight Randolph-Sheppard and six JWOD contracts. The Randolph-Sheppard and JWOD programs use different procedures to provide food services to DOD. In Randolph-Sheppard, states act as prime contractors, and train and license blind individuals to act as managers of dining facilities. In most cases, the blind vendor relies on a food service company--known as a teaming partner--to assist in operations, provide expertise, and help with start-up costs. About half of the blind vendors are required to employ other persons with disabilities. JWOD is administered by an independent federal agency called the Committee for Purchase from People Who are Blind or Severely Disabled (Committee for Purchase). The Committee for Purchase engages a central nonprofit agency to match DOD's needs with services provided by local nonprofit agencies. Most of the individuals working for these local nonprofit agencies are employed in less skilled jobs such as serving food or washing dishes. The Randolph-Sheppard and JWOD programs differ significantly in the way DOD dining contracts are awarded, how prices are set, and how participants are compensated. 
For Randolph-Sheppard, DOD awards contracts to the states either through direct negotiations or competition with other food service companies. In either case, DOD and the states negotiate the prices based on factors such as historical prices and independent government estimates. Under JWOD, competition is not a factor because DOD is required to purchase services it needs from a list maintained by the Committee for Purchase, which establishes fair market prices for these contracts. In terms of compensation, Randolph-Sheppard blind vendors generally received a percentage of contract profits, averaging about $276,500 per vendor annually. JWOD beneficiaries are generally paid hourly wages according to rules set by the federal government. For the three sites we visited, we estimate that beneficiaries received an average wage of $13.15 per hour, including fringe benefits. Given the differences in the roles of the beneficiaries of these two programs, comparisons of their compensation have limited value.
Unlike conventional, or subtractive, manufacturing processes—such as drilling or milling—that create a part or product by cutting away material from a larger piece, additive manufacturing builds a finished piece in successive layers, generally without the use of molds, casts, or patterns. Additive manufacturing can potentially lead to less waste material in the manufacturing process, as shown in figure 1. ASTM International, an international standards development organization, has identified seven categories of additive manufacturing processes to group the different types of technologies used, as shown in table 1. According to DOD officials, the first six of the categories described are the ones of greatest use to DOD. In August 2012, as part of a presidential initiative focused on advanced manufacturing, America Makes—the National Additive Manufacturing Innovation Institute—was established as a public-private partnership between federal government agencies (including DOD), private industry, and universities to collaboratively address additive manufacturing challenges; accelerate the research, development, and demonstration of additive manufacturing; and transition that technology to the U.S. manufacturing sector. According to the government program manager of America Makes, funding to establish America Makes consisted of a federal government investment of $55 million (fiscal years 2012 through 2017), and the institute is managed by the U.S. Air Force Research Laboratory. The official also stated that America Makes receives additional funding through publicly and privately funded projects. Multiple DOD components—at the OSD, military department (Army, Navy, and Air Force), Defense Logistics Agency, and Defense Advanced Research Projects Agency levels—are involved in additive manufacturing efforts.
At the OSD level, the Office of the Assistant Secretary of Defense for Research and Engineering develops policy and provides guidance for all DOD activities on the strategic direction for defense research, development, and engineering priorities; it also coordinates with the Office of the Deputy Assistant Secretary of Defense for Manufacturing and Industrial Base Policy to leverage independent research and development activities, such as additive manufacturing research activities. The Defense Advanced Research Projects Agency’s Defense Sciences Office and the military departments—through the U.S. Army Research, Development and Engineering Command (RDECOM); the Office of Naval Research; and the U.S. Air Force Research Laboratory—have laboratories to conduct additive manufacturing research activities. According to Navy officials, the military depots use additive manufacturing for a variety of applications using various material types. These efforts largely include polymer, metal, and ceramic-based additive manufacturing processes for rapid prototyping, tooling, repair, and development of non-critical parts. The DOD components lead and conduct activities related to several types of technology research and development and advancements. Additive manufacturing is one of these activities, and the components are involved to the extent that some of the broader activities include additive manufacturing. See appendix II for a more detailed description of the key DOD components involved with additive manufacturing. In October 2014, with the assistance of the National Academies, we convened a forum of officials from federal government agencies, including DOD; private-sector organizations; academia; and non-governmental organizations to discuss the use of additive manufacturing for producing functional parts, including opportunities, key challenges, and key considerations for any policy actions that could affect the future use of additive manufacturing for producing such parts.
In June 2015 we issued a report summarizing the results of that forum. During the forum, participants noted that the use of additive manufacturing has produced benefits such as reduced time to design and produce functional parts; the ability to produce complex parts that cannot be made with conventional manufacturing processes; the ability to use alternative materials with better performance characteristics; and the ability to create highly customized, low-volume parts. Furthermore, forum participants identified as a key challenge the need to ensure the quality of functional parts—for example, ensuring that manufacturers can repeatedly make the same part and meet precision and consistency performance standards on both the same machine and different machines. During the forum, participants also indicated that before a product can be certified, manufacturers must qualify the materials and processes used to make the part, which involves manufacturers conducting tests and collecting data under very controlled conditions. For example, DOD requires that parts it purchases, such as aircraft engine parts, meet specific standards or performance criteria. Manufacturers might need to have these parts certified to meet DOD’s standards. According to participants in the forum, the National Institute of Standards and Technology is funding research to provide greater assurance with regard to the quality of parts produced using additive manufacturing. It is also leading efforts on additive manufacturing standards through ASTM International’s committee on additive manufacturing, which was formed in 2009. Participants also identified some future applications for additive manufacturing, including constructing tooling for conventional manufacturing lines, enhancing education, and enhancing supply chain management.
DOD in its May 2014 briefing document on additive manufacturing addressed the three directed elements: (1) potential benefits and constraints of additive manufacturing; (2) how the additive manufacturing process could or could not contribute to DOD missions; and (3) what technologies being developed at America Makes are being transitioned for DOD use. In summary, we found the following: First, the briefing document noted potential benefits and constraints. For example, DOD noted that additive manufacturing can in some cases yield lighter parts, such as for aircraft, thereby potentially lowering fuel costs. DOD also noted a potential constraint: it has yet to establish qualification and certification protocols for additively manufactured parts. Second, the briefing document noted potential contributions to DOD’s mission. For example, DOD noted that additive manufacturing may be capable of producing equivalent replacements for obsolete parts. Third, the briefing document identified America Makes projects that DOD anticipated would be transitioned for DOD use. For example, DOD noted a collaborative effort involving Pennsylvania State University’s Applied Research Lab, Pratt & Whitney, Lockheed Martin, and General Electric Aviation on thermal imaging for process monitoring and control of additive manufacturing. DOD noted that this project would help enable DOD to ensure process and part repeatability, and would reduce the costs and time for post-process inspection. As shown in table 2, the DOD briefing document noted additional examples of potential benefits and constraints; potential contributions to DOD’s mission; and some other America Makes projects that DOD anticipates will be transitioned for its own use. DOD has taken steps to implement additive manufacturing to improve performance and combat capability, as well as achieve associated cost savings.
We obtained information on multiple efforts being conducted across DOD components. For example, the Army used additive manufacturing, instead of conventional manufacturing, to prototype aspects of a Joint Service Aircrew Mask to test a design change, and it reported thousands of dollars saved in design development and potential combat capability improvements. According to a senior Navy official, to improve performance, the Navy additively manufactured circuit card clips for servers on submarines, as needed, because the original equipment manufacturer no longer produced these items. This official also stated that the Navy is researching ways to produce a flight-critical part by 2017. According to a senior Air Force official, the Air Force is researching potential performance improvements that may be achieved by embedding devices such as antennas within helmets through additive manufacturing that could enable improved communications. According to Defense Logistics Agency officials, they have taken steps to implement the technology by additively manufacturing the casting cores for blades and vanes used on gas turbine engines. According to a senior Walter Reed National Military Medical Center official, the Center has used additive manufacturing to produce cranial implants for patients. See additional information on DOD’s additive manufacturing efforts below, listed by component. DOD uses additive manufacturing for design and prototyping and for some production—for example, parts for medical applications—and it is conducting research to determine how to use the technology for new applications, such as printing electronic components for circuitry and antennas.
DOD is also considering ways in which it can use additive manufacturing in supply chain management, including for repair of equipment and production of parts in the field so as to reduce the need to store parts; for production of discontinued or temporary parts as needed for use until a permanent part can be obtained; and for quickly building parts to meet mission requirements. According to DOD officials, such usage will enable personnel in the field to repair equipment, reduce equipment down-time, and execute their missions more quickly. The U.S. Army RDECOM Armament Research, Development and Engineering Center, according to Army officials, plans to achieve performance improvements by developing an additively manufactured material solution for high-demand items such as nuts and bolts, providing the engineering analysis and qualification data required to make these parts by means of additive manufacturing capability at the point of need in theater. These officials stated that this solution could potentially reduce the logistics burden on a unit and improve its mission readiness, thus enabling enhanced performance. The U.S. Army RDECOM Armament Research, Development and Engineering Center, in conjunction with the Defense Logistics Agency, evaluated high-demand parts in the Afghanistan Theater of Operations and determined that nuts and bolts were high-demand parts that were often unavailable due to the logistical challenges of shipping parts. According to Army officials, additive manufacturing offers customers the opportunity to enhance value when the lead time needed to manufacture and acquire a part can be reduced. According to these officials, in military logistics operations in theater, the manufacture of parts to reduce the lead time to acquire a part is of paramount importance.
As of August 2015 the Center had additively manufactured several nuts and bolts to demonstrate that they can be used in equipment (see figure 2), and it plans to fabricate more of these components for functional testing and qualification. The officials also stated that this testing will verify that the additively manufactured components can withstand the rigors of their intended applications. The U.S. Army RDECOM Edgewood Chemical Biological Center prototyped aspects or parts of a Joint Service Aircrew Mask (as shown in figure 3) via additive manufacturing to test a design change, which officials stated has resulted in thousands of dollars saved and potential combat capability improvements. A new mask ensemble was built using these parts and was worn by pilots to evaluate comfort and range of vision. Once the design was confirmed, the parts were produced using conventional manufacturing. Because this example was in a prototyping phase, only low quantities were needed for developmental testing, and additive manufacturing combined with vacuum silicone/urethane casting allowed the Army to obtain a quantity of parts that was near production level. According to Army officials, if conventional production-level tools (also called injection molds) had been developed and used in this prototyping phase, costs might have ranged from $30,000 to $50,000, with a 3- to 6-month turnaround. These officials stated that additive manufacturing and urethane casting cost a fraction of that amount—approximately $7,000 to $10,000—with a 2- to 3-week turnaround. Had the Army alternatively developed a production tool at this proof-of-concept phase, time and financial investment might have been wasted if the concept had to be changed or started over from the beginning of the design phase, according to the officials. The U.S.
Army RDECOM Edgewood Chemical Biological Center achieved combat capability improvements by designing holders (as shown in figure 4), through additive manufacturing, to carry pieces of sensor equipment in the field, according to Army officials. The Center coordinated with the U.S. Army Research Laboratory to develop the holder to carry a heavy hand-held improvised explosive device detection sensor. According to Army officials, the lab wanted a holder that would cradle the handle so as to distribute more weight to the soldier’s vest and back rather than confining it to the soldier’s forearm. Officials at the Center stated that they had additively manufactured many prototypes that were tested by soldiers at various locations around the country within 1 to 2 weeks. According to Army officials, after achieving positive testing results the Center used additive manufacturing to produce the molds that otherwise would have added weeks or months to the process via conventional manufacturing. The final products—10,000 plastic holders—were then produced at the Center through conventional manufacturing. The Army Rapid Equipping Force achieved combat capability improvements by using additive manufacturing, as part of its expeditionary lab capability, to design valve stem covers for a military vehicle, according to Army officials. An Army unit had experienced frequent failures due to tire pressure issues on its Mine-Resistant Ambush Protected vehicles caused by exposed valve stems; for example, during missions, the tires would deflate when the valve stem was damaged by rocks or fixed objects. The additive manufacturing interim solution was developed in just over 2 weeks because the additive manufacturing process allowed the team to prototype a solution more quickly, according to Army Rapid Equipping Force officials.
As shown in figure 5, the Army additively manufactured prototypes for versions 1 through 4 of the covers before a final part was produced in version 5 through conventional manufacturing processes. The Army Rapid Equipping Force also achieved combat capability improvements, through its expeditionary lab, by producing prototypes of mounting brackets using additive manufacturing, according to Army officials. Army soldiers using mine detection equipment required illumination around the sensor sweep area during low-visibility conditions in order to avoid striking unseen objects and damaging the sensor. Using additive manufacturing, a mounting bracket for attaching flashlights to mine detectors was prototyped in several versions, as shown in figure 6. According to Army officials, because requests exceeded the expeditionary lab’s production capability, the Army coordinated with a U.S. manufacturer to additively manufacture 100 mounting brackets at one-fourth the normal cost. Tobyhanna Army Depot achieved performance improvement by using additive manufacturing to produce dust caps for radios, according to Army officials, as shown in figure 7. These officials stated that a shortage of these caps had been delaying the delivery of radios to customers. Getting the part from a vendor would have taken several weeks, but the depot additively manufactured 600 dust caps in 16 hours. According to the depot officials, the dollar savings achieved were of less importance than the fact that they were able to meet their schedule. The Navy is increasingly focused on leveraging additive manufacturing for the production of replacement parts to improve performance, according to Navy officials. When the original equipment manufacturer was no longer producing these parts, the Navy used additive manufacturing to create a supply of replacement parts to keep the fleet ready.
This was the case for the Naval Undersea Warfare Center-Keyport, which used additive manufacturing to replace a legacy circuit card clip for servers installed on submarines, as needed (see figure 8). The Navy installed a 3D printer aboard the USS Essex to demonstrate the ability to additively develop and produce shipboard items such as oil reservoir caps, drain covers, training aids, and tools to achieve performance improvements, according to a senior Navy official (see figure 9). According to Navy officials, additive manufacturing is an emerging technology, and shipboard humidity, vibration, and motion may create variances in the prints. Navy officials also stated that while there is not a structured plan to install printers on all ships, having the capability across the fleet is a desired outcome. These officials stated that the Navy plans to install 3D printers on two additional ships. The U.S. Air Force Research Laboratory, according to a senior Air Force official, is researching potential performance improvements that may be achieved by (1) additive manufacturing of antennas and electronic components; and (2) embedding devices (such as antennas) within helmets and other structures through additive manufacturing, as shown in figure 10, thereby potentially enabling improved communication. The laboratory has a six-axis printing system that has demonstrated the printing of antennas on helmets and other curved surfaces, according to the official. The official also stated that the laboratory conducts research and development in materials and manufacturing in order to advance additive manufacturing technology such that it can be used affordably and confidently for Air Force and DOD systems. Additionally, according to Air Force officials, the Air Force sustainment organizations use additive manufacturing for tooling and prototyping.
According to the December 2014 DOD Manufacturing Technology document, the Defense Logistics Agency projected cost savings of 33-50 percent for additively manufacturing casting core tooling, as shown in figure 11. The Defense Logistics Agency—working with industry, including Honeywell, and leveraging the work of military research labs—helped refine a process to additively manufacture the casting cores for engine airfoils (blades and vanes) used on gas turbine engines, according to Defense Logistics Agency officials. According to these officials, printing these casting cores will help reduce the cost and production lead times of engine airfoils, especially when tooling has been lost or scrapped or when there are low quantity orders for legacy weapon systems. The Walter Reed National Military Medical Center achieved performance improvements by additively manufacturing items that include customized cranial plate implants and medical tooling and surgical guides, according to a senior official within the Center. According to the official, additive manufacturing offers a more flexible and applicable solution to aid surgeons and provide benefits to patients. Since 2003, according to the official, the Walter Reed National Military Medical Center has additively manufactured more than 7,000 medical models, more than 300 cranial plates, and more than 50 custom prosthetic and rehabilitation devices and attachments, as well as simulation and training models. The official stated that using additive manufacturing enables each part to be made specifically for the individual patient’s anatomy, which results in a better fit and a more structurally sound, longer-lasting implant, leading in turn to better medical outcomes with fewer side effects. Furthermore, the official stated that additive manufacturing has been used for producing patient-specific parts, such as cranial implants, in 1 to 5 days, and these parts are being used in patients. See figure 12.
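Several of the cost figures reported in this section lend themselves to simple comparisons. An illustrative sketch follows; the month-to-week conversion for the mask turnaround times and the $100,000 baseline tooling cost are assumptions introduced for illustration, not figures from the report.

```python
# Illustrative arithmetic only, using ranges cited earlier in this section.

# Mask prototyping (Edgewood): conventional injection-mold tooling vs.
# additive manufacturing with urethane casting. Week values assume roughly
# 4.3 weeks per month (an assumption).
conv_cost = (30_000 + 50_000) / 2      # midpoint of $30,000-$50,000
am_cost = (7_000 + 10_000) / 2         # midpoint of $7,000-$10,000
conv_weeks = (13 + 26) / 2             # midpoint of 3- to 6-month turnaround
am_weeks = (2 + 3) / 2                 # midpoint of 2- to 3-week turnaround
print(f"Mask tooling: {conv_cost / am_cost:.1f}x the cost, "
      f"{conv_weeks / am_weeks:.1f}x the schedule")

# Casting-core tooling (Defense Logistics Agency): projected 33-50 percent
# savings applied to a hypothetical $100,000 baseline (an assumption).
baseline = 100_000
for rate in (0.33, 0.50):
    print(f"At {rate:.0%} savings: ${baseline * (1 - rate):,.0f}")
```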
DOD uses various mechanisms to coordinate on additive manufacturing efforts, but it does not systematically track components’ efforts department-wide. DOD components share information regarding additive manufacturing through mechanisms such as working groups and conferences that, according to DOD officials, provide opportunities to discuss challenges experienced in implementing additive manufacturing—for example, qualifying materials and certifying parts. However, DOD does not systematically track additive manufacturing efforts, to include (1) all projects, henceforth referred to as activities, performed and resources expended by DOD; and (2) results of their activities, including actual and potential performance and combat capability improvements, cost savings, and lessons learned. DOD has not designated a lead or focal point at the OSD level to systematically track and disseminate the results of these efforts, including activities and lessons learned, department-wide. Designating such a lead would be consistent with federal internal control standards; without one, DOD officials may not obtain the information they need to leverage ongoing efforts. DOD components use various mechanisms to coordinate information on successes and challenges of additive manufacturing along with other aspects of additive manufacturing. These mechanisms include coordination groups, DOD collaboration websites (such as milSuite), conferences, and informal meetings to coordinate on additive manufacturing-related efforts. Some of these groups or meetings focus on broad issues, such as manufacturing technologies in general (in which additive manufacturing may be included), and others focus solely on additive manufacturing.
Participants in these groups have included officials from OSD, the military departments, other governmental agencies, private industry, and universities that support the research and development and operational use of additive manufacturing. DOD officials explained that these groups and conferences provide opportunities to discuss challenges experienced by the components in implementing additive manufacturing, such as the challenges of qualifying materials and certifying parts, and to discuss the efforts they are making to address these challenges, as well as other aspects of additive manufacturing. See table 3 for examples of eight coordination groups we identified that meet to discuss ongoing additive manufacturing efforts, including ways to address technical challenges. Furthermore, DOD components participate in defense manufacturing conferences and defense additive manufacturing symposiums; informal meetings; and America Makes discussions, known as program management reviews. We observed the September 2014 America Makes program management review, during which representatives from the government, private industry, and academia discussed the status of the America Makes research projects and their additive manufacturing efforts. We also observed an additive manufacturing meeting that included participants from OSD, the Army, the Navy, and the Defense Logistics Agency to discuss the status of their ongoing additive manufacturing efforts and collaboration opportunities. For example, the Navy and the Defense Logistics Agency discussed their efforts to survey existing parts that would be candidates for additive manufacturing. The officials stated that they are willing to share information but are focusing on their service-specific efforts. Additionally, DOD participates in the Government Organization for Additive Manufacturing (GO Additive), which is an informal, government-wide voluntary-participation group.
The purpose of the group is, among other things, to facilitate collaboration among individuals from federal government organizations, such as DOD, that have an interest in additive manufacturing. According to Air Force officials, the group may develop a list of qualified materials and certified parts. Although DOD components use various mechanisms to coordinate information on additive manufacturing, DOD does not systematically track the components’ additive manufacturing efforts department-wide. Specifically, DOD does not systematically track additive manufacturing efforts, to include (1) all activities performed and resources expended by DOD, including equipment and funding amounts; and (2) results of their activities, including actual and potential performance and combat capability improvements, cost savings, and lessons learned. Standards for Internal Control in the Federal Government state that it is important for organizations to have complete, accurate, and consistent data to inform policy, document performance, and support decision making. The standards also call for management to track major agency achievements, and to communicate the results of this information. In addition, our past work has identified practices for enhancing and sustaining agency coordination efforts that include, among other things, designating leadership, which is a necessary element for a collaborative working relationship. However, DOD officials whom we interviewed could not identify a specific DOD entity that systematically tracked all activities or resources across the department, including equipment and funding amounts, related to additive manufacturing. Further, while Army, Navy, and Air Force Manufacturing Technology program officials provided us a list of their respective additive manufacturing activities and some funding information, variances in the types of information provided meant that the information was not comparable across the services.
Since no one DOD entity, such as OSD, systematically tracks all aspects of additive manufacturing, DOD officials could not readily tell us the activities underway or the amount of funding being used for DOD’s additive manufacturing efforts. According to an OSD official within the Office of Manufacturing and Industrial Base Policy, the department does not identify investments in additive manufacturing in its budget exhibits to this level of detail. The official stated that the department identifies overall manufacturing technology investments, but it does not specifically break out additive manufacturing. In addition to the research and development efforts, the official stated that DOD has ongoing additive manufacturing activities within the operational communities, such as military depots and arsenals, and it does not systematically track these activities either. Additionally, while DOD components share information on the additive manufacturing equipment they own, DOD does not systematically track these machines to ensure that the components are aware of each other’s additive manufacturing equipment. DOD has additive manufacturing machines whose costs range from a few thousand dollars to millions of dollars. In a constrained budget environment, it is also important to leverage these resources. According to officials within the U.S. Army RDECOM, through coordination groups, such as that command’s community of practice, officials share and understand each other’s equipment and capabilities. In addition, according to these officials, the Navy and Air Force have provided information to the Army regarding their respective departments’ equipment. According to Army and Navy officials, the Army and Navy also have equipment lists posted on a DOD collaboration website called milSuite. According to an official at the U.S. Air Force Research Laboratory, the Air Force does not have an official inventory listing of additive manufacturing equipment.
However, the official added that a team recently completed a tasking that included visits to the Air Logistics complexes to determine the equipment and capabilities available and in use. Furthermore, DOD does not systematically track actual or potential performance and combat capability improvements, cost savings, or lessons learned. DOD component officials we interviewed have shared—within their respective components and to a lesser degree with other components—information on their individual performance and combat capability improvements, as well as on some cost savings attributable to additive manufacturing. For example, according to Army Rapid Equipping Force officials, they participate in a community of practice to share their lessons learned so that others can be informed about the needs of end users when developing their research priorities. The various DOD components are at different stages of research and implementation. However, DOD does not have a systematic process to obtain and disseminate the results and lessons learned across the components. Without this information, the department may not be able to leverage the components’ respective experiences. U.S. Army RDECOM officials agreed that it is important to improve cross-communication among the services and agencies, to avoid having to reinvent advances while they continue to expand the implementation of these technologies and capabilities. The officials added that the Materials and Manufacturing Processes Community of Interest already reports to the Office of the Assistant Secretary of Defense for Research and Engineering and to the DOD Science and Technology executives on the science and technology funding associated with materials and manufacturing. Therefore, the Army believes that DOD already has oversight and awareness.
According to its chairperson, the Materials and Manufacturing Processes Community of Interest (a group that comprises eight technical teams) performs some level of activity in additive manufacturing, but it does not have a team that focuses solely on additive manufacturing. The chairperson added that this community of interest does not systematically track all aspects of additive manufacturing, such as medical applications, and that the information that is tracked and communicated to OSD is rolled up to a high level and submitted to the Office of the Assistant Secretary of Defense for Research and Engineering. An official within that office agreed that additional coordination of additive manufacturing efforts across the department would be helpful. The official stated that the office does not track all aspects of additive manufacturing. DOD does not systematically track all department-wide additive manufacturing efforts because the department has not designated a lead or focal point at a senior level, such as OSD, to oversee the development and implementation of an approach to department-wide coordination. Specifically, the department has not established a lead to develop and implement an approach for systematically (1) tracking department-wide activities and resources, including funding and an inventory of additive manufacturing equipment, and results of these activities, such as additive manufacturing performance and combat capability improvements and cost savings, along with lessons learned; and (2) disseminating the results of these activities and an inventory of additive manufacturing equipment. A senior official within the Office of Manufacturing and Industrial Base Policy was aware of the various coordination groups. The official also saw value in collecting certain types of additive manufacturing information.
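The kind of department-wide tracking described above (activities, funding, equipment, cost savings, and lessons learned) could be represented with a simple shared record structure and roll-ups. The following is a purely hypothetical sketch; every field name and the sample entry are illustrative assumptions, not an actual DOD schema.

```python
# Hypothetical sketch of a department-wide additive manufacturing registry;
# field names and the sample entry are illustrative, not a DOD schema.
from dataclasses import dataclass, field

@dataclass
class AMActivity:
    component: str                  # e.g., "Army RDECOM", "Navy", "DLA"
    description: str
    funding_usd: float = 0.0
    equipment: list[str] = field(default_factory=list)
    cost_savings_usd: float = 0.0
    lessons_learned: list[str] = field(default_factory=list)

registry = [
    AMActivity(
        component="Army (sample entry)",
        description="Prototype mask components via AM and urethane casting",
        funding_usd=10_000,
        equipment=["polymer 3D printer"],
        cost_savings_usd=30_000,
        lessons_learned=["AM plus urethane casting avoided production tooling"],
    ),
]

# Roll-ups of the kind a designated OSD lead could disseminate department-wide.
print(f"Activities tracked: {len(registry)}")
print(f"Total funding:      ${sum(a.funding_usd for a in registry):,.0f}")
print(f"Total savings:      ${sum(a.cost_savings_usd for a in registry):,.0f}")
```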
We recognize that while additive manufacturing has been in existence since the 1980s, it is still in its early stages as compared with conventional manufacturing techniques, especially with respect to producing critical parts such as those for aircraft. As the technology evolves, it is important for OSD to systematically track and disseminate the results of these additive manufacturing efforts department-wide. Without designating a lead or focal point responsible for developing an approach for systematically (1) tracking department-wide activities and resources, and results of these activities; and (2) disseminating, department-wide, the results of these activities and an inventory of additive manufacturing equipment, DOD officials may not obtain the information they need to leverage resources and ongoing experiences of the various components. Additive manufacturing has been in existence since the 1980s, and DOD has begun exploring ways to use it to make existing product supply chains more efficient by enabling on-demand production, which could reduce the need to maintain large inventories of products and spare parts, and by enabling the production of parts and products closer to their consumers, thereby helping DOD to achieve its missions. The technology is in its relative infancy, and it may be years or decades before it can achieve levels of confidence comparable to those available from conventional manufacturing processes. Across the department the various DOD components are engaged in activities and are expending resources in their respective efforts to determine how to use additive manufacturing to produce critical products. However, DOD does not systematically track and disseminate the results of additive manufacturing efforts department-wide, nor has it designated a lead to coordinate these efforts.
As a result, DOD may not have the information it needs to leverage resources and lessons learned from additive manufacturing efforts and thereby facilitate the adoption of the technology across the department. To help ensure that DOD systematically tracks and disseminates the results of additive manufacturing efforts department-wide, we recommend that the Secretary of Defense direct the following action: Designate a lead or focal point, at the OSD level, responsible for developing and facilitating the implementation of an approach for systematically tracking and disseminating information. The lead or focal point should, among other things, track department-wide activities and resources, including funding and an inventory of additive manufacturing equipment; and results of these activities, such as additive manufacturing performance and combat capability improvements and cost savings, along with lessons learned; and disseminate the results of these activities, and an inventory of additive manufacturing equipment. We provided a draft of this report to DOD for review and comment; the department provided technical comments that we considered and incorporated as appropriate. DOD also provided written comments on our recommendation, which are reprinted in appendix III. In commenting on this draft, DOD concurred with our recommendation that DOD designate an OSD lead or focal point to be responsible for developing and implementing an approach for systematically tracking department-wide activities and resources, and results of these activities; and disseminating these results, and an inventory of additive manufacturing equipment, to facilitate adoption of the technology across the department. 
In response to this recommendation, DOD stated that within 90 days the department will make a determination and designation of the appropriate lead or focal point within OSD to be responsible for developing and facilitating the implementation of an approach for systematically tracking and disseminating information on additive manufacturing within the department. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps; the directors of Defense Logistics Agency and Defense Advanced Research Projects Agency; the Assistant Secretaries of Defense for Research and Engineering, and Health Affairs; Deputy Assistant Secretaries of Defense for Manufacturing and Industrial Base Policy, and Maintenance Policy and Programs; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV. The Department of Defense (DOD) provided its briefing document to GAO on July 30, 2014. To determine the extent to which the briefing document to the Senate Armed Services Committee (henceforth referred to as “the Committee”) addresses the three directed elements, two GAO analysts concurrently assessed DOD’s May 2014 briefing document to determine whether it included the following Committee-directed elements: (1) potential benefits and constraints of additive manufacturing, (2) how the additive manufacturing process could or could not contribute to DOD missions, and (3) what technologies being developed at America Makes are being transitioned for DOD use. 
The analysts were consistent in their respective assessments of whether the briefing included the elements, and therefore it was not necessary for a third analyst to resolve any differences. We assessed the briefing document with the recognition that it was not meant to be a stand-alone document but rather accompanied an oral briefing. We met with officials from the Office of Manufacturing and Industrial Base Policy, America Makes, and the military services to determine the extent to which they were involved in creating the briefing document and to obtain additional information about additive manufacturing. We also shared with the DOD officials, including Office of Manufacturing and Industrial Base Policy officials, our preliminary assessment of DOD’s briefing document to obtain their comments. To determine the extent to which DOD has taken steps to implement additive manufacturing to improve performance, improve combat capability, and achieve cost savings, we reviewed DOD planning documents, such as the December 2014 DOD Manufacturing Technology Program report and briefing reports documenting the status of DOD’s additive manufacturing efforts, as well as examples of any actual or potential performance and combat capability improvements, and examples of actual or potential cost savings. We also interviewed officials within the military services, Defense Logistics Agency, and Walter Reed National Military Medical Center to further discuss any current and potential applications of additive manufacturing, and any improvements and cost savings associated with using the technology. We did not review efforts related to additive manufacturing conducted by contractors for DOD. 
To determine the extent to which DOD uses mechanisms to coordinate and systematically track additive manufacturing efforts across the department, we reviewed DOD coordination-related documents, such as charters and briefing slides, summarizing the purpose and results of any current DOD efforts related to advancing the department’s use of additive manufacturing—that is, efforts by the Office of the Secretary of Defense (OSD), Defense Logistics Agency, Defense Advanced Research Projects Agency, and the services. We reviewed GAO’s key considerations for implementing interagency collaborative mechanisms, such as designating leadership, which is a necessary element for a collaborative working relationship. We identified examples of coordination groups that DOD participates in to discuss ongoing additive manufacturing efforts. While we did not assess these groups to determine whether there were any coordination deficiencies, we made some observations based on GAO’s reported collaborative mechanisms and practices for enhancing and sustaining these efforts. We also reviewed the Standards for Internal Control in the Federal Government, which emphasizes the importance of top-level management tracking the various components’ achievements, to assess the extent to which DOD systematically tracks additive manufacturing efforts department-wide. Additionally, we discussed with OSD, Army, Navy, Air Force, Defense Logistics Agency, and Defense Advanced Research Projects Agency officials (1) any actions that have been taken for coordinating additive manufacturing efforts across the department, and (2) the extent to which DOD systematically tracks additive manufacturing efforts. Tables 4 and 5 present the DOD and non-DOD organizations we met with during our review. We conducted this performance audit from July 2014 to October 2015 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Office of the Secretary of Defense (OSD) Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, reporting to the Secretary of Defense, is responsible for all matters relating to departmental acquisition systems, as well as research and development, advanced technology, and developmental test and evaluation, among other things. The OSD Office of the Assistant Secretary of Defense for Research and Engineering, reporting to the Under Secretary of Defense for Acquisition, Technology and Logistics, is responsible for providing science and engineering integrity leadership throughout DOD and facilitating the sharing of best practices to promote the integrity of DOD scientific and engineering activities. According to DOD senior officials, the Materials and Manufacturing Processes community of interest is one of 17 department-wide coordination groups organized by the Office of the Assistant Secretary of Defense for Research and Engineering to provide broad oversight of the DOD components’ efforts in the Science and Technology areas for which the department has responsibilities. The senior officials added that this community of interest does not track all aspects of additive manufacturing and that the information that is tracked and communicated to the Office of the Assistant Secretary of Defense for Research and Engineering is rolled up to a high level. 
The OSD Office of the Deputy Assistant Secretary of Defense for Maintenance Policy and Programs provides the functional expertise for centralized maintenance policy and management oversight for all weapon systems and military equipment maintenance programs and related resources within DOD. The OSD Office of the Deputy Assistant Secretary of Defense for Manufacturing and Industrial Base Policy, reporting to the Under Secretary of Defense for Acquisition, Technology and Logistics, develops DOD policy and provides guidance, oversight, and technical assistance on assessing or investing in defense industrial capabilities, and has oversight responsibility for the Manufacturing Technology program, among other programs, which develops technologies and processes that ensure the affordable and timely production and sustainment of defense systems, including additive manufacturing. In addition, OSD manages the Defense-wide Manufacturing Science and Technology program, which seeks to address cross-cutting initiatives that are beyond the scope of any one military service or defense agency. The Army, the Navy, the Air Force, and the Defense Logistics Agency each have their own manufacturing technology programs, which select and execute activities, such as additive manufacturing research activities. The Army, the Navy, and the Air Force have research and development laboratories—that is, U.S. Army Research, Development and Engineering Command; Office of Naval Research; and U.S. Air Force Research Laboratory—for projects on the use of new materials, processes, and applications for additive manufacturing. Army, Navy, and Air Force depots and arsenals use additive manufacturing to produce plastic parts and prototypes for tooling and repairs, such as dust caps for radios, to reduce costs and turnaround time. The Army Rapid Equipping Force will be reporting to the U.S. Army Training and Doctrine Command in October 2015, according to Army officials. 
It uses additive manufacturing to produce prototypes for repairs, such as tooling and fixtures, to reduce costs and turnaround time. Navy components, including the Office of the Chief of Naval Operations, Navy Business Office; the Naval Air Systems Command; and Naval Sea Systems Command, plan to use additive manufacturing to enable a dominant, adaptive, and innovative Naval force that is ready, able, and sustainable. According to Navy officials, in November 2013, the Chief of Naval Operations directed the Deputy Chief of Naval Operations for Fleet Readiness and Logistics to develop, de-conflict, and manage additive manufacturing efforts across the Navy. That office has since developed Navy’s 20-year additive manufacturing vision, according to Navy officials. The Defense Advanced Research Projects Agency Defense Sciences Office identifies and pursues high-risk, high-payoff fundamental research initiatives across a broad spectrum of science and engineering disciplines, and transforms these initiatives into radically new, game-changing technologies for U.S. national security. According to a senior Defense Advanced Research Projects Agency official, the agency has initiated the Open Manufacturing program, which allows officials to capture and understand the additive concepts, so that they can rapidly predict with high confidence how the finished part will perform. The program has two facilities—one at Pennsylvania State University and the other at the U.S. Army Research Laboratory—establishing permanent reference repositories and serving as testing centers to demonstrate applications of the technology being developed and as a catalyst to accelerate adoption of the technology. The Defense Logistics Agency procures parts for the military services and is developing a framework to determine how to use additive manufacturing, according to Defense Logistics Agency officials. 
The Walter Reed National Military Medical Center 3D Medical Applications Center is a military treatment facility that provides, among other things, computer-aided design and computer-aided manufacturing for producing medical models and custom implants through additive manufacturing. The Walter Reed National Military Medical Center falls within the National Capital Region Medical Directorate and is controlled by the Defense Health Agency, which in turn reports to the Assistant Secretary of Defense for Health Affairs. In addition to the contact named above, Marilyn Wasleski, Assistant Director; Dawn Godfrey; Richard Hung; Carol Petersen; Andrew Stavisky; Amie Steele; Sabrina Streagle; Sarah Veale; Angela Watson; Cheryl Weissman; and Alexander Welsh made key contributions to this report.
Additive manufacturing—building products layer-by-layer in a process often referred to as three-dimensional (3D) printing—has the potential to improve aspects of DOD's mission and operations. DOD and other organizations, such as America Makes, are determining how to address challenges to adopt this technology throughout the department. Senate Report 113-44 directed DOD to submit a briefing or report on additive manufacturing to the Senate Armed Services Committee that describes three elements. Senate Report 113-176 included a provision that GAO review DOD's use of additive manufacturing. This report addresses the extent to which (1) DOD's briefing to the Committee addresses the directed elements; (2) DOD has taken steps to implement additive manufacturing to improve performance, improve combat capability, and achieve cost savings; and (3) DOD uses mechanisms to coordinate and systematically track additive manufacturing efforts across the department. GAO reviewed and analyzed relevant DOD documents and interviewed officials from DOD and academia. GAO determined that the Department of Defense's (DOD) May 2014 additive manufacturing briefing for the Senate Armed Services Committee addressed the three directed elements—namely, potential benefits and constraints; potential contributions to DOD mission; and transition of the technologies of the National Additive Manufacturing Innovation Institute (“America Makes,” a public-private partnership established to accelerate additive manufacturing) for DOD use. DOD has taken steps to implement additive manufacturing to improve performance and combat capability, and to achieve cost savings. GAO obtained information on multiple efforts being conducted across DOD components. DOD uses additive manufacturing for design and prototyping and for some production, such as parts for medical applications; and it is conducting research to determine how to use the technology for new applications. 
For example, according to a senior Air Force official, the Air Force is researching potential performance improvements that may be achieved by embedding devices such as antennas within helmets through additive manufacturing that could enable improved communications; and the Army used additive manufacturing to prototype aspects of a Joint Service Aircrew Mask to test a design change, and reported that this saved thousands of dollars in design development (see figure). DOD uses various mechanisms to coordinate on additive manufacturing efforts, but it does not systematically track components' efforts department-wide. DOD components share information regarding additive manufacturing via mechanisms such as working groups and conferences that, according to DOD officials, provide opportunities to discuss challenges experienced in implementing additive manufacturing—for example, qualifying materials and certifying parts. However, DOD does not systematically track additive manufacturing efforts, to include (1) all activities performed and resources expended by DOD; and (2) results of these activities, including actual and potential performance and combat capability improvements, cost savings, and lessons learned. DOD has not designated a lead or focal point at a senior level to systematically track and disseminate the results of these efforts, including activities and lessons learned, department-wide. Without designating a lead to track information on additive manufacturing efforts, a practice that would be consistent with federal internal control standards, DOD officials may not obtain the information they need to leverage ongoing efforts. GAO recommends that DOD designate an Office of the Secretary of Defense lead to be responsible for developing and implementing an approach for systematically tracking department-wide activities and resources, and results of these activities; and for disseminating these results to facilitate adoption of the technology across the department. 
DOD concurred with the recommendation.
Congress established FHA in 1934 under the National Housing Act (P.L. 73-479) to broaden homeownership, protect and sustain lending institutions, and stimulate employment in the building industry. FHA insures a variety of mortgages for initial home purchases, construction and rehabilitation, and refinancing. In fiscal year 2006, FHA insured almost 426,000 mortgages representing $55 billion in mortgage insurance. FHA’s single-family programs insure private lenders against losses from borrower defaults on mortgages that meet FHA criteria for properties with one to four housing units. FHA has played a particularly large role among minority, lower-income, and first-time homebuyers and generally is thought to promote stability in the market by ensuring the availability of mortgage credit in areas that may be underserved by the private sector or are experiencing economic downturns. In fiscal year 2006, 79 percent of FHA-insured home purchase loans went to first-time homebuyers, 31 percent of whom were minorities. FHA is a government mortgage insurer in a market that also includes private insurers. Generally, borrowers are required to purchase mortgage insurance when the loan-to-value (LTV) ratio—the ratio of the amount of the mortgage loan to the value of the home—exceeds 80 percent. Private mortgage insurance policies provide lenders coverage on a portion (generally 20 to 30 percent) of the mortgage balance. However, borrowers who have difficulty meeting down-payment and credit requirements for conventional loans may find it easier to qualify for a loan with FHA insurance, which covers 100 percent of the value of the loan. Because the credit risk is mitigated by the federal guaranty, FHA borrowers are allowed to make very low down payments and generally pay interest rates that are competitive with prime mortgages. Legislation sets certain standards for FHA-insured loans. FHA-insured borrowers are required to make a minimum cash investment of 3 percent. 
This investment may come from the borrowers’ own funds or from certain third-party sources. However, borrowers are permitted to finance their mortgage insurance premiums and some closing costs, which can create an effective LTV ratio of close to 100 percent for some FHA-insured loans. Congress also has set limits on the size of the loans that may be insured by FHA. These limits vary by county. The limit for an FHA-insured mortgage is 95 percent of the local median home price, not to exceed 87 percent or fall below 48 percent of the Freddie Mac conforming loan limit, which was $417,000 in 2006. Therefore, in 2006, FHA loan limits fell between a floor in low-cost areas of $200,160 and a ceiling in high-cost areas of $362,790. Eighty-two percent of counties nationwide had loan limits set at the low-cost floor, while 3 percent had limits set at the high-cost ceiling. The remaining 15 percent of counties had limits set between the floor and ceiling, at 95 percent of their local median home prices. FHA insures most of its single-family mortgages under its Mutual Mortgage Insurance Fund, which is supported by borrowers’ insurance premiums. FHA has the authority to establish and collect a single up-front premium in an amount not to exceed 2.25 percent of the amount of the original insured principal obligation of the mortgage, and annual premiums of up to 0.5 percent of the remaining insured principal balance, or 0.55 percent for borrowers with down payments of less than 5 percent. Currently, FHA uses a flat premium structure whereby all borrowers pay the same 1.5 percent up-front fee and a 0.5 percent annual fee. The Omnibus Budget Reconciliation Act of 1990 requires an annual independent actuarial review of the economic net worth and soundness of the Fund. The actuarial review estimates the economic value of the Fund as well as the capital ratio to see if the Fund has met the capital standards in the act. 
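The loan-limit rule described above is a simple clamp calculation: 95 percent of the local median home price, bounded below and above by fixed percentages of the conforming loan limit. The function below is an illustrative sketch of that statutory rule (not FHA's actual implementation), using the 2006 conforming limit of $417,000; it reproduces the $200,160 floor and $362,790 ceiling cited in the text.

```python
def fha_loan_limit(median_home_price, conforming_limit=417_000,
                   floor_pct=0.48, ceiling_pct=0.87, median_pct=0.95):
    """Sketch of the statutory rule: 95 percent of the local median
    home price, clamped between 48 and 87 percent of the Freddie Mac
    conforming loan limit."""
    floor = floor_pct * conforming_limit      # $200,160 in 2006
    ceiling = ceiling_pct * conforming_limit  # $362,790 in 2006
    return min(max(median_pct * median_home_price, floor), ceiling)

# Low-cost county: 95% of the median falls below the floor
print(fha_loan_limit(150_000))   # 200160.0
# Mid-range county: the limit tracks 95% of the median
print(fha_loan_limit(300_000))   # 285000.0
# High-cost county: the limit is capped at the ceiling
print(fha_loan_limit(500_000))   # 362790.0
```

The three cases mirror the county breakdown in the text: 82 percent of counties sat at the floor, 15 percent between floor and ceiling, and 3 percent at the ceiling.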
The analysis considers the historical performance of the existing loans in the Fund, projected future economic conditions, loss given claim rates, and projected mortgage originations. The Fund has met the capital ratio requirements since 1995, and the single-family mortgage insurance program has maintained a negative overall credit subsidy rate, meaning that the present value of estimated cash inflows from premiums and recoveries exceeds estimated cash outflows for claim payments (excluding administrative costs). However, in recent years, the subsidy rate has approached zero. A few single-family mortgage insurance programs are insured as obligations under either the General Insurance or Special Risk Insurance Funds. These programs are Section 203(k) rehabilitation mortgages, which enable borrowers to finance both the purchase (or refinancing) of a house and the cost of its rehabilitation through a single mortgage; Section 234(c) insurance for the purchase of a unit in a condominium building; and reverse mortgages under the Home Equity Conversion Mortgage (HECM) program, which can be used by homeowners age 62 and older to convert the equity in their home into a lump sum payment, monthly streams of income, or a line of credit to be repaid when they no longer occupy the home. Two major trends in the conventional mortgage market have significantly affected FHA. First, in recent years, members of the conventional mortgage market increasingly have been active in supporting low- and no-down-payment mortgages, increasing consumer choices for borrowers who may have previously chosen an FHA-insured loan. Subprime lenders, in particular, have offered mortgage products featuring flexible payment and interest options that allowed borrowers to qualify for mortgages despite a rise in home prices. 
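The "negative credit subsidy rate" concept above rests on a present-value comparison of cash flows. The sketch below uses entirely hypothetical cash flows and a hypothetical flat discount rate (actual credit subsidy estimates under the Federal Credit Reform Act use Treasury rates and far more detailed cohort models); it shows only the discounting arithmetic: when the present value of premium and recovery inflows exceeds the present value of claim outflows, the program returns money and the subsidy rate is negative.

```python
def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (years 1..n) at a flat rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical 5-year cash flows for a cohort of insured loans ($ millions):
premiums_and_recoveries = [120, 110, 100, 90, 80]   # estimated inflows
claim_payments          = [40, 60, 70, 60, 50]      # estimated outflows
rate = 0.05                                         # illustrative discount rate

net = present_value(premiums_and_recoveries, rate) - present_value(claim_payments, rate)
# net > 0 here, i.e., discounted inflows exceed discounted outflows,
# which corresponds to a negative credit subsidy rate.
print(f"net present value: ${net:.1f} million")
```

As the text notes, the subsidy rate has approached zero in recent years; in this sketch that would correspond to the two present values converging.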
Second, to help assess the default risk of borrowers, particularly those with high LTV ratios, the mortgage industry increasingly has used mortgage scoring and automated underwriting systems. Underwriting refers to a risk analysis that uses information collected during the origination process to decide whether to approve a loan, and automated underwriting refers to the process by which lenders enter information on potential borrowers into electronic systems that contain an evaluative formula, or algorithm, called a scorecard. The scorecard algorithm attempts to measure the borrower’s risk of default quickly and objectively by examining data such as application information and credit scores. (Credit scores assign a numeric value generally ranging from 300 to 850 to a borrower’s credit history, with higher values signifying better credit.) The scorecard compares these data with specific underwriting criteria (e.g., cash reserves and credit requirements) to predict the likelihood of default. Since 2004, FHA has used its own scorecard called Technology Open to Approved Lenders (TOTAL). FHA lenders now use TOTAL in conjunction with automated underwriting systems to determine the likelihood of default. Although TOTAL can determine the credit risk of a borrower, it does not reject a loan. FHA requires lenders to manually underwrite loans that are not accepted by TOTAL to determine if the loan should be accepted or rejected. Further, as we noted in a recent report, the share of home purchase mortgage loans insured by FHA has fallen dramatically, from 19 percent in 1996 to 6 percent in 2005, with almost all the decline occurring since 2001. 
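The scorecard process described above can be illustrated with a toy example. Every variable, weight, and threshold in this sketch is hypothetical; FHA's actual TOTAL scorecard is a statistically derived model whose details are not reproduced here. The sketch mirrors only the key behavioral point in the text: a scorecard weighs application data (such as credit score, LTV ratio, and cash reserves) against underwriting criteria, and TOTAL can accept a loan but never rejects one outright, instead referring it to manual underwriting.

```python
def toy_scorecard(credit_score, ltv_ratio, cash_reserves_months):
    """Hypothetical mortgage-scorecard sketch. Returns 'accept' or
    'refer' (to manual underwriting); like TOTAL, it never rejects."""
    risk = 0
    if credit_score < 620:        # weaker credit history (300-850 scale)
        risk += 2
    if ltv_ratio > 0.95:          # very low down payment
        risk += 1
    if cash_reserves_months < 2:  # thin cash reserves
        risk += 1
    return "accept" if risk <= 1 else "refer"

print(toy_scorecard(700, 0.97, 3))  # accept
print(toy_scorecard(600, 0.97, 1))  # refer
```

In the second case the loan is not rejected; per FHA's requirement, a lender would manually underwrite it to decide whether to accept or reject.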
The combination of (1) FHA product restrictions and a lack of process improvements relative to the conventional market and (2) product innovations and expanded loan origination and funding channels in the conventional market—coupled with interest rate and house price changes—provided conditions that favored conventional mortgages over FHA products. Conventional subprime loans, in particular, emerged as an alternative to FHA-insured mortgages but often at a higher ultimate cost to certain borrowers. At the same time, FHA’s financial performance has worsened. As we noted in a recent testimony, one reason for deteriorating loan performance has been the increase in FHA-insured loans with down-payment assistance from nonprofit organizations funded by home sellers. Down-payment assistance programs provide cash assistance to homebuyers who cannot afford to make the minimum down payment or pay the closing costs involved in obtaining a mortgage. From 2000 to 2006, the total proportion of FHA-insured home purchase loans with down-payment assistance from nonprofits (the large majority of which received funding from property sellers) increased from about 2 percent to approximately 33 percent. To help FHA adapt to recent trends in the mortgage market, in 2006 HUD submitted a legislative proposal to Congress that included changes that would adjust loan limits for the single-family mortgage insurance program, eliminate the requirement for a minimum down payment, and provide greater flexibility to FHA to set insurance premiums based on risk factors. HUD’s proposal, as it currently stands, reflects revisions made by the Expanding American Homeownership Act of 2006, which was passed by the House of Representatives in July 2006. Specifically, as shown in figure 1, the proposal would increase the loan limit for FHA-insured mortgages from 95 to 100 percent of the local median home price. 
It would also raise the loan limit floor in low-cost areas from 48 to 65 percent of the conforming loan limit, and the ceiling in high-cost areas from 87 to 100 percent of the conforming limit. The proposal would also repeal the 3 percent minimum cash investment requirement and allow FHA to set premiums commensurate with the risk of the loan. FHA would establish a premium structure allowing either a combination of up-front and annual premiums or annual premiums alone, subject to specified maximum amounts. In addition to these three major changes, the modernization proposal also contained other provisions, including:

Permanently eliminating the limit on the number of HECM (reverse) mortgages that can be insured, setting a single nationwide loan limit for HECMs, and authorizing a HECM program for home purchases.

Extending the permissible term of FHA-insured mortgages from 35 to 40 years.

Moving HECMs, Section 203(k) rehabilitation mortgages, and Section 234(c) condominium unit mortgages from the General Insurance and Special Risk Insurance Funds to the Mutual Mortgage Insurance Fund. Moving the condominium program to the Fund would simplify the origination and underwriting process for these loans because they would no longer be subject to more complex requirements for multifamily housing loans.

While FHA’s planning has reflected revisions made to its original proposal by the House of Representatives in the 109th Congress, new bills introduced in the 110th Congress could further affect FHA’s planning. FHA’s modernization efforts, which include completed administrative and proposed legislative changes, have streamlined the agency’s insurance processes and likely would affect program participation and costs. According to FHA and mortgage industry officials with whom we spoke, FHA’s recent administrative changes have resulted in efficiency improvements, making FHA products more attractive to use. 
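The arithmetic of the proposed loan-limit changes described above can be checked directly. This sketch (an illustration, not FHA's methodology) applies the statutory clamp rule under both the current percentages (95 percent of median, floor 48 percent, ceiling 87 percent of the conforming limit) and the proposed ones (100 percent of median, floor 65 percent, ceiling 100 percent), using the 2006 conforming limit of $417,000.

```python
CONFORMING_LIMIT = 417_000  # Freddie Mac conforming loan limit, 2006

def limit(median_price, floor_pct, ceiling_pct, median_pct):
    # Clamp the median-based limit between the statutory floor and ceiling.
    lo, hi = floor_pct * CONFORMING_LIMIT, ceiling_pct * CONFORMING_LIMIT
    return min(max(median_pct * median_price, lo), hi)

for median in (150_000, 300_000, 500_000):
    current = limit(median, 0.48, 0.87, 0.95)   # current law
    proposed = limit(median, 0.65, 1.00, 1.00)  # HUD proposal
    print(f"median ${median:,}: current ${current:,.0f}, proposed ${proposed:,.0f}")
```

Under the proposal, the floor rises from $200,160 to $271,050 and the ceiling from $362,790 to $417,000, so limits rise at every median price level; the largest relative increases occur in low-cost (floor) counties, consistent with the analysis later in this report.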
FHA’s proposed legislation would grant the agency new leeway to help address challenges, such as adverse selection, resulting from innovations and increased competition in the mortgage market. If passed, the legislative changes likely would have a number of program and budgetary impacts. For example, we estimate that raising the FHA loan limits could increase demand for FHA-insured loans, all other things being equal. The risk-based pricing proposal would decrease premiums for lower-risk borrowers, increase them for higher-risk borrowers, and disqualify other potential borrowers. In addition, FHA estimates that the legislative proposals would have a favorable budgetary impact. FHA has taken a number of steps to make the loans it insures easier to process and bring the agency more in line with the conventional market. For example, in January 2006, FHA introduced the Lender Insurance Program, which enables higher-performing lenders to endorse all FHA loans except HECMs without a prior review by FHA. Prior to that time, all lenders were required to mail loan case files to FHA for review by contract staff before the loan could be endorsed for insurance. If the contractor found a problem with the case file, FHA would mail the file back to the lender for correction. Under the new program, approved lenders are allowed to perform their own pre-endorsement reviews and submit loan data electronically to FHA. If the loan data pass checks for accuracy and completeness, the lender is able to endorse the loan automatically. As of December 31, 2006, 405 (31 percent) of the 1,314 FHA lenders eligible for the program had been approved to participate. Between January 1, 2006, and December 31, 2006, 46 percent of FHA-insured loans were endorsed through the program. In addition to implementing the Lender Insurance Program, FHA revised its appraisal protocols and closing cost guidelines to align them more closely with conventional standards. 
Specifically, the agency simplified the appraisal process by adopting appraisal forms used in the conventional market and eliminating the requirement that minor property deficiencies be corrected prior to the sale of the property. Under the revised procedures, FHA limits required repairs to those necessary to protect the health and safety of the occupants, protect the security of the property, or correct physical deficiencies or conditions affecting structural integrity. Examples of property conditions that must be repaired include inadequate access to the exterior of the home from bedrooms, leaking roofs, and foundation damage. The agency requires the appraiser to identify minor property deficiencies (such as missing handrails, cracked window glass, and minor plumbing leaks) on the appraisal form, but no longer stipulates that they be repaired. These changes went into effect for all appraisals performed on or after January 1, 2006. In January 2006, FHA also eliminated its list of “allowable” and “non-allowable” closing costs and other fees that may be collected from the borrower. The agency made this change because FHA lenders had advised the agency that home sellers sometimes balked at accepting a sales contract from a homebuyer wishing to use FHA-insured financing because its guidelines differed from standard practice and did not consider regional variations. Lenders may now charge and collect from borrowers those customary and reasonable costs necessary to close the mortgage. According to FHA lenders and industry groups, these changes have increased the efficiency of loan processing, making FHA products more attractive to use. Representatives of a mortgage industry group told us that feedback from the group’s members on the Lender Insurance Program had been positive. Similarly, the FHA lenders we interviewed stated that the program had resulted in efficiency improvements, such as reduced processing times or costs. 
For example, one large FHA lender estimated that participating in the program had reduced the time it took to process an FHA-insured loan by about 35 percent (or 15 to 20 days). The same FHA lender also estimated that participation in the program had reduced the operating costs (mostly printing and shipping costs) for its FHA business by about 25 percent. Additionally, the FHA lenders we interviewed and representatives of a real estate industry group noted that FHA’s revised appraisal protocols and closing cost guidelines had made it easier to originate FHA loans. Representatives of the industry group noted that the revisions had shortened the time it took to close an FHA loan, which was important in a competitive market. Finally, the lenders we interviewed estimated that the administrative changes had contributed, at least in part, to recent modest increases in the number of FHA-insured loans they had made. According to FHA officials, the Lender Insurance Program also has reduced the time it takes FHA to process insurance endorsements and led to cost savings. They estimated that it takes FHA from 2 to 3 days to endorse applications for insurance on loans that are not part of the program. For loans endorsed through the program, they noted that approval is virtually instantaneous if the loan passes quality checks. In addition to reducing insurance processing times, the program has resulted in cost savings for FHA. During the first year of the program, FHA realized a reduction in contracting costs of more than $2 million, as its contractors were required to perform fewer pre-endorsement reviews. FHA also saved more than $70,000 in mailing costs during the first 9 months of the program. FHA estimates that contract costs will continue to decline as the program is expanded to include the HECM program. Our analysis indicates that raising FHA’s loan limits likely would increase the number of loans insured by FHA by making more loans eligible for FHA insurance.
In some areas of the country, particularly in parts of California and the Northeast, median home prices have been well above FHA’s maximum loan limits, reducing the agency’s ability to serve borrowers in those markets. For example, the 2005 loan limit in high-cost areas was $312,895 for one-unit properties, while the median home price was about $399,000 in Boston, Massachusetts; about $432,000 in Newark, New Jersey; about $500,000 in Salinas, California; and about $646,000 in San Francisco, California. If the limits were increased, FHA insurance would be available to a greater number of potential borrowers. Our analysis of HMDA data indicates that the agency could have insured from 9 to 10 percent more loans in 2005 had the higher mortgage limits been in place. The greatest portion of this increase resulted from raising the loan limit floor in low-cost areas from 48 to 65 percent of the conforming loan limit. In particular, 82 percent of the new loans that would have been insured by FHA and 74 percent of the dollar amount of those loans in our analysis occurred in areas where the loan limits were set at the floor. Only 14 percent of the new loans (22 percent of the dollar amount of new loans) would have resulted from increasing the loan limit ceiling. Our analysis also found that the average size of an FHA-insured loan in 2005 would have increased from approximately $123,000 to about $132,000 had the higher loan limits been in place. The effect of the other major legislative proposals on the demand for FHA-insured loans is difficult to estimate. Although FHA has not estimated the effect on demand, FHA officials expect that risk-based pricing would enable them to serve more borrowers. By reducing premiums for relatively lower-risk borrowers, FHA expects to attract more of these borrowers. However, increased premiums for higher-risk borrowers could reduce these borrowers’ demand for FHA products.
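The loan-limit arithmetic can be sketched as percentages of the conforming loan limit. Note that the $359,650 conforming limit for 2005 and the 87 percent pre-proposal ceiling are assumptions drawn from public figures of that period, not from this report; the report itself states only the 48-to-65-percent floor change and the $312,895 high-cost limit.

```python
# Sketch of how FHA loan limits derive from the conforming loan limit.
# The $359,650 figure is the published 2005 one-unit conforming limit
# (an assumption here; the report does not state it directly).
CONFORMING_LIMIT_2005 = 359_650

def fha_limits(conforming, floor_pct, ceiling_pct):
    """Return (floor, ceiling) dollar limits as fractions of the conforming limit."""
    return conforming * floor_pct, conforming * ceiling_pct

# Pre-proposal: floor at 48 percent; high-cost ceiling at 87 percent,
# which is roughly the $312,895 ceiling cited in the report.
old_floor, old_ceiling = fha_limits(CONFORMING_LIMIT_2005, 0.48, 0.87)

# Proposed: floor raised to 65 percent, ceiling to 100 percent.
new_floor, new_ceiling = fha_limits(CONFORMING_LIMIT_2005, 0.65, 1.00)

print(f"Floor:   ${old_floor:,.0f} -> ${new_floor:,.0f}")
print(f"Ceiling: ${old_ceiling:,.0f} -> ${new_ceiling:,.0f}")
```

Under these assumed inputs, the low-cost floor rises from roughly $172,600 to about $233,800, which is consistent with the report's finding that most newly eligible loans would come from areas priced at the floor.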
Additionally, some high-risk borrowers who previously would have qualified for FHA insurance would not qualify under risk-based pricing. The effect of lowering down-payment requirements on demand for FHA-insured loans is also difficult to estimate. FHA expects a new zero-down-payment product to attract borrowers who otherwise would have used down-payment assistance from nonprofit organizations funded by home sellers. However, underwriting restrictions could limit the number of borrowers who would qualify for the product. Developments in the subprime market also may affect the demand for FHA loans. Since 2001, FHA’s share of the mortgage market has declined as the subprime market has grown. However, relatively high default and foreclosure rates for subprime loans and a contraction of this market segment could shift market share to FHA. For example, one major lender we interviewed said that FHA’s continued modernization efforts combined with a weakening subprime market likely would result in renewed demand for FHA products as simplified processes make it easier for lenders to originate FHA-insured loans. To help address the problem of adverse selection, FHA has sought authority to price insurance premiums based on borrower risk, which would affect the cost and availability of FHA insurance for some borrowers. Currently, all FHA-insured borrowers pay an up-front premium of 1.5 percent of the original insured loan amount, and annual premiums of 0.5 percent of the remaining insured principal balance. Under this flat pricing structure, lower-risk borrowers subsidize higher-risk borrowers. In recent years, innovations in the mortgage market have allowed conventional mortgage lenders and insurers to identify and approve relatively low-risk borrowers and charge fees based on default risk. 
As relatively lower-risk borrowers in FHA’s traditional market segment have selected conventional financing, FHA has been left with more high-risk borrowers who require a subsidy and fewer low-risk borrowers to provide that subsidy. Partly due to this trend, the President’s fiscal year 2008 budget stated that, in the absence of risk-based pricing, FHA would need to raise premiums to avoid the need for a positive subsidy. FHA officials told us that they would have to raise premiums for all borrowers to 1.66 percent up front and 0.55 percent annually. Raising premiums for all borrowers could exacerbate FHA’s adverse selection problem by causing even more lower-risk borrowers to opt for more competitive conventional products rather than FHA-insured loans, leaving FHA with even fewer lower-risk borrowers to subsidize higher-risk borrowers. Rather than raise premiums for all borrowers, FHA has proposed risk-based pricing as a solution to the adverse selection problem. Under risk-based pricing, some future FHA borrowers would pay more than the current premiums while others would pay about the same or less. As previously noted, discounting premiums could make FHA a more attractive option for relatively lower-risk borrowers. As of May 2007, FHA’s risk-based pricing proposal established six different risk categories, each with a different premium rate, for purchase and refinance loans. FHA used data from its most recent actuarial review to establish the six risk categories and corresponding premiums based on the relative performance of loans with various combinations of LTV ratio and credit score. Borrowers in categories with higher expected lifetime claim rates would have higher premiums than those in categories with lower claim rates. Premiums would range from 0.75 percent up front and 0.50 percent annually for the lowest-risk borrowers, to 3.00 percent up front and 0.75 percent annually for the highest-risk borrowers. 
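The mechanics of a six-category premium lookup can be illustrated as follows. Only the endpoint premiums (0.75 percent up front/0.50 percent annually for the lowest-risk category and 3.00 percent/0.75 percent for the highest) come from the report; the credit-score and LTV boundaries and the intermediate rates below are hypothetical, invented purely to show how such a matrix would work.

```python
# Hypothetical six-category risk-based premium lookup.
# (upfront %, annual %) for categories 1 (lowest risk) through 6 (highest).
# Categories 2-5 are illustrative guesses; only 1 and 6 match the report.
PREMIUMS = {
    1: (0.75, 0.50),
    2: (1.25, 0.50),  # hypothetical
    3: (1.50, 0.50),  # roughly the current flat pricing
    4: (2.00, 0.55),  # hypothetical
    5: (2.50, 0.65),  # hypothetical
    6: (3.00, 0.75),
}

def risk_category(credit_score, ltv):
    """Map a (credit score, LTV) pair to a category, 1 = lowest risk.

    Lower scores and higher LTV ratios land in costlier categories,
    mirroring the direction described in the report; the cut points
    (720/640 and 95 percent LTV) are assumptions.
    """
    score_band = 0 if credit_score >= 720 else 1 if credit_score >= 640 else 2
    high_ltv = 1 if ltv > 0.95 else 0
    return score_band * 2 + high_ltv + 1

def premiums_for(credit_score, ltv):
    return PREMIUMS[risk_category(credit_score, ltv)]

# A 740-score borrower with 10 percent down lands in the lowest-risk
# category; a 600-score borrower with no down payment lands in the highest.
print(premiums_for(740, 0.90))
print(premiums_for(600, 1.00))
```

In FHA's actual proposal, the published matrix would give borrowers only their likely category; as the report notes, the TOTAL scorecard would make the final placement.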
Although the premiums that FHA would charge borrowers in the six risk categories would be more commensurate with the risks of the loans, lower-risk borrowers would continue to subsidize higher-risk borrowers to some extent. If FHA were granted the authority to implement its risk-based pricing proposal, the agency would publish a pricing matrix that would allow borrowers to identify their likely premiums based on their credit scores and LTV ratios. As shown in figure 2, lower borrower credit scores and higher LTV ratios would result in higher insurance premiums. However, FHA would use its TOTAL mortgage scorecard to make the final determination of a borrower’s placement in a particular risk category. While TOTAL takes into account more borrower and loan characteristics than LTV ratio and credit score (such as borrower reserves and payment-to-income ratio), it was designed to predict the probability of claims or defaults that would later result in claims within 4 years of loan origination rather than lifetime claim rates. Therefore, FHA rescaled the TOTAL scores to reflect lifetime claim rates. Because of the additional risk characteristics considered by TOTAL, a borrower’s TOTAL score could indicate that a borrower belongs in a higher risk category than would be suggested by LTV ratio and credit score alone. FHA has not produced a formal estimate of how often this would occur, but plans to include this caveat in its pricing matrix. Our analysis of how the proposed pricing structure would affect home purchase borrowers similar to those insured by FHA in 2005 found that approximately 43 percent of borrowers would have paid the same or less while 37 percent would have paid more. As discussed more fully later, 20 percent would not have qualified for FHA insurance had the risk-based pricing proposal been in effect.
These percentages hold true whether comparing the proposed risk-based premiums to the current premiums of 1.5 percent up front and 0.5 percent annually or the higher premiums of 1.66 percent up front and 0.55 percent annually that, according to FHA, would be needed to maintain a negative subsidy rate in fiscal year 2008. As shown in figure 3, risk-based pricing would have had a similar impact on first-time and low-income homebuyers FHA served in 2005. Among FHA’s 2005 borrowers, 47 percent of white borrowers and 40 percent of Hispanic borrowers would have paid the same or less under the new proposed risk-based pricing structure than they did under the present pricing structure, while 28 percent of black borrowers would have paid the same or less. A little more than one-third of borrowers in each racial category would have paid more (see fig. 4). FHA officials concluded, in their analysis of an earlier version of the risk-based pricing proposal, that any disparate impacts of risk-based pricing using consumer credit scores would be based on valid business reasons. Specifically, they noted that, although some racial differences do exist in the distribution of credit scores and LTV ratios, these variables are strongly associated with claim rates and have become the primary risk factors used for pricing credit risk in the conventional market. Risk-based pricing would also affect the availability of FHA insurance for some borrowers. Approximately 20 percent of FHA’s 2005 borrowers would not have qualified for FHA mortgage insurance under the parameters of the risk-based pricing proposal we evaluated. FHA determined that the expected claim rates of these borrowers were higher than it found tolerable for either the borrower or the Fund. Those borrowers who would not have qualified had high LTV ratios and low credit scores. Their average credit score was 584, and their expected lifetime claim rates are more than 2.5 times higher than the average claim rate of all FHA loans. 
FHA officials stated that setting risk-based premiums for potential future FHA borrowers with similar characteristics would require prices higher than borrowers may be able to afford. The overall distribution of 2005 FHA borrowers (by income, first-time borrower status, or race) would not have changed substantially, all other things being equal, if the policy of not serving borrowers with these higher expected lifetime claim rates had been in place that year. If the 20 percent of borrowers with the higher expected claim rates were removed from FHA’s 2005 borrower pool, our analysis found that low-income homebuyers would have remained about 51 percent of the pool. First-time homebuyers would have constituted about 78 percent of the pool, compared with 79 percent when all borrowers are included. Similarly, the overall racial distribution of borrowers would have changed modestly (see fig. 5). The percentage of Hispanic borrowers would have remained about 14 percent, black borrowers would have decreased from 13 to 11 percent, and white borrowers would have increased from 69 to 70 percent. All other things being equal, implementing the legislative proposals likely would have had a slightly negative impact on FHA’s ability to meet certain performance measures related to the types of borrowers it serves. HUD’s strategic plan for fiscal years 2006 to 2011 calls for the share of first-time minority homebuyers among FHA home purchase mortgages to remain above 35 percent. Our analysis shows that 34 percent of fiscal year 2005 home purchase mortgages were for first-time minority homebuyers. Under risk-based pricing, a slightly lower percentage, 32 percent, would have been first-time minority homebuyers. The strategic plan also calls for the share of FHA-insured home purchase mortgages for first-time homebuyers to remain above 71 percent. Our analysis shows that 79 percent of fiscal year 2005 FHA home purchase borrowers were first-time homebuyers.
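The shifts in group shares described above follow from simple renormalization: remove the disqualified borrowers from each group, then divide each group's remainder by the new total. The 2005 racial shares below come from the report; the per-group disqualification rates are hypothetical, chosen only to reproduce the direction of the reported shifts, not FHA's actual data.

```python
# Renormalizing group shares after removing disqualified borrowers.
shares_2005 = {"white": 0.69, "black": 0.13, "hispanic": 0.14, "other": 0.04}
# Hypothetical shares of each group that would not have qualified.
disqualified = {"white": 0.18, "black": 0.32, "hispanic": 0.20, "other": 0.20}

def renormalize(shares, removal_rates):
    """Recompute group shares of the pool after per-group removals."""
    remaining = {g: s * (1 - removal_rates[g]) for g, s in shares.items()}
    total = sum(remaining.values())
    return {g: r / total for g, r in remaining.items()}

new_shares = renormalize(shares_2005, disqualified)
for group, share in new_shares.items():
    print(f"{group}: {share:.0%}")
```

With these illustrative inputs, the black share falls by roughly two percentage points while the white share rises slightly, matching the pattern the report describes.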
Under risk-based pricing, 77 percent would have been first-time home buyers. According to FHA’s estimates, the three major legislative proposals would have a beneficial impact on HUD’s budget due to higher estimated negative subsidies. According to the President’s fiscal year 2008 budget, the credit subsidy rate for the Fund would be more favorable if the legislative proposals were enacted. Absent any program changes, FHA estimates that the Fund would require an appropriation of credit subsidy budget authority of approximately $143 million. If the legislative proposals were not enacted, FHA would consider raising premiums to avoid the need for appropriations. If the major legislative proposals were passed, FHA estimates that the Fund would generate $342 million in negative subsidies. FHA’s subsidy estimates for fiscal year 2008 should be viewed with caution given that FHA has generally underestimated the subsidy costs for the Fund. To meet federal requirements, FHA annually reestimates subsidy costs for each loan cohort dating back to fiscal year 1992. The current reestimated subsidy costs for all except the fiscal year 1992 and 1993 cohorts are higher than the original estimates. For example, the current reestimated cost for the fiscal year 2006 cohort is about $800 million higher than originally estimated. As discussed more fully later in this report, FHA has taken some steps to improve its subsidy estimates. FHA has enhanced the tools and resources it uses that would be important to implementing the legislative proposals, but has not always used industry practices that could help the agency manage the risks associated with program changes. To implement risk-based pricing, FHA would rely on historical loan-level data, models that estimate loan performance, and its TOTAL mortgage scorecard. 
Although FHA has improved the forecasting ability of its models by adding variables found to influence credit risk, the agency is still addressing limitations in TOTAL that could reduce its effectiveness as a pricing tool. FHA also has identified changes in information systems needed to implement the legislative proposals and requested additional staff to help promote new FHA products but faces long-term challenges in these areas. However, the legislative proposals would introduce new risks and challenges such as the difficulty of pricing loans with very low or no down payments whose risks may not be well understood. While other mortgage institutions use pilot programs to manage the risks associated with changing or expanding their product lines, FHA has indicated that it does not plan to pilot any no-down-payment product it is authorized to offer. Mortgage institutions use detailed information on the characteristics and performance of past loans to help predict the performance of future loans and price them correctly. Like other mortgage institutions we contacted, FHA has extensive loan-level data. These data are contained in the agency’s SFDW, which FHA implemented in 1996 to assemble critical data from 12 single-family systems. SFDW is updated monthly and currently contains data on approximately 33 million FHA-insured loans dating back to fiscal year 1975. These data include information on the borrower (such as age, gender, race, income, and first-time homebuyer status) and the loan (including whether it is an adjustable- or fixed-rate mortgage, the source and amount of any down-payment assistance, interest rate, premium rate, original mortgage amount, and LTV ratio). FHA has added information on borrower credit scores to the loan-level data that it plans to use to assess risk and set insurance premiums if the legislative proposals were enacted.
Research has shown that credit scores are a strong predictor of loan performance—that is, borrowers with higher scores experience lower levels of default. FHA started collecting credit score data in the late 1990s when it began allowing its lenders to use automated underwriting systems and mortgage scorecards. Upon approving the use of Fannie Mae and Freddie Mac’s mortgage scorecards in fiscal year 1998, FHA began receiving credit score data for loans underwritten using these scoring tools. To develop its own mortgage scorecard, FHA purchased archived credit scoring data for loan origination samples dating back to 1992. Since implementing its TOTAL mortgage scorecard in May 2004, FHA has collected credit scores on almost all FHA borrowers. FHA would rely on both its loan performance models and TOTAL mortgage scorecard to set insurance premiums if authorized to implement risk-based pricing. Although FHA has improved the forecasting ability of its loan performance models by incorporating additional variables found to influence credit risk, FHA is still in the process of addressing a number of limitations in TOTAL that could reduce its effectiveness for risk-based pricing. The agency’s actuarial review contractor developed the loan performance models to estimate the economic value of the Fund for the annual actuarial review. The models estimate lifetime claim and prepayment (the payment of a loan before its maturity date) rates based on factors such as origination year, age, interest rate, mortgage product type, initial LTV ratio, and loan amount. FHA used the projected lifetime claim and prepayment rates from the most recent actuarial review as the basis for its proposed risk-based insurance premiums. FHA has improved its loan performance models by adding factors that have been found to influence credit risk. 
In September 2005, we reported that FHA’s subsidy reestimates, which use data from FHA’s loan performance models, reflect a consistent underestimation of the costs of its single-family insurance program. We recommended that FHA study and report the impact (on the forecasting ability of its loan performance models) of variables that have been found in other studies to influence credit risk, such as payment-to-income ratios, credit scores, and the presence of down-payment assistance. In response, HUD indicated that its contractor was considering the specific variables that we had recommended FHA include in its annual actuarial review of the Fund. The contractor subsequently incorporated the source of down-payment assistance in the fiscal year 2005 actuarial review and borrower credit scores in the fiscal year 2006 review. FHA also intends to use TOTAL to determine risk-based premiums, but we have identified weaknesses in the scorecard that could limit its effectiveness as a pricing tool. As previously noted, FHA plans to use TOTAL to make the final determination regarding premium rates if authorized to implement risk-based pricing. However, we reported in April 2006 that TOTAL excludes a number of important variables included in other mortgage scoring systems. For example, TOTAL does not distinguish between adjustable- and fixed-rate mortgages. However, adjustable-rate mortgages generally are considered to be higher risk than otherwise comparable fixed-rate mortgages because borrowers are subject to higher payments if interest rates rise. Unlike the mortgage scorecards of other institutions, TOTAL also does not include an indicator for property type (single-family detached homes or condominiums, for example). While currently a small component of FHA’s business, FHA expects that it would insure more condominium loans if the condominium program were moved to the Fund, as set forth in its legislative proposal. 
Additionally, TOTAL does not indicate the source of the down payment. We have reported that the source of a down payment is an important indicator of risk, and the use of down-payment assistance in the FHA program has grown substantially since 2000. Finally, our April 2006 report noted that the data used to develop TOTAL were not current and FHA had no plans to update the scorecard on a regular basis. Consistent with our recommendations concerning TOTAL, FHA developed policies and procedures that call for (1) an annual evaluation of the scorecard’s predictive ability, (2) testing of additional predictive variables to include in the scorecard, and (3) populating the scorecard with more recent loan performance data. An FHA contractor is helping the agency to implement these procedures and is scheduled to issue a final report on its work in August 2007. After receiving the contractor’s report, FHA will decide what changes to TOTAL are necessary. Because the magnitude of these changes has not yet been determined, FHA does not have a completion date for this effort. FHA officials indicated that they would initially implement risk-based pricing using the current version of TOTAL but would use the updated version when it became available. FHA has identified changes needed in its information technology to implement the legislative proposals. FHA has divided these changes into two phases. The first phase consists of simpler changes that it can make in the short term, such as revising the system used to originate FHA-insured loans to allow for down payments of less than 3 percent. FHA also would need to make other changes to the system to support the new loan limits, such as allowing the loan amount to equal 100 percent of the conforming loan limit in applicable areas. 
The second phase includes modifications to the computer programs that calculate the up-front and annual insurance premiums to reflect risk-based pricing and revisions related to the proposed changes to the HECM and condominium programs. FHA has not yet obtained some of the funding needed to make the technology changes and does not have estimates for how long it would take to complete all of the changes. In fiscal year 2006, the agency obligated $2.8 million of the $10.9 million it estimated was needed to make all anticipated changes. Specifically, FHA plans to use funds reprogrammed from HUD’s salaries and expense account and other available funds to complete the first phase of changes. FHA estimates that most of this work could be completed in a few months. The President’s fiscal year 2008 budget requests an additional $8.1 million to fund the second phase of changes needed to implement the legislative proposals. However, FHA officials told us that they did not have an implementation schedule for this phase and were waiting until the legislative proposals were approved and they had secured the funding to develop one. Although FHA officials indicated that they could implement the legislative proposals after making these minor information technology changes, they also told us that major systems changes and integration would be needed to bring FHA’s systems up to levels comparable with other mortgage institutions. Currently, over 40 systems support FHA’s single-family business activity. While a thorough evaluation of large-scale systems changes was outside the scope of our review, FHA has indicated that its systems are poorly integrated, expensive to maintain, and do not fully support the agency’s operations and business requirements. 
For example, the systems cannot easily share or provide critical information because they use different database platforms with varying capabilities; some of the older systems use an outdated programming language; and the creation of ad hoc systems that do not interface with other systems has resulted in duplicate data entry. However, FHA has limited resources to devote to the development of new systems for two main reasons. First, it has to compete with other divisions within HUD for information technology resources. Of the approximately $300 million that HUD has requested for information technology development and maintenance in fiscal year 2008, about 5 percent would be for FHA’s single-family operations. Second, FHA spends what resources it has primarily on systems maintenance. Of the $19 million that FHA has budgeted for single-family information technology in fiscal year 2007, FHA officials estimate that $15 million would be devoted to systems maintenance. In contrast with FHA, officials from other mortgage institutions with whom we spoke indicated that they devote substantial resources to developing new systems and enhancing existing systems that help them price products and manage risk. To illustrate, officials from one mortgage institution stated that they had a $15 million annual budget for capital improvements in information technology. Officials from another mortgage institution told us that 17 percent of the company’s total expenses were related to information technology and that they recently spent about $15 million to develop a new system to price a mortgage product for the foreign market. These and other mortgage industry officials stressed that investments in state-of-the-art information systems were critical to operating successfully in the highly competitive mortgage market.
According to FHA officials, the legislative proposals would not fundamentally alter how the agency administers its single-family mortgage insurance program and, therefore, would not require major increases in staff above the approximately 950 single-family housing employees it had as of March 2007. Although implementing the legislative proposals would require considerable program analysis and monitoring, much of the analysis required to develop the proposals was performed primarily by staff from FHA’s Offices of Finance and Budget and Single Family Housing with assistance from several contractors, who will continue to support the implementation. FHA officials told us that marketing any new products authorized and explaining program changes to lenders would be their next major challenge if the legislative proposals were passed. They also noted that successful implementation would require them to stay abreast of developments in the mortgage market. Therefore, the President’s fiscal year 2008 budget requests an additional 21 full-time equivalent (FTE) positions to help promote new FHA products, analyze industry trends, and align the agency’s single-family business processes with current mortgage industry practices. Although a detailed assessment of FHA’s staffing needs was outside the scope of our review, a HUD contractor’s 2004 workforce analysis suggests that FHA faces broader challenges that could affect the agency’s operations going forward. The analysis projected that FHA would have 78 fewer FTEs than needed to handle anticipated work demands by fiscal year 2008, assuming hires and transfers equal to the average numbers for 2001 through 2003. In addition to anticipated FTE shortfalls, the report also identified existing and projected deficits of FHA staff with certain important competencies such as technical credibility and knowledge of single-family programs, policies, and regulations. 
For example, the consultant projected a difference of 28 percentage points between the percentage of staff requiring technical credibility and the percentage that would meet this requirement in fiscal year 2008. FHA officials have acknowledged the agency’s staffing challenges and have developed plans to address the projected gaps. In fiscal years 2005 and 2006, FHA gained 228 staff through hiring or transfers. However, the contractor had assumed gains of 362 staff during those years, which means that the projected fiscal year 2008 shortfall will be worse than originally estimated without substantial staff accessions in fiscal years 2007 and 2008. FHA also faces hiring and salary constraints that other mortgage institutions do not. FHA’s hiring authority is limited by statute and congressional appropriations. Federal statute (Title 5 of the U.S. Code) restricts the amounts that FHA can pay staff, and each year’s appropriation determines how many staff it can hire. Further, FHA must compete with other divisions within HUD for staffing resources and may not always receive its full request. Other mortgage institutions have greater flexibility in their ability to hire and compensate staff. For example, Fannie Mae and Freddie Mac are not subject to federal pay and hiring restrictions. These restrictions create challenges for FHA as it competes for qualified staff in the competitive mortgage labor market. Although FHA has not always utilized risk-management practices that other mortgage institutions use, it plans to take some steps to help address the new risks and challenges associated with the legislative proposals. In November 2005, we reported that HUD needed to take additional actions to manage risks related to the approximately one-third of its loans with down-payment assistance from seller-funded nonprofits. Unlike other mortgage industry participants, FHA does not restrict homebuyers’ use of such assistance. 
Our 2005 analysis found that the probability that these loans would result in an insurance claim was 76 percent higher than for comparable loans without such assistance, and we recommended that FHA revise its underwriting standards to consider such assistance as a seller contribution (which cannot be used to meet the borrower contribution requirement). Despite the detrimental impact of these loans on the Fund, FHA did not act promptly to mitigate the problem by adjusting underwriting standards or using its existing authority to raise premiums. However, in May 2007, FHA published a proposed rule that would prohibit seller-funded down-payment assistance. In addition, as we reported in February 2005, other mortgage institutions limit the availability of or pilot new products to manage risks associated with changing or expanding product lines. We have previously indicated that, if Congress authorizes FHA to insure new products, it should consider a number of means, including limiting their initial availability, to mitigate the additional risks these loans may pose. We also recommended that FHA consider similar steps for any new or revised products. However, in response, FHA officials told us that they lacked the resources to effectively manage a program with limited volumes. We noted that if FHA did not limit the availability of new or changed products, the potential costs of making widely available a product with risks that may not be well understood could exceed the cost of a pilot program. With respect to its legislative proposal, FHA officials told us that they do not plan to pilot or limit the initial availability of any zero-down-payment product the agency was authorized to offer. They also indicated that they expected that a zero-down-payment product would perform similarly to loans with seller-funded down-payment assistance.
While the experience of loans with this type of assistance is informative, a zero-down-payment product could be utilized by a different population of borrowers and may not perform the same as these loans. Nevertheless, if the legislative proposals were to be enacted, FHA plans to take some steps to help address risks and challenges associated with (1) managing the risks of no-down-payment loans, (2) setting premiums to achieve a modestly negative subsidy rate, and (3) modifying oversight of lenders. First, loans with low or no down payments carry greater risk because of the direct relationship that exists between the amount of equity borrowers have in their homes and the risk of default. The higher the LTV ratio, the less cash borrowers will have invested in their homes and the more likely it is that they may default on mortgage obligations, especially during times of economic hardship or price depreciation in the housing market. No-down-payment loans became common in the conventional market when rapid appreciation in home prices helped mitigate the risk of these loans. However, if authorized to offer a zero-down-payment mortgage in the near future, FHA would be introducing this product at a time when home prices have stagnated or are declining in some parts of the country. And because FHA would continue to allow borrowers to finance some portion of closing costs and up-front insurance premiums, the effective LTV ratio for loans with very low or no down payments could be greater than 100 percent, further increasing FHA’s insurance risk. To mitigate the risks associated with loans with no down payments, FHA plans to impose stricter underwriting criteria for such loans: FHA would limit the amount of up-front premium and closing costs that could be financed; therefore, all borrowers would be making some minimum cash contribution. FHA plans to require a minimum credit score of 640 to obtain FHA insurance on loans with no down payments. 
FHA would limit its zero-down-payment product to loans for owner-occupied, one-unit properties. Second, FHA’s legislative proposal would fundamentally change the way the agency manages the Fund in that FHA would set premiums to achieve a modestly negative overall subsidy rate, representing the weighted average of the subsidy rates for the different risk-based pricing categories. The President’s budget for fiscal year 2008 estimates that the weighted average subsidy rate would be -0.6 percent (meaning that the Fund would generate negative subsidies amounting to 0.6 percent of the total dollars insured for loans originated that year). Achieving a modestly negative credit subsidy rate would depend on FHA’s ability to price new products whose risks may not be well understood, although risk-based pricing could help FHA be more precise in setting and adjusting premiums for different segments of its portfolio. FHA officials told us that they would monitor the proportion of loans in its two highest-risk categories and consider raising premiums or tightening underwriting standards if unexpectedly high demand exposed FHA to excessive financial risk. Fannie Mae, Freddie Mac, and the four private mortgage insurers we interviewed noted that they carefully monitor their portfolios to make sure that they do not have too many loans in any given risk category and take similar steps when they determine that this is the case. Third, FHA may need to modify the way that it oversees lenders if the legislative proposals were enacted. FHA has indicated that its legislative proposals would help the agency to expand service to higher-risk borrowers in a financially sound manner. However, FHA may need to revise its Credit Watch program if it is to achieve this end. 
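The weighted-average arithmetic behind the -0.6 percent figure can be sketched as follows. The category shares and per-category subsidy rates below are hypothetical, chosen only to show how a small negative overall rate can emerge from a mix of risk-based pricing categories, some of which may individually carry positive rates:

```python
# Hypothetical (share of insured dollars, subsidy rate) pairs for three
# illustrative risk categories. These are NOT FHA's actual figures.
categories = [
    (0.50, -0.015),   # lower-risk loans priced to generate negative subsidy
    (0.30, -0.002),
    (0.20,  0.010),   # highest-risk loans may carry a positive subsidy rate
]

# The overall subsidy rate is the dollar-weighted average across categories.
weighted_rate = sum(share * rate for share, rate in categories)

print(f"Weighted-average subsidy rate: {weighted_rate:+.3%}")
```

With these invented inputs the weighted average works out to about -0.6 percent; shifting more volume into the highest-risk category (as the report cautions could happen with unexpectedly high demand) pushes the overall rate toward or past zero.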
Under Credit Watch, FHA terminates the loan origination authority of any lender branch office that has a default and claim rate on mortgages insured by FHA in the prior 24 months that exceeds both the national average and 200 percent of the average rate for lenders in its geographic area. Because termination currently is based on how a lender’s loans perform relative to other lenders in its geographic area, lenders that chose to make loans to higher-risk borrowers could suffer in comparison with lenders that served only lower-risk borrowers. To encourage lenders to serve borrowers in the higher-risk categories, FHA officials told us that they would consider taking into account the mix of borrowers in the various risk categories when evaluating a lender’s performance. Because higher-risk loans can be expected to incur higher default and claim rates, they stated that FHA would not want to penalize lenders with larger shares of these loans as long as the loans were performing within expected risk parameters. FHA also has improved the accuracy and timeliness of the loan performance data it uses to evaluate lenders by requiring lenders to update the delinquency status of their loans more frequently. Mortgage industry participants and researchers have suggested additional options that Congress and FHA could consider to help FHA adapt to changes in the mortgage market, but some changes could have budget and oversight implications. FHA already has authority to undertake some of these options. Other options would require additional authorities from Congress to increase the agency’s operational flexibility. Congress also could consider alternative approaches to the provision of federal mortgage insurance such as converting FHA to a government corporation or implementing risk-sharing arrangements with private partners. 
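The Credit Watch termination test described at the start of this passage can be expressed as a simple two-part predicate; the sample rates below are hypothetical:

```python
def credit_watch_flag(branch_rate: float, national_avg: float, area_avg: float) -> bool:
    """Return True if a lender branch's 24-month default-and-claim rate
    would trigger termination under Credit Watch: the rate must exceed
    BOTH the national average AND 200% of its geographic area's average."""
    return branch_rate > national_avg and branch_rate > 2.0 * area_avg

# A branch at a 5% rate, where the national average is 3% and the
# area average is 2% (so the 200% threshold is 4%):
print(credit_watch_flag(0.05, 0.03, 0.02))  # True: 5% > 3% and 5% > 4%
```

The two-sided test illustrates the concern raised in the report: a branch that deliberately serves higher-risk borrowers will tend to have a higher rate than its area's average, even if each loan performs within expected risk parameters.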
Although FHA already has made several administrative changes to streamline the agency’s insurance processes, additional administrative changes within FHA’s existing authority could alleviate, to some extent, the need for a positive subsidy in fiscal year 2008. More specifically, FHA could exercise its existing authority to raise up-front premiums up to 2.25 percent and, for borrowers with down payments of less than 5 percent, annual premiums to 0.55 percent. To moderate the need for a positive subsidy in fiscal year 2008, FHA could use its existing authority to increase premiums in one of three ways: (1) FHA could raise premiums for all borrowers, as the President’s fiscal year 2008 budget suggests will be necessary; (2) FHA could charge the higher 0.55 percent annual premium to borrowers with lower down payments; or (3) FHA could implement a more limited form of risk-based pricing than it has proposed by adjusting premiums within the current statutory limits. HUD’s Office of General Counsel determined in March 2006 that FHA has the authority to structure premiums for programs under the Fund on the basis of risk. FHA could implement premium adjustments, either for all or some borrowers, through the regulation process. However, according to FHA officials, the current statutory limits on premiums are too low to allow FHA to implement a risk-based pricing plan that would allow the agency to set prices high enough to compensate for the expected losses from the highest-risk borrowers or a new zero-down-payment product. And while raising premiums for some higher-risk borrowers could improve the Fund’s credit subsidy rate, raising premiums for all borrowers might exacerbate FHA’s adverse selection problem. That is, FHA could lose higher credit quality borrowers, resulting in fewer borrowers to subsidize lower credit quality borrowers. This, in turn, could require FHA to raise premiums again. 
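A single-loan illustration of the statutory caps discussed above (a 2.25 percent up-front premium and, for borrowers with down payments of less than 5 percent, a 0.55 percent annual premium). The loan amount is hypothetical:

```python
# Hypothetical loan; the 2.25% and 0.55% caps come from the discussion above.
loan = 150_000
down_payment_share = 0.03   # under 5%, so the higher annual-premium cap applies

upfront_at_cap = loan * 0.0225   # up-front premium if charged at the 2.25% cap
annual_at_cap = loan * 0.0055 if down_payment_share < 0.05 else 0.0

print(upfront_at_cap, annual_at_cap)
```

Adjusting premiums between zero and these caps by risk category is the "more limited form of risk-based pricing" option; FHA's position is that the caps are too low to price the highest-risk borrowers or a zero-down-payment product adequately.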
According to mortgage industry participants and researchers, Congress also could consider granting FHA additional authorities to increase the agency’s ability to invest in technology and staff or offer new insurance products. First, Congress could grant FHA specific authority to invest a portion of the Fund’s current resources—that is, negative subsidies that accrue in the Fund’s reserves—in technology enhancement. The congressionally appointed Millennial Housing Commission (MHC) found that FHA’s dependence on the appropriations process for budgetary resources and competition for funds within HUD had led to under-investment in technology, increasing the agency’s operational risk and making it difficult for FHA to work efficiently with lenders and other industry partners. Because FHA’s single-family insurance program historically has generated estimated negative subsidies, FHA and some mortgage industry officials have suggested that the agency be given the authority to use a portion of the Fund’s current resources to upgrade and maintain its technology. One benefit of this option is that the technology enhancements could improve FHA’s operations. As previously noted, FHA has more than 40 single-family information systems that are poorly integrated, expensive to maintain, and do not fully support the agency’s business requirements. However, according to FHA, the option would require a statutory change to allow FHA to use the Fund’s current resources to pay for technology improvements. Also, the Fund is required by law to operate on an actuarially sound basis. Because the soundness of the Fund is measured by an estimate of its economic value—an estimate that is subject to inherent uncertainty and professional judgment—the Fund’s current resources should be used with caution. Spending the Fund’s current resources would lower the Fund’s reserves, which in turn would lower the economic value of the Fund. 
As a result, the Fund’s ability to withstand severe economic conditions could be diminished. Also, using the Fund’s current resources would increase the federal budget deficit unless accompanied by corresponding reductions in other government spending or an increase in receipts. Second, Congress could consider allowing FHA to manage its employees outside of federal pay scales. Some federal agencies, such as the Securities and Exchange Commission, the Office of Thrift Supervision, and the Federal Deposit Insurance Corporation, are permitted to pay salaries above normal federal pay scales in recognition of the special skills demanded by sophisticated financial market operations. The MHC and mortgage industry officials have suggested that FHA be given similar authority. This option could help FHA to recruit experienced staff to help the agency adapt to market changes. Like the authority to invest in technological enhancement, this option could be funded with the Fund’s current resources but would have similar implications for the financial health of the Fund and the federal budget deficit. Third, Congress could authorize FHA to offer and pilot new insurance products without prior congressional approval. A variety of new mortgage products have appeared in the mortgage market in recent years, but FHA’s ability to keep pace with market innovations is limited. For example, the MHC found that the statutes and regulations to which FHA is subject dramatically increase the time necessary to develop and implement new products. The MHC and mortgage industry officials have recommended that Congress expressly authorize FHA to introduce new products without requiring a new statute for each. Such authority would offer FHA greater flexibility to keep pace with a rapidly changing mortgage market. However, Congress would have less control over FHA’s product offerings and, in some cases, it might take years before a new product’s risks were well understood. 
To manage the risks of new products, mortgage institutions may impose limits on the volume of the new products they will permit and on who can sell and service those products. Limits on the availability of new or revised FHA mortgage insurance products are sometimes set through legislation and focus on the volume of loans that FHA may insure. In a prior report, we recommended that FHA consider using pilots for new products and for significant changes to its existing products. Since FHA officials questioned the circumstances in which they could use pilots or limit volumes when not required by Congress, we also recommended that FHA seek the authority to offer new products on a limited basis, such as through pilots, if the agency determines it currently lacks sufficient authority. However, FHA has not sought this authority. Furthermore, while piloting could help FHA manage the risks associated with implementing new products, FHA officials told us that they lack the resources to manage a program with limited volumes effectively. Finally, Congress could authorize FHA to insure less than 100 percent of the value of the loans it guarantees. Unlike private mortgage insurers, which offer several levels of insurance coverage up to a maximum of 40 or 42 percent (depending on the company) of the value of the loan, FHA insures 100 percent of the value of the loan. But since most FHA insurance claims are offset by some degree of loss recovery, some mortgage industry observers have suggested that covering 100 percent of the value of the loan may not be necessary. In prior work, we examined the potential effects of reducing FHA’s insurance coverage and found that while lower coverage would cause a reduction in the volume of FHA-insured loans and a corresponding decline in income from premiums, it would also result in reduced losses and ultimately have a beneficial effect on the Fund. 
However, we also noted that partial FHA coverage could lessen FHA’s ability to stabilize local housing markets when regional economies decline and may increase the cost of FHA-insured loans as lenders set higher prices to cover their risk. The MHC, HUD officials, and other mortgage industry participants have suggested alternative approaches to provide federal mortgage insurance in a changing mortgage market. First, since the mid-1990s, several groups including HUD and the MHC have proposed converting FHA into either an independent or a HUD-owned government corporation—that is, an agency of government, established by Congress to perform a public purpose, which provides a market-oriented service and produces revenue that meets or approximates its expenditures. Government corporations operate more independently than other agencies of government and can be exempted from executive branch budgetary regulations and personnel and compensation ceilings. Therefore, converting FHA to a corporation could provide the corporation’s managers with the flexibility to determine the best ways to meet policy goals set by Congress or HUD. This option could have budgetary and oversight implications that would need to be considered when setting up the new corporation. For example, Congress would have to determine the extent to which (1) the corporation’s earnings in excess of those needed for operations and reserves would be available for other government activities and (2) the corporation would be subject to federal budget requirements. Also, if the corporation were created outside of HUD, Congress would have to consider whether oversight of the corporation would require a new oversight institution or could be performed by an existing organization. 
Alternatively, rather than maintaining all the functions of a mortgage insurer within a government entity, the MHC and private mortgage insurers have suggested that the federal government could provide mortgage insurance through risk-sharing agreements with private partners. FHA already works with partners to conduct various activities related to its operations. For example, FHA has delegated underwriting authority to approved lenders, and contractors perform many day-to-day activities (such as marketing foreclosed properties) that once were performed by FHA employees. A public-private risk-sharing arrangement would recognize that government has a better ability to spread risk, while private mortgage industry participants generally are more flexible and responsive to market pressures and better able to innovate and adopt new technologies quickly. There are many different possible ways to structure a risk-sharing approach, with variables such as the amount of insurance coverage provided, the number and type of risk-sharing partners, the degree of risk accepted by each partner, and the roles and responsibilities of the partners. Whatever the structure, a risk-sharing approach could result in greater efficiency and allow FHA to reach new borrowers through new partner channels. However, risk sharing also could diminish the federal government’s ability to stabilize markets if private partners lacked incentive to serve markets where economic conditions were deteriorating. Additionally, implementing risk-sharing arrangements might require more specialized expertise than FHA currently has among its staff. For example, careful analysis in both program design and monitoring would be needed to ensure that FHA’s financial interests were adequately protected. Finally, Congress and FHA could elect to make no changes at this time and allow the private market to play the definitive role in determining the future need for federal mortgage insurance. 
The recent decline in FHA’s market share occurred at a time when interest rates were low, house price appreciation was high, and mortgage credit was widely available. However, changes in the mortgage market, such as higher interest rates and stricter underwriting standards for subprime loans, may lead to an increasing role for FHA in the future or at least a continued role for the federal government in guaranteeing mortgage credit for some borrowers. Therefore, even if Congress and FHA were to make no changes at this time, FHA’s market share might increase due to the recent change in market conditions. Or it might eventually become so small as to indicate that there is no longer a need for a federal role in providing mortgage insurance. If FHA’s market share continues to decline to such a level, FHA might be eliminated or critical functions reassigned to maintain a minimal federal role in guaranteeing mortgage credit. Making no changes to FHA at this time would acknowledge the substantial role the private market now plays in meeting the mortgage credit needs of borrowers. However, some home buyers might find it more difficult and more costly to obtain mortgages if FHA were eliminated or its functions reduced and reassigned to another federal agency. And allowing FHA to become too small could impact the federal government’s ability to play a role in stabilizing mortgage markets during an economic downturn. Also, any option that might lead to the eventual elimination of FHA’s single-family mortgage insurance program would have broader implications for FHA and its other programs, such as the multifamily mortgage insurance and regulatory programs, which this report does not address. Such implications would, therefore, require further study. Recent trends in the mortgage market, including the prevalence of low- and no-down-payment mortgages and increased competition from conventional mortgage and insurance providers, have posed challenges for FHA. 
FHA’s market share has declined substantially over the years, and what was a negative subsidy rate for the single-family insurance program has crept toward zero. To adapt to market changes, FHA has implemented new administrative procedures and proposed legislation designed to modernize its mortgage insurance processes, introduce product changes, and provide additional risk-management tools. To its credit, FHA has performed considerable analysis to support its legislative proposal and has made or planned enhancements to many of the specific tools and resources that would be important to its implementation. However, the proposals present risks and challenges and should be viewed with caution for several reasons. First, FHA has not always effectively managed risks associated with product changes, most notably the growth in the proportion of FHA-insured loans with seller-funded down-payment assistance. In that case, FHA did not use the risk-management tools already at its disposal to mitigate adverse loan performance that has had a detrimental impact on the Fund. Second, the proposal to lower down-payment requirements potentially to zero raises concerns given the greater default risk of loans with high LTVs, policies that could result in effective LTV ratios of over 100 percent, and housing market conditions that could put borrowers with such loans in a negative equity position. Sound management of very low or no-down-payment products would be necessary to help ensure that FHA and borrowers do not experience financial losses. Piloting or otherwise limiting the availability of new products would allow FHA the time to learn more about the performance of these loans and could help avoid unanticipated insurance claims. Despite the potential benefits of this practice, FHA generally has not implemented pilots, unless directed to do so by Congress. 
We have previously indicated that, if Congress authorizes FHA to insure new products, Congress and FHA should consider a number of means, including limiting their initial availability, to mitigate the additional risks these loans may pose. We continue to believe that piloting would be a prudent approach to introducing the products authorized by FHA’s legislative proposal. Finally, FHA would face the challenge of setting risk-based premiums—potentially for products whose risks may not be well understood—to achieve a specific financial outcome, a relatively small negative subsidy. Because the estimated subsidy rate is close to zero and FHA has consistently underestimated its subsidy costs, FHA runs some risk of missing its target and requiring a positive subsidy. Additionally, limitations we have identified in FHA’s TOTAL scorecard, which would be a key tool used in risk-based pricing, could reduce the agency’s ability to set prices commensurate with the risk of the loans. Accordingly, it will be important for FHA to continue making progress in addressing these limitations. Our recent report on trends in FHA’s market share underscores the challenges that FHA has faced in adapting to the changing mortgage market. For example, we noted that FHA’s share of the market for home purchase mortgages has declined precipitously since 2001 due in part to FHA product restrictions and a lack of process improvements relative to the conventional market. While FHA has taken some steps to improve its processes and enhance the tools and resources that it would use to implement the modernization proposals, additional changes may be necessary for FHA to operate successfully in the long run in a competitive and dynamic mortgage market. Other mortgage industry participants have greater flexibility to hire and compensate staff, invest in information technology, and introduce new products, enhancing their ability to adapt to market changes and manage risk. 
A number of policy options that go beyond FHA’s modernization proposals would give FHA similar flexibility but would have other implications that would require careful deliberation. We provided HUD with a draft of this report for review and comment. HUD provided comments in a letter from the Assistant Secretary for Housing-Federal Housing Commissioner (see app. II). HUD said that the draft report provided a balanced assessment but also that the report’s concerns about FHA’s risk management and emphasis on the need for piloting lower-down-payment products were unwarranted. HUD said that it welcomed the draft report’s acknowledgement of FHA’s improvements in program administration and risk management but questioned the report’s concerns about FHA’s ability to understand and manage risk. HUD indicated that its proposal to diversify FHA’s product offerings and pricing structure grew out of recognition that FHA was subject to adverse selection, as evidenced by the loss of borrowers with better credit profiles and growth in seller-funded down-payment assistance loans. In addition, HUD listed steps it had taken to curtail seller-funded down-payment assistance, including publishing a proposed rule in May 2007 that would effectively eliminate seller-funded down-payment assistance in conjunction with FHA-insured loans. Our draft report cited a number of improvements in FHA’s risk management, such as enhancements to its loan performance models. However, we continue to believe that our concerns about FHA’s ability to manage risk are warranted. As our draft report noted, FHA did not take prompt action to mitigate the adverse financial impact of loans with seller-funded down-payment assistance. Furthermore, our draft report identified additional steps, such as improvements to TOTAL scorecard, that would help address the risks and challenges associated with the legislative proposals. 
With regard to piloting, HUD said that pilot programs are appropriate where a concept is untested but that the concept of zero or lower down payments was well understood. HUD indicated that it had a firm basis for anticipating the performance of zero- and lower-down-payment loans as a result of its experience with mortgages with seller-funded down-payment assistance. HUD said it used this experience to establish risk-based insurance premiums and minimum credit scores for zero- and lower-down-payment borrowers. Additionally, HUD said that it had recently started to collect 30-day and 60-day delinquency data, giving the agency the capability to track performance trends for different segments of its loan portfolio on a monthly basis. HUD stated that, for these reasons, the risks of zero- or lower-down-payment loans were sufficiently well known or knowable to not warrant a pilot program. As our draft report noted, we previously have reported that other mortgage institutions limit the availability of, or pilot, new products to manage the risks associated with changing or expanding their product lines and have recommended that FHA consider adopting this practice. Our draft report also acknowledged that FHA’s experience with seller-funded down-payment assistance could inform assessment of how a zero-down-payment product would perform. However, we continue to believe that FHA should consider limiting the availability of a loan product with no down payment. In particular, our draft report discussed two factors that indicate the need for caution in introducing such a product. First, a zero-down-payment product could be used by a different population of borrowers than seller-funded down-payment assistance loans and may not perform similarly to these loans. Second, zero-down-payment loans became common in the conventional mortgage market when rapid appreciation in home prices helped mitigate the risks of these loans. 
If authorized to offer a zero-down-payment product in the near future, FHA would be introducing it at a time when home prices have stagnated or are declining in some parts of the country. Because of these risks and uncertainties, we continue to believe that a prudent way to introduce a zero-down-payment product would be to limit its initial availability such as through a pilot program. We are sending copies of this report to the Chairman, Senate Committee on Banking, Housing, and Urban Affairs; Chairman and Ranking Member, Subcommittee on Housing and Transportation, Senate Committee on Banking, Housing, and Urban Affairs; Chairman and Ranking Member, House Committee on Financial Services; and Chairman and Ranking Member, Subcommittee on Housing and Community Opportunity, House Committee on Financial Services. We will also send copies to the Secretary of Housing and Urban Development and to other interested parties and make copies available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or [email protected] if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs and Senator Wayne Allard requested that we evaluate FHA’s modernization efforts, which include administrative and proposed legislative changes. Specifically, we examined (1) the likely program and budgetary impacts of FHA’s modernization efforts, (2) the tools, resources, and risk-management practices important to FHA’s implementation of the legislative proposals, if passed, and (3) other options that FHA and Congress could consider to help FHA adapt to changes in the mortgage market and the pros and cons of these options. 
To determine the likely program and budgetary impacts of FHA’s modernization efforts, we reviewed FHA guidance on three administrative changes implemented in 2006: the Lender Insurance Program and revisions to the agency’s appraisal protocols and closing cost guidelines. To determine the extent to which these administrative changes have affected the processing of FHA-insured loans, we interviewed representatives of Countrywide Financial, Wells Fargo, Bank of America, and Lenders One (a mortgage cooperative representing about 90 independent mortgage bankers). We selected Countrywide Financial and Wells Fargo because they are large FHA lenders, Bank of America because it had recently decided to grow its FHA business, and Lenders One because some of its members make FHA loans. We also interviewed representatives of three mortgage and real estate industry groups—Mortgage Bankers Association, National Association of Realtors, and National Association of Home Builders. To determine how the Lender Insurance Program has affected the processing of FHA insurance, we interviewed FHA officials and obtained documentation from them on the extent of lender participation in the program and its effect on insurance processing time and costs. In evaluating the likely program impacts of FHA’s proposed legislative changes, we focused on the proposals to raise FHA loan limits, institute risk-based pricing of mortgage insurance premiums, and lower down-payment requirements. To examine the effect of raising loan limits on demand for FHA-insured loans, we analyzed 2005 HMDA data (the most current available). Specifically, we analyzed the home purchase loans recorded in 2005 to determine the number of loans in each of 380 core based statistical areas (CBSA) and used that data to calculate FHA’s market share in each CBSA. (These 380 CBSAs were those for which we had data and included one aggregate “nonmetro” category.) 
We then determined the number of additional loans that, based on their loan amounts, would have been eligible for FHA insurance in 2005 had the higher proposed loan limits been in effect. Finally, we estimated the percentage of the newly eligible loans in each CBSA that FHA would have insured using the following range of assumptions: (1) that FHA’s market share would have been approximately the same as it was among all loans in that CBSA under the actual 2005 loan limits, (2) that FHA’s market share would have been approximately the same as its share of loans with loan amounts ranging from 70 to 100 percent of the actual 2005 loan limits in that CBSA, (3) that FHA’s market share would have been approximately the same as its share of loans with loan amounts ranging from 75 to 100 percent of the actual 2005 loan limits in that CBSA, and (4) that FHA’s market share would have been approximately the same as its share of loans with loan amounts ranging from 80 to 100 percent of the actual 2005 loan limits in that CBSA. For each of these four scenarios, we calculated the total number and dollar amount of new loans across all 380 CBSAs that could have been insured by FHA had the higher loan limits been in effect. All four assumptions yielded similar results. After arriving at an estimate of an overall increase in the number of FHA-insured loans, we then determined the proportions of the increase that would have resulted from raising the loan limit floor in low-cost areas, raising the loan limit ceiling in high-cost areas, or raising the limits in areas that fell between the floor and the ceiling. Finally, we calculated the average FHA-insured loan amount in 2005, as well as the average loan amount that FHA might have insured had the loan limits been increased. 
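The four scenarios can be sketched as follows for a single hypothetical CBSA. All figures are invented, since the report's HMDA-based inputs are not reproduced here; each scenario proxies FHA's share of newly eligible loans with its observed share within some band of the existing loan limit:

```python
# Hypothetical observed 2005 FHA market shares for one CBSA, one per
# scenario. Under scenario 1 the proxy is FHA's share among ALL loans;
# under scenarios 2-4 it is FHA's share among loans at 70-100%, 75-100%,
# and 80-100% of the actual loan limit, respectively.
scenarios = [
    ("all loans", 0.06),
    ("70-100% of limit", 0.04),
    ("75-100% of limit", 0.035),
    ("80-100% of limit", 0.03),
]

newly_eligible_loans = 10_000  # loans the higher limits would newly cover

# Estimated additional FHA-insured loans under each assumption
estimates = {label: round(newly_eligible_loans * share)
             for label, share in scenarios}
print(estimates)
```

Summing these per-CBSA estimates across all 380 CBSAs, under each assumption in turn, yields the range of totals the report describes.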
We assessed the reliability of the HMDA data we used by reviewing information about the data, performing electronic data testing to detect errors in completeness and reasonableness, and interviewing a knowledgeable official regarding the quality of the data. We determined that the data were sufficiently reliable for the purposes of this report. To estimate the effects of risk-based pricing on borrowers' eligibility for FHA insurance and the premiums they would pay, we reviewed FHA's risk-based pricing proposal and interviewed FHA officials regarding their plans to implement risk-based pricing, if authorized. We then analyzed SFDW data on FHA's 2005 home purchase borrowers to determine how they would have been affected by FHA's risk-based pricing proposal. (We focused on 2005 borrowers because that was the most recent year for which we had complete data, and we restricted our analysis to purchase loans because they comprise the bulk of FHA's business.) First, we assigned borrowers to one of seven categories (FHA's six proposed risk-based pricing categories and one category for those who would not have been eligible for FHA insurance) based upon their LTV ratio and credit score. Since FHA does not currently insure loans without a down payment, we identified borrowers with down-payment assistance and determined the source and amount of assistance to approximate borrowers with LTV ratios of 100 percent. We recalculated the LTV ratio of their loans by adding the amount of their assistance to the principal balance of their loan. We then examined the demographic characteristics (race, income, and first-time home buyer status) of borrowers in each of the six pricing categories, as well as those borrowers who would no longer qualify for FHA insurance. We assessed the reliability of the SFDW data we used by reviewing information about the system and performing electronic data testing to detect errors in completeness and reasonableness.
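The LTV recalculation and category-assignment steps described above can be illustrated with a short sketch. The category names and cutoffs below are invented for illustration only; they do not reproduce FHA's actual proposed premium grid.

```python
# Hedged sketch of the borrower-classification step: recompute the LTV
# ratio treating down-payment assistance as added loan balance, then
# assign the borrower to a pricing category by LTV and credit score.
# Cutoffs and category names are hypothetical, not FHA's proposal.

def effective_ltv(principal, home_value, assistance=0.0):
    """LTV with down-payment assistance folded into the loan balance,
    per the recalculation described in the text."""
    return (principal + assistance) / home_value

def pricing_category(ltv, credit_score):
    """Illustrative category assignment; None means the combination
    would not qualify for insurance in this sketch."""
    if credit_score < 500:
        return None                      # ineligible in this sketch
    if ltv <= 0.90:
        return "low_ltv"
    if credit_score >= 680:
        return "high_score_high_ltv"
    if credit_score >= 560:
        return "mid_score_high_ltv"
    return "subprime_high_ltv" if ltv <= 1.0 else None

# A borrower with a 97-percent loan plus assistance covering the rest of
# the price is treated as having a 100-percent effective LTV.
ltv = effective_ltv(principal=194_000, home_value=200_000, assistance=6_000)
print(ltv, pricing_category(ltv, credit_score=640))
```

Classifying every 2005 borrower this way is what yields the distribution across the six pricing categories plus the no-longer-eligible group.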
We determined that the data were sufficiently reliable for the purposes of this report. We also interviewed representatives of the following consumer advocacy groups to obtain their views on FHA’s proposed legislative changes: Center for Responsible Lending, Consumer Action, Consumer Federation of America, National Association of Consumer Advocates, National Community Reinvestment Coalition, National Consumer Law Center, and National Council of La Raza. We examined the potential budgetary impacts of the legislative proposals by reviewing the President’s fiscal year 2008 budget and FHA cost estimates as shown in the 2008 Federal Credit Supplement. (The Federal Credit Supplement provides summary information about federal direct loan and loan guarantee programs, including current subsidy rates and reestimated subsidy rates.) To determine the tools, resources, and risk-management practices important to FHA’s implementation of the legislative proposals, we interviewed and reviewed documentation from FHA officials regarding the agency’s plans for implementing the legislative proposals, if passed. We focused on completed and planned enhancements to FHA’s SFDW data, loan performance models, TOTAL mortgage scorecard, information technology, human capital, and risk-management practices. To help us evaluate the need for enhancements to FHA’s tools, resources, and practices, we followed up on our past work on (1) FHA’s development and use of TOTAL, (2) FHA’s estimation of subsidy costs for its single-family insurance program, (3) practices that could be instructive for FHA in managing the risks of new mortgage products, and (4) FHA’s management of loans with down-payment assistance. 
To obtain information on the tools and resources that other mortgage institutions use to set prices and manage risk, we interviewed Fannie Mae, Freddie Mac, the Mortgage Insurance Companies of America (the industry group that represents the private mortgage insurance industry), and four private mortgage insurance companies—AIG United Guaranty, Genworth Mortgage Insurance Company, Mortgage Guaranty Insurance Corporation, and PMI Mortgage Insurance Company. To determine other options that FHA and Congress could consider and the pros and cons of these options, we reviewed relevant literature, including the report of the Millennial Housing Commission, articles discussing past FHA restructuring proposals, and our past work on various options for FHA. We also interviewed FHA officials, academic experts, FHA lenders, and private mortgage insurance companies. We conducted this work in Washington, D.C., from September 2006 to June 2007 in accordance with generally accepted government auditing standards. In addition, Steve Westley (Assistant Director), Steve Brown, Laurie Latuda, John McGrail, Barbara Roesmann, Paige Smith, and Richard Vagnoni made key contributions to this report.
In recent years, the Federal Housing Administration (FHA) has experienced a sharp decline in market share. Also, the agency has estimated that, absent program changes, its Mutual Mortgage Insurance Fund (Fund) would require appropriations in 2008. To adapt to market changes, FHA has implemented new procedures and proposed the following major legislative changes: raising FHA's loan limits, allowing risk-based pricing, and lowering down-payment requirements. GAO was asked to report on (1) the likely program and budget impacts of FHA's modernization efforts; (2) the tools, resources, and risk management practices important to FHA's implementation of the legislative proposals, if passed; and (3) other options that FHA and Congress could consider to help FHA adapt to market changes. To address these objectives, GAO analyzed FHA and Home Mortgage Disclosure Act (HMDA) data and interviewed officials from FHA and other mortgage institutions. FHA's recent changes to insurance approval and appraisal requirements have streamlined its insurance process, and FHA's major legislative proposals could affect the demand for FHA's loans, the cost and availability of insurance to borrowers, and the insurance program's budgetary costs. Based on GAO's analysis of HMDA data, the number of FHA-insured loans could have been from 9 to 10 percent greater in 2005 had the higher, proposed mortgage limits been in effect. GAO's analysis of data on 2005 FHA home purchase borrowers shows that 43 percent would have paid the same or less under the risk-based pricing proposal than they actually paid, 37 percent would have paid more, and 20 percent (those with the highest expected claim rates) would not have qualified for FHA insurance. 
While these estimates should be viewed with caution, FHA's estimates indicate that the loans it expects to insure in 2008 would result in negative subsidies (i.e., net cash inflows) of $342 million if the major legislative changes were enacted, rather than requiring an appropriation of $143 million absent any program changes. FHA has taken or planned steps to enhance tools and resources and adopt risk-management practices important to implementing the legislative proposals, but does not intend to use a common industry practice, piloting, to mitigate the risks of any zero-down-payment product it is authorized to offer. In response to prior GAO recommendations, FHA has taken steps to improve the loan performance and scoring models it would use in risk-based pricing. It also has identified minor changes to its information systems and staff increases needed to implement the proposals but faces long-term challenges in these areas. Additionally, the legislative proposals would introduce new risks. The proposal to lower down-payment requirements is of particular concern given the higher default rates on these loans and the difficulty of setting prices for new products whose risks may not be well known. GAO has previously indicated that Congress may want to consider requiring FHA to limit the initial availability of any new products and also recommended that FHA itself consider piloting. However, FHA has indicated that it does not plan to pilot any no-down-payment product it might offer. Mortgage industry participants and researchers have suggested more options that Congress and FHA could consider to help FHA adapt to changes in the mortgage market, but some changes could have budget impacts and complicate oversight efforts. Some administrative changes--such as implementing a more limited form of risk-based pricing--are within FHA's existing authority.
Congress also could grant FHA additional authority that would allow it to invest the Fund's current resources in information technology and human capital, but this would increase the federal government's budget deficit. Finally, Congress could contemplate other approaches to the provision of federal mortgage insurance, such as creating a government corporation. However, any fundamental changes to how the federal government provides mortgage insurance could require new oversight mechanisms and would require careful deliberation.
The federal government is facing several significant challenges when it comes to its acquisition workforce: the number of workers is declining, while the workload and the demand for more sophisticated technical, financial, and management skills are increasing. DOD's contracting workload, for example, has increased by about 12 percent in recent years, but the workforce available to perform that workload has been reduced by about half over the same period. Meanwhile, the federal government is implementing various ways of contracting, such as performance-based contracting methods, commercial-based pricing approaches, and the use of purchase cards. High-performing public organizations have found that strategic planning and management can address human capital shortfalls. Strategic human capital planning begins with establishing a clear set of organizational intents, including a clearly defined mission, core values, goals and objectives, and strategies, and then integrating a human capital approach to support these strategic and programmatic goals. It requires systematic assessments of current and future human capital needs and strategies—which encompass a broad array of initiatives to attract, retain, develop, and motivate a top quality workforce—to fill the gaps. To ensure lasting success, the top leaders of an organization need a sustained commitment to embracing human capital management. They need to see people as vital assets to organizational success and must invest in this valuable asset. While many organizations have developed models for workforce planning, putting aside variations in terminology, the models share the following common elements.
They (1) identify organizational objectives; (2) identify the workforce competencies needed to achieve the objectives; (3) analyze the present workforce to determine its competencies; (4) compare present workforce competencies to those needed in the future (sometimes referred to as a "gap analysis"); (5) develop plans to transition from the present workforce to the future workforce; and (6) periodically evaluate the workforce plans, review the mission and objectives to assure they remain valid, and make adjustments as required by changes in mission, objectives, and workforce competencies. This process is simple in concept, but it can be difficult to carry out. First, it requires a shift in the human resource function from a support role to a role that is integral to accomplishing an agency's mission. Second, it requires developing accurate information on the numbers and locations of employees and their competencies and skills, data on the profile of the workforce, and performance goals and measures for human capital approaches. We have previously reported that agencies may find that they lack some of the basic tools and information to develop strategic plans, such as accurate and complete information on workforce characteristics and strategic planning expertise. Four organizations—the Office of Personnel Management (OPM), the Office of Federal Procurement Policy (OFPP), the Procurement Executives Council (PEC), and the Federal Acquisition Institute (FAI)—have roles to play in dealing with workforce and acquisition workforce issues. Highlights of these different roles are presented in table 1. All six agencies that we reviewed have published or drafted human capital strategic plans for their overall workforces and are taking actions specifically targeted at strengthening their acquisition workforces. Three agencies are developing specific acquisition workforce plans. Agencies are in varying stages of these efforts.
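The gap-analysis step in these models amounts to a set comparison of present and needed competencies, as in this minimal sketch (the competency names are hypothetical examples, not drawn from any agency's plan):

```python
# Minimal sketch of a workforce gap analysis: compare the competencies
# the future mission requires against those the present workforce has.
# Competency names are invented for illustration.

needed = {"performance-based contracting", "commercial pricing",
          "program management", "negotiation"}
present = {"negotiation", "purchase-card administration",
           "commercial pricing"}

gap = needed - present      # competencies to build, hire, or train for
surplus = present - needed  # competencies the future mission no longer requires

print(sorted(gap))
print(sorted(surplus))
```

The transition plans in step (5) would then target the items in the gap, while the periodic evaluation in step (6) reruns the comparison as the mission changes.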
The agencies are facing challenges in completing workforce plans—in particular, they are finding it difficult to predict and respond to future needs given the rapid pace of change occurring within acquisition and the lack of reliable data on workforce characteristics. Agencies are also hampered by difficulty in sharing information about best practices and lessons learned in addressing acquisition workforce issues. In developing strategic plans for their overall workforces, all six of the agencies we reviewed have identified their organizational objectives. Three of these, DOE, HHS, and Treasury, have gone as far as conducting a gap analysis, which involves comparing present workforce competencies to those that will be needed in the future. Some agencies are developing these plans at an agencywide level, while others are developing them at a bureau or operating division level. Four agencies included in our review—VA, GSA, DOE, and NASA—believe that the acquisition function is central to accomplishing their missions. There are clear reasons for this. About 90 percent of NASA's funds, for example, are spent on contracts for projects such as the international space station and the space shuttle. DOE contracts out about 94 percent of its budget. VA purchases goods and services, such as medical supplies, pharmaceuticals, and information technology. And GSA's primary function is to assist federal agencies in procuring goods and services. Recognizing the importance of acquisition to their missions, VA, GSA, and DOE are all developing or have developed strategic plans specifically targeted at strengthening their acquisition workforce. NASA is developing an overall workforce plan that will include the acquisition workforce. VA and GSA have defined the objectives for their future acquisition workforces. GSA has also established the competencies that workforce will need and has begun its gap analysis.
DOE has studied its acquisition workforce, identified competencies and gaps, and is now implementing actions it believes are needed to strengthen the acquisition workforce. NASA is in the process of identifying the competencies its workforce possesses. All four of these agencies have also developed training and career development programs that are aimed at ensuring their acquisition workforces have the skills to accomplish the agencies’ missions. Treasury and HHS view acquisition as critical to mission success. However, unlike GSA, for example, acquisition is not a primary function of these agencies. Each agency spends less than 25 percent of its budget on acquisitions. Nevertheless, Treasury and HHS have undertaken initiatives such as training, career development, and intern programs to ensure that their acquisition workforces have the necessary skills and training to accomplish their missions. Tables 2 and 3 highlight progress being made by the agencies we studied. Detailed information on each agency’s efforts is provided at the end of this section. Major challenges facing the agencies we reviewed were difficulty in forecasting their missions in the future because of shifting priorities and budgets and difficulty in predicting the characteristics that the future workforce will need. Also, acquisition rules and regulations are changing, making it difficult for agencies to predict what will be required of their acquisition workforce in years to come. Officials at DOE said that given the dynamic nature of the agency’s mission focus and budget direction, forecasting the future represented a formidable challenge. Officials at HHS also noted that improving the focus on the agency’s mission and the skills sets needed to accomplish the mission was their biggest challenge. 
Officials at VA told us that they are still trying to determine how the department would be conducting its acquisitions in the future, and therefore they could not yet predict the kind of acquisition workforce VA would need. Compounding the uncertainty of the future environment is the changing role of the acquisition professional from merely a purchaser or process manager to a business manager. Uncertainty is also caused by an increased focus on performance and outcomes, which requires greater integration of functions such as acquisition, financial management, and program management. In order to make this transition, acquisition workers will need to acquire an entirely new set of skills and knowledge, according to the agency officials with whom we spoke. For example, in addition to having a firm understanding of contracting rules and processes, acquisition workers will need to be adept at consulting and communicating with line managers, and they will need to be able to analyze business problems, identify different alternatives in purchasing goods or services, and assist in developing strategies in the early stages of the acquisition. Finally, a deeper understanding of market conditions, industry trends, and the technical details of the commodities and services being procured will be required. Another challenge for agencies is the lack of data on the characteristics of the current workforce (e.g., size of workforce; deployment across the organization; knowledge, skills and abilities; attrition rates; retirement rates; etc.). NASA and VA are developing their own management information systems to capture this data. In addition, the FAI is developing a management information system, called the Acquisition Career Management Information System, to help agencies and departments collect and maintain standardized data on their acquisition workforces. The director of the FAI stated that the system is expected to be operational by January 2003. 
An additional challenge cited by some agency procurement officials is the lack of a means to share information among agencies about best practices or lessons learned in dealing with acquisition workforce issues. One potential mechanism for providing such leadership is the PEC, which was created to provide a senior-level forum for monitoring and improving the federal acquisition system. The OFPP Administrator currently serves as the chair of the council. The council established an Acquisition Workforce Committee in 1999 to focus on the changing role of the acquisition workforce and to identify methods and strategies to equip this workforce with the knowledge, skills, and abilities to successfully meet the challenges of change. According to the committee chair, the council has only recently recognized that it needs to take a leadership role in coordinating agencies’ efforts to strengthen the acquisition workforce. However, assuming this leadership role will present its own challenges. For example, the PEC has yet to reach a consensus on how best to fulfill this role, in part because of the difficulty in finding common ground among several federal agencies with different agendas and missions. In addition, agencies currently lack formal mechanisms for sharing information about best practices or lessons learned on dealing with acquisition workforce issues. The Acquisition Workforce Committee had chartered working groups to research acquisition workforce needs, establish a governmentwide Acquisition Management Intern Program, develop retention strategies and incentives, and determine the ideal skills and characteristics of the future acquisition professional. 
According to the chair of the committee, although some of these initiatives, such as the intern program, have been successful, the effort to develop a broader governmentwide approach to building and implementing a model for the future acquisition workforce has been slow because the PEC has been realigning itself and redefining its strategic initiatives to support the President's Management Agenda and to respond to issues related to homeland security. The following information provides details of the civilian agencies' efforts to address acquisition workforce issues. The check marks in the Status section indicate each agency's progress in developing strategic plans for its overall workforce and for its acquisition workforce, if applicable.

General Services Administration

Size and Role of Acquisition Workforce

As the government's primary procurement arm, GSA's role is to assist agencies in procuring supplies and services, office space, equipment, telecommunications, and information technology. The GSA acquisition workforce comprises about 2,950 personnel out of a total of about 14,000. GSA considers its primary acquisition workforce to include contract specialists (GS 1102), procurement clerks (GS 1106), purchasing specialists (GS 1105), property disposal agents (GS 1104), contracting officers, and contracting officer representatives/contracting officer technical representatives. In the near future, GSA will expand this definition to include program managers.

Condition of Acquisition Workforce

GSA sees its acquisition workforce as integral to accomplishing its mission. To keep up with the trend toward purchasing highly complex and technical goods and services, GSA will need its acquisition workforce to build knowledge of market conditions, industry trends, and the technical details of the commodities and services being acquired.
GSA also envisions broadening the knowledge base of acquisition professionals beyond the procurement field into areas such as budget, finance, and program management. A little over 26 percent of the acquisition workforce will be eligible to retire by 2007.

Status of Overall Workforce Strategic Plans
✔ Agency has published/drafted human capital strategic plan
✔ Defined vision/objectives

Status of Acquisition Workforce Strategic Plans
✔ Separate human capital strategic plan for acquisition workforce
✔ Defined vision/objectives
✔ Identified competencies needed

Efforts

GSA has established the Office of Acquisition Workforce Transformation to foster the development of the acquisition workforce. Among other things, the office is responsible for developing a succession plan, developing and implementing recruitment programs, and developing and managing education/training standards and data. GSA has identified acquisition as one of its mission-critical occupations and has established the competencies needed by the acquisition workforce. However, GSA currently does not know whether its acquisition workforce has the requisite competencies. Therefore, it has established the Applied Learning Center to measure whether the acquisition workforce has the competencies to carry out its duties successfully. The pilot project will begin in 2003 and will be completed that calendar year. The results of the pilot will provide an indication of the skills gaps in GSA's acquisition workforce. As a part of its ongoing strategy to address the skills gaps identified, GSA has also established an Education and Training Center to provide the needed training.

Challenges

GSA currently tracks its acquisition workforce data manually and maintains it in a database. GSA will migrate this data to the Acquisition Career Management Information System when it comes online in the January 2003 timeframe.
National Aeronautics and Space Administration

Size and Role of Acquisition Workforce

Out of a total of about 18,000 employees, approximately 680 comprise NASA's acquisition workforce. NASA contracts out about 90 percent of its budget; it spent about $12.7 billion in fiscal year 2001. The acquisition function is essential because NASA is a research and development (R&D) agency, and the ability to achieve its mission is dependent on the acquisition function of awarding R&D contracts. NASA's missions are to advance and communicate scientific knowledge and understanding of the Earth, the solar system, and the universe; to advance human exploration, use, and development of space; and to research, develop, verify, and transfer advanced aeronautics and space technologies. NASA includes contract specialists (GS 1102), purchasing specialists (GS 1105), contracting officers, and procurement clerks in its acquisition workforce.

Condition of Acquisition Workforce

Since 1993, the acquisition workforce has been reduced more than 30 percent, from about 1,000 in fiscal year 1993 to about 680 in fiscal year 2002. By the end of 2007, another 27 percent of the remaining acquisition workforce will be eligible for retirement. However, NASA does not perceive a crisis in its acquisition workforce because of current hiring and an emphasis on an intern program that is expected to continue to bring in new acquisition employees. Also, NASA does not anticipate a big shift in the role of its acquisition workforce because the goods and services it purchases are not likely to change.
Status of Overall Workforce Strategic Plans
✔ Agency has published/drafted human capital strategic plan
✔ Defined vision/objectives
✔ Identified competencies needed
□ Identified competencies present
□ Gap analysis
□ Transition plans
□ Evaluate/adjust

Status of Acquisition Workforce Strategic Plans
□ Separate human capital strategic plan for acquisition workforce
□ Defined vision/objectives
□ Identified competencies needed
□ Identified competencies present
□ Gap analysis
□ Transition plans
□ Evaluate/adjust

Efforts

Currently, each of the NASA Enterprises and Centers is responsible for identifying the workforce size and skills that it needs to accomplish its mission, but NASA recognizes that it has limited capability for personnel tracking and planning. To address this issue, it is developing an agencywide workforce planning system that will allow better management of the existing workforce and enable better strategic decisions about future workforce needs. The system will track the distribution of the workforce across programs, personnel critical skills, and personnel management experience, and will permit NASA to identify gaps between skills required and skills available. NASA officials responsible for developing the system said that it could be used to determine and predict gaps in the acquisition workforce. NASA hopes to have the system implemented agencywide by September 2003.
NASA's Office of Procurement has three initiatives to address entry-level, mid-level, and senior-level staff development needs: NASA's Contracting Intern Program ensures a pipeline of well-trained, college-educated candidates to offset demographic trends; NASA's Career Development and Procurement Certification Programs ensure that acquisition professionals receive training that meets or exceeds statutory requirements; and NASA's Rotational Assignments with Industry provide senior acquisition professionals with corporate experience and the tools needed to assume acquisition management and other leadership positions.

Department of Energy

Size and Role of Acquisition Workforce

DOE has about 14,100 federal employees, with a contracting workforce of 464. The contracting workforce includes contracting officers and contract specialists (GS 1102), purchasing specialists (GS 1105), and other series with significant acquisition responsibilities assigned to DOE procurement offices. DOE contracts out about 94 percent of its budget, using a widespread network of contractors. In fiscal year 2001, DOE spent approximately $18.6 billion on contracts. The department manages an extensive array of energy programs over a nationwide complex that includes headquarters organizations, operations offices, field offices, national laboratories, power marketing administrations, special purpose offices, and sites now dedicated to environmental cleanup. With over 100,000 contractor employees who manage approximately 50 major installations across the country, acquisition is critical to accomplishing the department's mission. In addition to the series listed above, the DOE acquisition workforce includes procurement clerks (GS 1106), project/program managers, property managers, financial assistance specialists, and contracting officer representatives.

Condition of Acquisition Workforce

In fiscal year 1995, DOE began a 5-year period of downsizing. During this period, it essentially stopped hiring.
As a result, the average age of the DOE workforce increased. In 1998, the DOE procurement executive conducted a demographic study of the acquisition workforce because of concerns that 4 years of downsizing had created potential short- and long-term problems regarding the ability of the workforce to meet future needs. The study found that DOE was likely to lose its acquisition leadership because of retirements and therefore needed to develop leadership skills in the remaining workforce. In addition, DOE's assessment of the acquisition environment identified education and developmental needs in project/program management, property management, financial assistance, and contractor human resource management. A survey conducted in 2001 showed that the department would continue to face the same issues as revealed by the 1998 study. In response to the 1998 study, DOE initiated its Acquisition Career Development Program to address the gaps identified. The program is designed to ensure that the department will have sufficient numbers of personnel with adequate education and training to perform the acquisition mission.

Status of Overall Workforce Strategic Plans
✔ Agency has published/drafted human capital strategic plan
✔ Defined vision/objectives
✔ Identified competencies needed
✔ Identified competencies present
✔ Gap analysis
✔ Transition plans

Status of Acquisition Workforce Strategic Plans
✔ Separate human capital strategic plan for acquisition workforce
✔ Defined vision/objectives
✔ Identified competencies needed
✔ Identified competencies present
✔ Gap analysis
✔ Transition plans

Efforts

The elements of the Acquisition Career Development Program include an intern program, a training and certification program, and a program to develop future leaders of the acquisition workforce by providing educational and experiential opportunities.
This program includes course work in acquisition-related areas, rotational assignments with industry, attendance at a leadership institute, and a developmental assignment as Acting Director at Headquarters.

Challenges

Some of the challenges cited by DOE officials included the difficulty of forecasting the mission of the agency in an environment of shifting budgets and priorities, the lack of lower-level (i.e., below office director level) management support for workforce planning efforts, and the lack of funding and resources to implement developmental programs.

Department of Veterans Affairs

Size and Role of Acquisition Workforce

VA sees its acquisition workforce as an integral part of accomplishing its mission. The acquisition workforce of 6,000 represents about 2.5 percent of the total workforce of 240,000. The acquisition workforce's primary role is to purchase pharmaceuticals, medical-surgical supplies, prosthetic devices, information technology, construction, and services for America's veterans and their families. VA spent about $5.9 billion on contracts in fiscal year 2001, which represented about 12 percent of its budget. The acquisition workforce includes contract specialists (GS 1102), purchasing specialists (GS 1105), contracting officers, contracting officer representatives, contracting officer technical representatives, and other acquisition-related positions such as program managers and procurement clerks.

Condition of Acquisition Workforce

The Secretary of Veterans Affairs established a Procurement Reform Task Force in June 2001 to review VA's acquisition system and develop specific recommendations for optimizing the system. The task force found that the acquisition workforce is in a vulnerable position because the nature of its work is changing rapidly, requiring broader competencies and more complex skill sets.
In addition, it found an increased need for employees with higher educational levels, general management proficiency, and the ability to leverage information technology. The task force also recognized that a critically high number of VA’s acquisition employees are eligible for retirement.

Status of Overall Workforce Strategic Plans
✔ Agency has published/drafted human capital strategic plan
✔ Defined vision/objectives

Status of Acquisition Workforce Strategic Plans
✔ Separate human capital strategic plan for acquisition workforce
✔ Defined vision/objectives

Efforts

The procurement reform task force proposed a workforce development strategy consisting of several initiatives intended to ensure a sufficient and talented acquisition workforce. However, the task force report noted that implementing a strategic plan for the acquisition workforce would bind these initiatives together and ensure that the workforce is managed as a single entity, rather than as a loose collection of related occupations. VA is in the early stages of developing a strategic workforce plan for its acquisition workforce and is in the process of implementing some of the task force’s recommendations. For example, it has implemented the Center for Acquisition and Materiel Management Education On-line (CAMEO), a centralized management information system to capture data on the training and education of its acquisition workforce. This data will help identify the skills and competencies the acquisition workforce currently has. VA acquisition personnel began populating the CAMEO database in January 2002. In addition to serving as a database, CAMEO provides on-line training. VA’s first on-line training course became available to its acquisition workforce in December 2001. VA develops and provides training programs and courses following the curriculum established by the FAI. VA also conducts continuing education sessions tailored to the nonmanagerial and managerial members of the acquisition workforce.
Challenges

While the task force report articulated a broad vision for the acquisition workforce, VA is trying to identify the specific skills and competencies the acquisition workforce currently has and what it will need in the future. VA does not have a centralized database with complete and accurate data that would enable it to identify the skills and competencies of its current workforce. Because VA is in the process of changing its acquisition practices and processes, it cannot yet predict precisely what kind of workforce will be needed.

Department of Treasury

Size and Role of Acquisition Workforce

Treasury’s acquisition workforce provides a support function for the department’s 15 bureaus. The Treasury acquisition workforce of 640 represents less than 1 percent of the total workforce of 134,577. The total of 134,577 does not include seasonal workers. Treasury does not plan to develop an acquisition workforce plan because it does not identify the acquisition workforce as a challenge in accomplishing its mission. Treasury’s acquisition workforce includes contract specialists (GS 1102), purchasing agents (GS 1105), and procurement clerks (GS 1106).

Condition of Acquisition Workforce

Historical data indicate that Treasury GS 1102s have a low attrition rate of 13 percent, which is balanced by a one-for-one new hire ratio of 12.9 percent. About 22 percent of the GS 1102s will be eligible to retire in 2004, with the percentage rising to 44 percent in 2009. However, an October 2001 Workforce Planning Report by the Treasury Deputy Assistant Secretary for Human Resources cites OPM data indicating that most federal employees wait 3 years past their eligibility date to actually retire. In light of these data, the department has not identified the acquisition workforce as a management challenge.
However, Treasury has recognized that the role of the acquisition workforce is evolving from simply purchasing to that of business advisor as the government procurement environment changes.

Status of Overall Workforce Strategic Plans
✔ Agency has published/drafted human capital strategic plan
✔ Defined vision/objectives
✔ Identified competencies needed
✔ Identified competencies present
✔ Gap analysis
□ Transition plans
□ Evaluate/adjust

Status of Acquisition Workforce Strategic Plans
□ Separate human capital strategic plan for acquisition workforce
□ Defined vision/objectives
□ Identified competencies needed
□ Identified competencies present
□ Gap analysis
□ Transition plans
□ Evaluate/adjust

Efforts

Treasury is implementing initiatives to ensure that the acquisition workforce has the skills and competencies needed now and in the future. For example, the agency has established the Treasury Acquisition Institute, which offers a curriculum to meet the needs of its acquisition workforce. Besides procurement, the institute offers courses in interpersonal communication and computer capabilities, as well as courses in project management, competitive sourcing, and leadership. The institute and the office of the Treasury Procurement Executive also conduct nontraditional training such as procurement conferences and other procurement training as needed. Treasury has established a Treasury Procurement Intern Program to recruit, hire, and train new contract specialists, as well as an Acquisition/Business Career Management Program and a Fulfillment Program. Treasury officials stated that the department is actively participating with the FAI to develop and establish a standard set of skills and competencies that may be used governmentwide. FAI planned to implement the set of skills and competencies by late 2002.
Challenges

Treasury officials noted that the lack of a standardized, governmentwide set of skills and competencies for the future acquisition workforce made it difficult to assess the current workforce.

Department of Health and Human Services

Size and Role of Acquisition Workforce

The acquisition workforce is considered a mission support activity that assists the 11 operating divisions in accomplishing their mission of protecting the health of all Americans and providing essential human services, particularly for those least able to help themselves. The HHS acquisition workforce of 963 makes up 1.5 percent of the total HHS workforce of 64,836. In fiscal year 2001, the agency spent about $6.2 billion on federal contracts, which represented about 1 percent of its total budget. The acquisition workforce includes contracting officers (GS 1102), purchasing agents (GS 1105), and procurement technicians.

Condition of Acquisition Workforce

About 15 percent of the acquisition workforce is currently eligible to retire. However, according to HHS officials, this percentage is not out of line with the HHS workforce as a whole. In addition, neither retirements nor overall attrition among this workforce has proved to be a problem in recent years. Consequently, HHS does not view the acquisition workforce as a management challenge. In terms of the future acquisition workforce, HHS, like other agencies, envisions its acquisition workforce evolving into business managers.
Status of Overall Workforce Strategic Plans
✔ Agency has published/drafted human capital strategic plan
✔ Defined vision/objectives
✔ Identified competencies needed
✔ Identified competencies present
✔ Gap analysis
□ Transition plans
□ Evaluate/adjust

Status of Acquisition Workforce Strategic Plans
□ Separate human capital strategic plan for acquisition workforce
□ Defined vision/objectives
□ Identified competencies needed
□ Identified competencies present
□ Gap analysis
□ Transition plans
□ Evaluate/adjust

Efforts

HHS and its operating divisions have developed human capital plans for ensuring that the overall workforce has the skills needed to manage their programs. HHS has implemented initiatives such as the HHS Emerging Leaders program and a training program for its acquisition workforce. The department has also participated in the governmentwide Acquisition Management Intern Program. These initiatives are aimed at ensuring that the acquisition workforce will have the skills and competencies to accomplish the agency’s mission and evolve into the business managers/advisors that will be needed in the future.

Challenges

HHS officials said they faced the following challenges in trying to address their future acquisition workforce needs: the lack of standardized equivalencies for acquisition training courses taken at other government agencies to help determine skill levels and competencies, a lack of data to identify and characterize the workforce, and a need to improve focus on the agency mission and develop competencies for effective acquisitions to support that mission.

DOD has been working for several years to strengthen its civilian acquisition workforce. The acquisition workforce comprises a large proportion of the overall workforce, and DOD views the acquisition workforce as critical to accomplishing its mission. DOD has analyzed its current workforce and made projections for the future.
But in doing so, it recognized that implementing a strategic approach to reshaping the workforce involves substantial challenges. The overriding challenge for DOD was the need to overcome cultural resistance to the strategic approach and build a solid foundation for planning, which DOD recognized could take years to accomplish. The civilian agencies we studied may face some of the same challenges as they press forward with their own planning efforts. The specific lessons learned from DOD’s efforts to address its challenges are highlighted in table 4. During the past decade, DOD has downsized its civilian acquisition workforce by half. It now faces what it considers to be serious imbalances in the skills and experience of its remaining workforce and the potential loss of highly specialized knowledge if many of its acquisition specialists retire. DOD created the Acquisition 2005 Task Force to study this problem and develop a strategy to replenish personnel losses. The task force’s first recommendation was to develop and implement a human capital strategic plan for the civilian acquisition workforce. In response to this recommendation, DOD components undertook a strategic planning effort in 2001 in tandem with an array of other initiatives aimed at strengthening the acquisition workforce, including personnel demonstration projects and new recruiting and new training initiatives. In its first strategic planning cycle, DOD engaged a consultant to provide training on the workforce planning process, which took about 2 days, and then set out to develop the plans. According to DOD officials, despite encountering problems during the first cycle, the effort was useful in that the components had begun to think strategically about their workforce. However, the officials recognized that the results were imperfect. For example, none of the initial plans submitted by DOD’s components contained a complete analysis of potential gaps for the civilian acquisition workforce. 
The components attributed this problem to deficiencies in the first attempt at the planning process. Specifically, because of the time constraints and the timing of the process, the components lacked sufficient planning guidance from the Office of the Secretary of Defense, such as the Defense Planning Guidance and the Quadrennial Defense Review, which had not yet been issued. In addition, inadequate modeling capability made the process less than optimal. Furthermore, the output was somewhat hampered by personnel data of inconsistent accuracy. DOD still found that the first cycle provided a valuable experience because it highlighted the key planning barriers that needed to be overcome. In addition to a lack of specific guidance, data, and modeling tools, other barriers included ad hoc policy decisions, cultural resistance to workforce planning, limited strategic workforce planning expertise, and the lack of an institutional structure to support strategic workforce planning. DOD also recognized that overcoming these barriers would not be easy because doing so would require DOD to acquire new systems and tools and to make a cultural shift from viewing human capital as a support function to a mission function. As figure 1 illustrates, DOD now estimates that it will take as long as 5 years to mature the human capital strategic planning process. Several specific lessons learned from DOD’s experience are highlighted below. An overriding lesson learned from DOD has been that making the cultural shift from viewing human capital as a support function to a mission function requires strong and sustained leadership involvement. GAO’s guidance on human capital strategic planning also emphasizes the shift in the role of the human capital function from a support function to one that is integral to achieving the agency’s mission. In addition, leadership is needed to foster an agency’s vision, align organizational components, and build commitment to the vision at all levels of the organization.
For DOD, involvement from leaders at lower levels of the organization was particularly critical, since it became apparent in the first cycle of planning that attempting to develop a workforce plan at an agencywide level for a disparate organization such as DOD was almost impossible. This is because the various business units within an agency have very different missions, workforce characteristics, and needs. At the same time, DOD recognized that managers within business units needed additional authority to make any needed changes as they developed their plans. For example, these managers might not have had additional hiring authority to address the gaps they identified. DOD officials noted that providing such authority may require policy, regulatory, or statutory changes. Another leadership challenge facing DOD was that some DOD components lacked buy-in on the importance of acquisition workforce planning. A consultant hired to assist DOD’s acquisition workforce planning efforts said that one reason managers view workforce planning skeptically is that the results of such efforts are difficult to measure, while the costs can be significant. DOD officials acknowledged, however, that although the costs may be significant, the costs of making decisions without the necessary information would be equally significant and could lead to worse problems. Our guidance reflects this view as well. Another deficiency identified by DOD in its first planning effort was the lack of guidance identifying what DOD’s goals were for human capital and how planning efforts should be carried out. Without a clearly articulated statement of intent, DOD components lacked a strong rationale for developing a view of what the future workforce should look like. Moreover, without guidance on how the planning should be done, components took differing approaches to their analyses.
In assessing the results of its first planning cycle, DOD found that it lacked essential strategic planning tools, including systems that could accumulate and report all data needed for its forecasting efforts, models for projections, and planning guidance. Our own guidance recognizes such tools as essential to successful strategic planning. For example, our guidance points out that valid and reliable data are critical not only to assess an agency’s workforce requirements, but also to heighten an agency’s ability to manage risk by allowing managers to spotlight areas for attention before crises develop and identify opportunities for enhancing agency results. Another factor complicating the components’ workforce planning efforts was the difficulty in obtaining data needed to develop plans. Officials at one DOD component, for example, told us that they had to use three different data systems in an attempt to identify the characteristics of the current workforce, and even then they were not sure that the data was accurate. One system was used to obtain data on such things as pay grade, job series, and location; another system was used to extract retirement data; and a “home-grown” attrition model was used to project how many people would leave, die, and retire based on historical trends. A consultant in the first planning effort also told us that most of the models used to make projections were rudimentary, at best, and that forecasting data important to making projections was incomplete, missing, and/or inaccurate. DOD is working to overcome the problems experienced during the first planning cycle. It held working group meetings with the components to gather lessons learned and develop recommendations to improve the quality of the data for the second planning cycle, which began in January 2002. For the second cycle, DOD issued guidance that was expected to help components identify future workforce requirements. 
DOD officials expect that each cycle will improve as the planners gain experience, because the components see the value of strategic planning for the acquisition workforce. DOD is now implementing a workforce data management strategy to improve the collection and storage of personnel data. The intent is to identify new data requirements and information needs for strategic planning. In addition, DOD is working to develop more sophisticated modeling tools. As part of the second strategic planning cycle, DOD hosted a workshop for its components to discuss tools that would support the workforce planning effort. These tools included a workforce model that provides a current view of the workforce, an aging projection model that predicts what the current acquisition workforce inventory would look like within a certain period of time, and a future requirements determination model that ties workload to resource allocation and projects how changes in workload will affect resource use in the future. DOD officials expect that these tools will improve the results of the second planning cycle and also expect the tools themselves to improve in the future.

Procurement reforms, technological changes, and downsizing have placed unprecedented demands on the acquisition workforce. Acquisition workers are now expected to have a much greater knowledge of market conditions, industry trends, and the technical details of the commodities and services they procure. For this reason, any agency that relies heavily on acquisition to accomplish its mission stands to benefit greatly from developing strategic human capital plans that define the capabilities the workforce will need in the future, as well as strategies that can help the workforce attain these capabilities. While the civilian agencies we reviewed are generally in the early stages of this process, DOD’s experience highlights the need to provide the right foundation for planning.
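To illustrate the kind of calculation an aging projection model performs, the sketch below simulates a workforce by age cohort, applying annual attrition and moving a fraction of each band into the next. The age bands, headcounts, and attrition rates are invented for demonstration; this is not DOD's actual model or data.

```python
# Hypothetical cohort-based aging projection, in the spirit of the
# modeling tools described above. All numbers below are illustrative.

def project_workforce(cohorts, attrition_rates, years, band_width=5):
    """Project headcounts by age band over `years`.

    cohorts: dict of age band -> headcount, ordered youngest to oldest
    attrition_rates: dict of age band -> annual attrition fraction
    band_width: years spanned by each band (about 1/band_width of a
                band ages into the next band each year)
    """
    bands = list(cohorts)
    counts = dict(cohorts)
    for _ in range(years):
        # People leave each band through attrition (no hiring modeled).
        counts = {b: counts[b] * (1 - attrition_rates[b]) for b in bands}
        # A fraction of each band ages up, processed oldest-first so a
        # person does not move two bands in one year.
        for i in range(len(bands) - 1, 0, -1):
            moved = counts[bands[i - 1]] / band_width
            counts[bands[i]] += moved
            counts[bands[i - 1]] -= moved
    return {b: round(c) for b, c in counts.items()}

# Example: a workforce that skews older, with senior staff leaving fastest.
start = {"under 40": 400, "40-54": 900, "55 and over": 300}
rates = {"under 40": 0.05, "40-54": 0.03, "55 and over": 0.12}
print(project_workforce(start, rates, years=5))
```

Even this crude simulation shows why DOD needed accurate personnel data: the projected shape of the workforce is sensitive to the attrition rates assumed for each cohort.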
This includes obtaining appropriate data collection and modeling tools, planning expertise, and management buy-in. More important, DOD’s experience has shown that strategic workforce planning is not an easy task and can take several years to accomplish. This makes it especially important for agencies to sustain strong leadership and support for the planning effort and to be able to learn from each other’s experiences, with assistance from procurement executives and organizations such as the OFPP. In order to leverage the experiences of federal agencies’ efforts, including those of DOD, to address future acquisition workforce needs, we recommend that the OFPP Administrator work with procurement executives to ensure that the lessons learned from these efforts are shared with all federal agencies as they continue with their initiatives to improve the acquisition workforce. DOD, NASA, HHS, and DOE provided written comments on a draft of this report. OFPP and Treasury provided comments via e-mail. VA and GSA chose not to provide comments. All the agencies generally agreed with our findings and recommendation. However, OFPP noted that the role of the PEC is likely to change in the future and therefore suggested our recommendation direct the Administrator of the OFPP to work with procurement executives, rather than with the PEC. We have made this change. DOD and NASA concurred with our findings and had no further comment. Their comments appear in appendix I and appendix II, respectively. HHS concurred with our findings, but provided technical comments, including clarifying that it views acquisition as critical to mission success, although acquisition is not a primary function of the agency. We incorporated these technical comments as appropriate. HHS’s formal comments appear in appendix III. Treasury provided technical comments, including one focused on distinguishing between permanent and seasonal workers in its workforce. We incorporated the comments as appropriate. 
DOE provided technical comments, which we incorporated as appropriate, and it expressed four concerns. First, DOE made the distinction between its acquisition workforce and its contracting workforce. We added language to reflect this distinction. Second, DOE noted that our report does not appear to recognize its ongoing efforts to evaluate and adjust its overall workforce and acquisition workforce strategic plans, nor does our report note that DOE continually evaluates the effectiveness of its programs. We asked DOE to provide more information on the evaluation process, and a DOE official stated that while evaluation does occur, there is no formal process for doing so, nor is there any documentation of such evaluation. Third, DOE asked us to provide more detail about its formal succession plan program. We believe our report already captures this information, but in a summarized manner. The information on pages 10 to 15 is meant to display the highlights of agencies’ efforts to address acquisition workforce issues. Finally, DOE believed that the lack of management support did not pose a challenge to its efforts to improve the acquisition workforce, but that a lack of resources to implement developmental programs has been a challenge. While we agree that DOE’s top management has been supportive of workforce planning, our allusion to the lack of management support for workforce planning efforts refers to a lack of support at lower levels of management. We have modified the report to explain this issue and to address the lack of resources. DOE’s comments appear in appendix IV. To determine civilian agencies’ efforts to address their future workforce needs, we interviewed the procurement executives and other acquisition officials at GSA, NASA, DOE, VA, HHS, and Department of Treasury, and we reviewed documents that they provided. These six agencies accounted for about 72 percent of the federal dollars contracted by civilian (non- DOD) agencies in fiscal year 2001. 
We did not assess the effectiveness of the agencies’ efforts or validate the data they provided. In addition, we contacted officials at OPM and OFPP to determine what guidance may have been provided to assist agencies with their acquisition workforce planning efforts. We also interviewed officials with the PEC to obtain their views on future acquisition workforce issues. To identify the lessons learned from DOD’s efforts to develop strategic plans for its acquisition workforce, we reviewed DOD’s report on the implementation of the Task Force 2005 recommendations. We interviewed officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; acquisition management officials for the military services; and other officials representing DCAA, DCMA, and DLA. In addition, we obtained relevant documents and interviewed DOD and contractor officials involved in DOD’s strategic planning efforts. We conducted our review between December 2001 and October 2002 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to other interested congressional committees; the secretaries of Defense, Army, Air Force, Navy, Energy, Health and Human Services, Treasury, and Veterans Affairs; and the administrators of the General Services Administration, the National Aeronautics and Space Administration, and the Office of Federal Procurement Policy. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4125, or Hilary Sullivan at (214) 777-5652, if you have any questions regarding this report.
Major contributors to this report were Vijay Barnabas, Cristina Chaplain, William Doherty, Enemencio Sanchez, Sylvia Schatz, and Edward Stephenson.
The federal government is dramatically changing the way it purchases goods and services--by relying more on judgment and initiative versus rigid rules to make purchasing decisions. At the same time, agencies are dealing with reductions in the civilian acquisition workforce. GAO was asked to determine what efforts federal civilian agencies are making to address their future acquisition workforce needs. GAO looked at the efforts six civilian agencies are undertaking to address their future acquisition workforce needs. Together, these agencies account for about 72 percent of civilian agency contracting dollars. All of these agencies are taking steps to address their future acquisition workforce needs. Three--the Departments of Energy and Veterans Affairs (VA) and the General Services Administration--are developing specific plans to strengthen their acquisition workforces, and three others--the Departments of Treasury and Health and Human Services and the National Aeronautics and Space Administration (NASA)--are including their acquisition workforces in their overall plans to strengthen human capital. All are implementing new or strengthening existing career development and training programs. NASA and VA are also developing new information management systems. The agencies, however, are facing considerable challenges to making their human capital strategic plans and training programs a success. Principally, most acquisition professionals will need to acquire a new set of skills focusing on business management. Because of a more sophisticated acquisition environment, they can no longer be merely purchasers or process managers. Instead, they will also need to be adept at analyzing business problems and assisting with developing strategies in the early stages of the acquisition. Beyond this immediate transformation, it is difficult for agencies to forecast what will be needed in terms of numbers of workers, skills, and expertise in the years to come.
Rules, regulations, and agency missions are always changing, and budgets are constantly shifting. Many agencies simply lack good data on their workforces, including information on workforce size and location, knowledge and skills, attrition rates, and retirement rates. This data is critical to mapping out the current condition of the workforce and deciding what needs to be done to ensure that the agency has the right mix of skills and talent for the future. In overcoming these challenges, agencies can learn from the Department of Defense (DOD), which has made progress in acquisition workforce strategic planning and has addressed some of the same issues. DOD officials learned that the strategic planning effort was going to take a long time and that effective leadership and guidance, along with technology and sound methodology, were required to accurately forecast workforce needs.
NTIA and RUS have until September 30, 2010, to obligate the Recovery Act funding for broadband projects. While the completion time will vary depending on the complexity of the project, recipients of BTOP grants and BIP awards must substantially complete projects supported by these programs no later than 2 years, and projects must be fully completed no later than 3 years, following the date of issuance of the award. As we reported in November 2009, NTIA and RUS faced a number of challenges in evaluating applications and awarding broadband stimulus funds during the first funding round. For example, although both agencies had previously administered small telecommunications grant or loan programs, they had to review more applications and award far more funds with fewer staff to carry out their Recovery Act programs. In addition, the agencies faced tight time frames for awarding funds. To address these challenges, NTIA and RUS awarded contracts to Booz Allen Hamilton and ICF International, respectively, to help the agencies implement the programs within the required time frames. The contractors have supported the development and implementation of application review processes, helped with the review of technical and financial materials, and assisted in the development of postaward monitoring and reporting requirements. To meet the September 30, 2010, deadline to award Recovery Act funds, NTIA and RUS have established project categories for directing funds to meet the act’s requirements; released two funding notices; conducted public outreach to increase participation among all eligible entities; developed processes to accept, evaluate, advance, and award applications; and advanced efforts to oversee recipients to ensure proper use of Recovery Act funds. For the first funding round, NTIA and RUS coordinated their efforts and issued one joint funding notice detailing the requirements, rules, and procedures for applying for funding. 
The first 18 broadband stimulus awards were announced on December 17, 2009. NTIA and RUS completed the first round of awards on April 26 and March 30, 2010, respectively. Table 1 shows the funding timeline for NTIA’s and RUS’s broadband stimulus programs. Table 2 summarizes the categories of projects eligible for funding during the first round for both BTOP and BIP. Based on the agencies’ experiences with the first round, and drawing on public comments, both NTIA and RUS made changes to how the second-round funding for BTOP and BIP will be structured and conducted. Unlike in the first round, NTIA and RUS issued separate funding notices, and applicants had the option of applying to either BTOP or BIP, but not to both. In the second round, NTIA will again award grants for three categories of eligible projects; however, the infrastructure program has been reoriented toward Comprehensive Community Infrastructure grants, which will support Middle Mile projects serving anchor institutions such as community colleges, libraries, hospitals, universities, and public safety institutions. RUS has prioritized Last Mile projects and added three new grant programs: Satellite, Rural Library, and Technical Assistance projects. Table 3 provides information on the second-round project categories. The first funding notice, published July 9, 2009, set forth the processes for reviewing applications that NTIA and RUS followed during the first funding round. Both agencies developed a multistep application review process designed to balance the applicants’ need for time to prepare their applications with the agencies’ need for time to review them, as well as to minimize the burden on applicants that did not ultimately qualify for program funding. Generally, both agencies initially screened applications to determine whether they were complete and eligible and then submitted the qualifying applications to a due-diligence review.
For this review, the applicants were asked to submit additional documentation to further substantiate their financial, technical, and other project information. Table 4 compares the agencies’ first-round application review processes. In addition to implementing the BTOP program, NTIA is implementing the broadband mapping provisions referenced in the Recovery Act. Up to $350 million of the $4.7 billion was available to NTIA pursuant to the Broadband Data Improvement Act and for the purpose of developing and maintaining a nationwide map of broadband service availability. NTIA explained that this program would fund projects that collect comprehensive and accurate state-level broadband mapping data, develop state-level broadband maps, aid in the development and maintenance of a national broadband map, and fund statewide initiatives directed at broadband planning. NTIA accepted applications for the State Broadband Data and Development Grant program until August 14, 2009. NTIA originally funded state data collection efforts for a 2-year period, allowing the agency to assess initial state activities before awarding funding for the remainder of this 5-year initiative. On May 28, 2010, NTIA announced that state governments and other existing awardees had until July 1, 2010, to submit amended and supplemental applications for 3 additional years of mapping and data collection activities and to support all other eligible purposes under the Broadband Data Improvement Act. In the first round of broadband stimulus funding, NTIA and RUS received almost 2,200 applications and awarded 150 grants, loans, and loan/grant combinations totaling over $2.2 billion in federal funds to a variety of entities for projects in nearly every state and U.S. territory. This funding includes over $1.2 billion for 82 BTOP projects and more than $1 billion for 68 BIP projects. 
More than 70 percent of these projects were awarded to non-governmental entities, such as for-profit corporations, nonprofit organizations, and cooperative associations. Ten BTOP and three BIP grants were awarded to applicants with multistate projects. For example, RUS awarded a grant to Peetz Cooperative Telephone Company for a Last Mile Remote project covering parts of Colorado and Nebraska, and NTIA awarded a grant to One Economy Corporation for a Sustainable Broadband Adoption project covering parts of 32 states. Figure 1 illustrates the locations of the broadband stimulus projects and the total project funding per state awarded in the first round. BTOP. During the first funding round, NTIA awarded more than $1 billion in BTOP funds for 49 broadband infrastructure projects to deploy Middle Mile and Last Mile broadband technology to unserved and underserved areas of the United States; $57 million for 20 Public Computer Center projects to provide access to broadband, computer equipment, computer training, job training, and educational resources to the general public and specific vulnerable populations; and $110 million for 13 Sustainable Broadband Adoption projects to promote broadband demand through innovation, especially among vulnerable population groups that have traditionally underused broadband technology. NTIA awarded grants to a variety of entities in the first funding round, including public entities, for-profits, nonprofits, cooperative associations, and tribal entities. Our analysis of NTIA’s data shows that public entities, such as states, municipalities, or other local governments, received the largest number of BTOP grants and the largest percentage of the funding. This funding supports BTOP projects in 45 states and territories. Table 5 shows the entity types and the amounts of funding per entity type during the first round. 
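As a cross-check, the first-round totals reported above can be composed in a short sketch. The figures below are taken directly from this report (the dollar amounts are the report's approximations, stated in millions); the dictionary layout is illustrative only.

```python
# First-round award totals as reported: 82 BTOP projects at over
# $1.2 billion and 68 BIP projects at more than $1 billion.
first_round = {
    "BTOP": {"projects": 82, "funding_millions": 1_200},
    "BIP": {"projects": 68, "funding_millions": 1_000},
}

total_projects = sum(p["projects"] for p in first_round.values())
total_funding = sum(p["funding_millions"] for p in first_round.values())

print(total_projects)  # 150 grants, loans, and loan/grant combinations
print(total_funding)   # 2200, i.e., over $2.2 billion
```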
Of the 82 grants awarded, over half were for infrastructure projects, and NTIA awarded over 40 percent of these grants to for-profit entities in the first round. NTIA awarded Public Computer Center and Sustainable Broadband Adoption projects to public entities and nonprofit organizations. Table 6 shows the types of entities awarded funds for each BTOP funding category. BIP. During the first funding round, RUS announced 49 broadband infrastructure awards totaling nearly $740 million in program funding for Last Mile non-remote projects, 13 awards totaling $161 million for Last Mile remote projects, and 6 awards totaling $167 million for Middle Mile broadband infrastructure projects. The majority of funding was awarded in the form of loan/grant combinations. Of the nearly $1.1 billion in first-round funding, RUS awarded 53 loan/grant combinations totaling over $957 million in program funds, 12 grants totaling about $69 million, and 3 loans totaling over $41 million. RUS awarded grants, loans, and loan/grant combinations to a variety of entities. Eighty-five percent of BIP recipients are for-profit companies or cooperative associations. Four tribal entities also received BIP funding. In addition, 43 of the 68 BIP recipients are Title II borrowers and have previously received rural electrification and telephone loans from RUS. These represent the incumbent local telecommunications providers in the funding area. Table 7 shows the entity types and amount of funding received during the first round. RUS made nearly three-quarters of its awards for Last Mile non-remote projects, and the majority of these awards went to for-profit and cooperative associations. Table 8 shows the types of entities that received awards and the number of projects awarded in each BIP funding category. As of June 29, 2010, RUS had provided $899.6 million in program funds for 61 of these 68 projects, representing approximately 85 percent of the awards announced in the first round. 
This amount represents about $485 million charged against RUS’s Recovery Act budget authority. Of the remaining projects, 4 are still in the contract award process and 3 awards were declined by the recipients. To substantiate information in the applications, NTIA, RUS, and their contractors reviewed financial, technical, environmental, and other documents and determined the feasibility and reasonableness of each project. The agencies reviewed application materials for evidence that the applicants satisfied the criteria established in the first funding notice. The first funding notice identified several types of information that would be subject to due-diligence review, including details related to the following items: Proposed budget, capital requirements and the source of these funds, and operational sustainability. Technology strategy and construction schedule, including a map of the proposed service area and a diagram showing how technology will be deployed throughout the project area (for infrastructure projects) and a timeline demonstrating project completion. Completed environmental questionnaire and historic preservation documentation. Evidence of current subscriber and service levels in the project area to support an “unserved” or “underserved” designation. Recipient’s eligibility to receive a federal award. Any other underlying documentation referenced in the application, including outstanding and contingent obligations (debt), working capital requirements and sources of these funds, the proposed technology, and the construction build-out schedule. To implement the due-diligence review, the agencies with their contractors reviewed the application materials for adherence to the first-round funding notice’s guidelines. The contractors formed teams with specific financial or technical expertise to perform the due-diligence evaluation of applications. Generally, the agencies followed similar due-diligence review processes, but there were some differences. 
For example, NTIA teams analyzed and discussed the application materials and assigned scores to applications based on the criteria established in the first-round funding notice: (1) project purpose, (2) project benefits, (3) project viability, and (4) project budget and sustainability. Also, NTIA teams contacted applicants when necessary to obtain additional materials or clarify information in the application. Both NTIA and RUS officials reviewed environmental questionnaires addressing National Environmental Policy Act (NEPA) concerns and other documents addressing National Historic Preservation Act (NHPA) concerns. Agency officials requested that applicants provide full environmental and historical impact reports for their projects unless the projects received an exclusion. At the time we reviewed our sample of application files, these reports were pending for NTIA applications; all RUS applications we reviewed received categorical exclusions. During the due-diligence review, agency officials said that the contractor teams had frequent contact with NTIA and RUS to discuss issues that arose during the review. The review teams produced detailed briefing reports describing the information contained in each file and used professional judgment to make recommendations as to each project’s viability and sustainability, and the applicant’s apparent capacity to implement and maintain the project. Agency officials used these reports and other information in making award decisions. The review teams also recommended follow-up actions the agencies might consider to gather more information on unresolved issues. Both agencies’ officials reported that they were satisfied with the quality of their contractors’ work. 
Based on our analysis of the files of 32 awarded applications, we found that the agencies consistently reviewed the applications and substantiated the information as specified in the first-round funding notice, a finding consistent with the Department of Commerce Inspector General’s April 2010 report. In each of the files we reviewed, we observed written documentation that the agencies and their contractors had reviewed and verified pertinent application materials, or made notes to request additional documentation where necessary. In general, we saw evidence that the agencies and the contractors verified the following information: basic fit with the programs (project descriptions); financial reasonableness (capital and operating budgets, financial statements); technological viability (maps of the proposed coverage area, a description of the technology to be used and how it would be employed); environmental and historic preservation/remediation; project planning (construction schedules, project milestones); organizational capacity (resumes or biographies of the principals involved in the project, matching funds, support from both the affected communities and other governmental entities); and congressional districts affected. The two agencies developed different processes to investigate the merits of public comments on whether proposed projects met the definition of “unserved” or “underserved” published in the first funding notice. This investigation is known as an “overbuild analysis” and is needed because of the continued lack of national broadband data. In general, the public comments were submitted by companies that claimed they were already providing service in the proposed service areas and that the applicant’s project would thus lead to overbuilding. 
NTIA’s contractor researched the commenting companies’ claims of provided service via industry databases and the companies’ Web sites and advertisements, and then produced an overbuild analysis for review by agency officials that described the research results and the contractor’s level of confidence in the accuracy of the analysis. For RUS, field staff personally contacted the entities that submitted the comments to verify their claims that they provided service in the affected areas. According to RUS, field staff reconciled any difference between the application and commenter, and where necessary, conducted an actual field visit to the proposed service territory. In all cases in our sample, we observed that the agencies and their contractors found that the projects met the definitions of “unserved” and “underserved” set forth in the first funding notice. In at least one case, public comments were retracted following a request for additional information; in other cases, the additional information provided did not support claims of overbuilding. Finally, we interviewed representatives of five industry associations and two companies that received funding during the first round to learn their perspectives on the thoroughness of the due-diligence reviews. Generally, the industry association representatives confirmed that their constituents who had applied for and received broadband funding had undergone due-diligence reviews, but they were not familiar with the extent to which NTIA and RUS had verified applicant information. According to representatives of two companies that received funding during the first round, the agencies’ due-diligence process was thorough and rigorous. During the second funding round, NTIA and RUS have more funds to award and less time to award these funds than they had for the first round, and although the agencies received fewer applications for the second round, they are conducting more due-diligence reviews than they did for the first round. 
NTIA and RUS have until September 30, 2010, to obligate approximately $4.8 billion in remaining broadband stimulus funds, or more than twice the $2.2 billion they awarded during the first funding round. More specifically, in the second funding round, NTIA must award $2.6 billion in BTOP grants and RUS must award $2.1 billion in BIP loans and loan/grant combinations. Moreover, NTIA has 2 fewer months in the second funding round to perform due-diligence reviews and obligate funds for selected BTOP projects than in the first funding round, and RUS has 3 months less for BIP. Whereas NTIA took 8 months for these tasks during the first funding round, from the August 20, 2009, application deadline through April 26, 2010, it has 6 months for the second round, from the March 26, 2010, application deadline to the program’s September 30, 2010, obligation deadline. Similarly, RUS took at least 9 months for the first funding round and has 6 months for the second round. (As of July 1, 2010, RUS had not obligated funds for four first-round awards.) For the second funding round, NTIA and RUS received 1,662 applications, compared with 2,174 for the first round. For the first round, NTIA reviewed 940 applications for BTOP, RUS reviewed 401 applications for BIP, and the agencies concurrently reviewed 833 joint applications for both programs. For the second funding round, NTIA received 886 applications for BTOP and RUS received 776 for BIP. No joint applications were solicited for the second round, as the agencies published separate funding notices. As of July 2, 2010, NTIA and RUS had awarded a total of 66 second-round broadband stimulus projects totaling $795 million. While NTIA and RUS have fewer applications to review for the second round, they expect their due-diligence workload to increase. According to agency officials, the quality of the second-round applications is substantially better and more applications will be eligible for due-diligence reviews. 
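The application counts above reconcile as follows; the sketch below simply restates the report's figures (joint applications in round one were reviewed concurrently by both agencies, so they are counted once here).

```python
# Application counts by round, as reported. Round two had no joint
# applications because the agencies published separate funding notices.
first_round = {"BTOP only": 940, "BIP only": 401, "joint": 833}
second_round = {"BTOP": 886, "BIP": 776}

round_one_total = sum(first_round.values())
round_two_total = sum(second_round.values())

print(round_one_total)  # 2174 applications in round one
print(round_two_total)  # 1662 applications in round two
```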
Agency officials believe that their staffs’ increased experience, together with some process changes implemented in response to lessons learned during the first funding round (discussed later in this report), will enable their staffs to manage the increased workload and maintain the same high standards in the time allotted. However, as the Recovery Act’s obligation deadline draws near, the agencies may face increased pressure to approve awards. Agency officials state that their programs’ goals remain to fund as many projects as possible that meet the requirements of the act and to select the projects that will have the most economic impact; simply awarding funds is not the goal. The continued lack of national broadband data complicates NTIA’s and RUS’s efforts to award broadband stimulus funding in remote, rural areas where it may be needed the most. Although NTIA recently issued grants to states and territories to map broadband services, the National Broadband Map showing the availability of broadband service will not be completed until 2011. The most recent FCC report on currently available Internet access nationwide relies on December 2008 data. Because of the lack of current data, NTIA and RUS are using a cumbersome process to verify the status of broadband services in particular geographic locations. The agencies must collect and assess statements by applicants as well as the aforementioned public comments submitted by existing broadband providers delineating their service areas and available speeds. NTIA and RUS are investing time and resources to review these filings, and in some cases due-diligence reviews have found information in the filings to be inaccurate. During our review of 32 judgmentally selected applications, we found several instances noted by RUS in which companies provided inaccurate information when claiming they were already providing service in a proposed service area. 
For example, when an RUS field representative asked one company to provide supporting information to verify its number of subscribers in its service area during the due-diligence review process, the company admitted the information in its filing was incorrect and withdrew the comment. In addition, for a number of applications we reviewed, NTIA’s contractor had a low or medium level of confidence in the accuracy of the overbuild analysis because data were inconclusive. Because the National Broadband Map will not be completed until 2011, NTIA and RUS will have to complete awards for round two based on existing data. Both agencies have taken steps to streamline their application review processes in an effort to obligate the remaining funds by September 30, 2010. First, the agencies agreed to generally target different types of infrastructure projects and issued separate funding notices for the second round to save time during the eligibility screening phase. Second, the agencies reduced the number of steps in the application review process from two to one, adding some time to the application window and agency review process. NTIA also reduced the basic eligibility factors for BTOP from five to three, moved from a largely unpaid to a paid reviewer model to ensure that reviews were conducted in a timely fashion, and decreased the number of reviewers per application from three to two. These steps allowed the agency to complete the initial portion of its review ahead of schedule, according to BTOP officials. NTIA also split the second-round applications into four groups for due-diligence reviews, allowing staff to concentrate on one group at a time. Due-diligence reviews for the first group were completed in June; awards for this group will be announced in July. Reviews for the second group will be completed in July, with awards to be announced in August; reviews for the third and fourth groups will be completed in August, with final awards to be announced in September. 
Third, NTIA began to use Census tract data, which companies already compile and report to FCC, to verify applicants’ claims and simplified the process to allow existing broadband providers to supply information about their services. RUS is relying on its mapping tool, which shows Census block data but not Census tract data, to determine whether the service area is eligible. According to RUS officials, the tool has been upgraded several times to make it easier for applicants to submit information about existing service providers to the agency. Finally, RUS eliminated funding for the Last Mile Remote project designation, reducing the number of project types to screen for award, and also stopped accepting paper applications. Notwithstanding these efficiencies, a few second-round changes may lengthen the time required to complete due-diligence reviews and obligate funds. For example, on May 28, 2010, after the application deadline had closed for round two, NTIA notified State Broadband Data and Development Grant program recipients that they were able to submit amended and supplemental applications for eligible mapping activities in those states. With regard to BTOP, NTIA also solicited applications for public safety broadband infrastructure projects nationwide through July 1, 2010, which places an additional burden on the agency. The time remaining for due diligence to be performed on these applications is a month shorter than for the first group of round two applications. RUS increased the opportunity for more applications to obtain funding by instituting a “second-chance review” process to allow an applicant to adjust an application that may not have contained sufficient documentation to fully support an award. During the second-chance review, BIP application reviewers will work with applicants to assist them in providing the documentation needed to complete their applications. 
Adding these activities to the BIP application reviewers’ duties may lengthen the time required to complete due-diligence reviews and obligate funds by September 30, 2010. Both agencies have renegotiated with their contractors for greater staffing flexibility. RUS has extended its contract with ICF International to provide BIP program support through 2012. In addition, RUS indicated that its previously established broadband support program made no awards in 2010, freeing staff time for BIP activities. Despite this, NTIA and RUS officials told us that existing staff are overworked and there has been some turnover with contractor support. With the completion of second-round funding and the beginning of the postaward phase, it will be critical for NTIA and RUS to ensure that they have enough staff dedicated to project oversight. Under Section 1512 of the Recovery Act and related OMB guidance, all nonfederal recipients of Recovery Act funds must submit quarterly reports that are to include a list of each project or activity for which Recovery Act funds were expended or obligated and information concerning the amount and use of funds and jobs created or retained by these projects and activities. Under OMB guidance, awarding agencies are responsible for ensuring that funding recipients report to a central, online portal no later than 10 calendar days after each calendar quarter in which the recipient received assistance. Awarding agencies must also perform their own data-quality review and request further information or corrections by funding recipients, if necessary. No later than 30 days following the end of the quarter, OMB requires that detailed recipient reports be made available to the public on the Recovery.gov Web site. In addition to governmentwide reporting, BTOP and BIP funding recipients must also submit program-level reports. BTOP-specific reports. 
The Recovery Act requires BTOP funding recipients to report quarterly on their use of funds and NTIA to make these reports available to the public. Specifically, NTIA requires that funding recipients submit quarterly reports with respect to Recovery Act reporting, as well as BTOP quarterly and annual financial and performance progress reports. BTOP financial reports include budget and cost information on each quarter’s expenses and are used to assess the overall financial management and health of each award and ensure that BTOP expenditures are consistent with the recipient’s anticipated progress. BTOP performance reporting includes project data, key milestones, and project indicator information, such as the number of new network miles deployed, the number of new public computer centers, or the number of broadband awareness campaigns conducted. BIP-specific reports. RUS requires BIP funding recipients to submit quarterly balance sheets, income and cash-flow statements, and data on how many households are subscribing to broadband service in each community, among other information. In addition, RUS requires funding recipients to specifically state in the applicable quarter when they have received 67 percent of the award funds, which is RUS’s measure for “substantially complete.” BIP funding recipients must also report annually on the number of households; businesses; and education, library, health care, and public safety providers subscribing to new or more accessible broadband services. A final source of guidance is the Domestic Working Group, which has highlighted leading practices in grants management. Effective grants management calls for establishing adequate internal control systems, including efficient and effective information systems, training, policies, and oversight procedures, to ensure grant funds are properly used and achieve intended results. 
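The layered Section 1512 deadlines described above — recipient reports due no later than 10 calendar days after each quarter ends, and public posting on Recovery.gov no later than 30 days after quarter end — can be sketched as a simple calculator. The function name and the plain calendar-day arithmetic are illustrative assumptions, not drawn from the OMB guidance itself.

```python
from datetime import date, timedelta

def section_1512_deadlines(quarter_end: date) -> dict:
    """Illustrative Section 1512 reporting-deadline calculator.

    Per the OMB guidance summarized above: recipients report within
    10 calendar days of quarter end; detailed reports are made public
    within 30 days of quarter end.
    """
    return {
        "recipient report due": quarter_end + timedelta(days=10),
        "public posting due": quarter_end + timedelta(days=30),
    }

# Example: the quarter ending June 30, 2010.
deadlines = section_1512_deadlines(date(2010, 6, 30))
print(deadlines["recipient report due"])  # 2010-07-10
print(deadlines["public posting due"])    # 2010-07-30
```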
Some agencies have developed risk-based monitoring criteria to assess where there is a need for heightened monitoring or technical assistance. These criteria can include total funding, prior experience with government grants or loans, independent audit findings, budget, and expenditures. Given the large number of BTOP and BIP grant and loan recipients, including many first-time recipients of federal funding, it is important that NTIA and RUS identify, prioritize, and manage potential at-risk recipients. NTIA. NTIA has developed and is beginning to implement a postaward framework to ensure the successful execution of BTOP. This framework includes three main elements: (1) monitoring and reporting, (2) compliance, and (3) technical assistance. NTIA will use desk reviews and on-site visits to monitor the implementation of BTOP awards and ensure compliance with award conditions by recipients. NTIA plans to provide technical assistance in the form of training, webinars, conference calls, workshops, and outreach for all recipients of BTOP funding to address any problems or issues recipients may have implementing the projects, as well as to assist in adhering to award guidelines and regulatory requirements. NTIA has provided training to recipients in grant compliance and reporting, and has also developed a recipient handbook with a number of checklists to assist recipients with performance and compliance under their federal awards. In addition, NTIA has developed training, handbooks, and other guidance for program staff and grant recipients throughout the entire postaward process and through the completion of BTOP projects in 2013. According to NTIA officials, the agency is preparing a risk-based model for postaward project monitoring and designating three levels of monitoring for grant recipients: routine, intermediate, and advanced. Under this model, program staff will reassess the risk level of each recipient on an annual basis and conduct site visits accordingly. 
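NTIA's three-level model is described above only at a high level; neither its scoring criteria nor its thresholds are public. The sketch below is therefore entirely hypothetical, intended only to illustrate how criteria of the kind mentioned (total funding, prior experience with federal awards, audit findings) might feed a tiered monitoring decision.

```python
# Hypothetical risk-tiering sketch. The criteria, weights, and dollar
# thresholds below are illustrative assumptions, not NTIA's actual model.
def monitoring_level(total_funding, prior_federal_awards, open_audit_findings):
    score = 0
    if total_funding > 25_000_000:        # larger awards draw more scrutiny
        score += 2
    elif total_funding > 5_000_000:
        score += 1
    if not prior_federal_awards:          # first-time recipients carry more risk
        score += 1
    score += min(open_audit_findings, 2)  # unresolved findings add risk
    if score >= 3:
        return "advanced"
    if score >= 1:
        return "intermediate"
    return "routine"

# A small, experienced recipient with a clean audit history:
print(monitoring_level(2_000_000, True, 0))    # routine
# A large, first-time recipient with one open finding:
print(monitoring_level(40_000_000, False, 1))  # advanced
```

Under such a model, the annual reassessment NTIA describes would amount to rescoring each recipient as its funding draws down and audit findings are resolved.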
NTIA has recently reorganized several senior positions to distribute grants management and grants administration responsibilities more evenly among a larger group of personnel, and to more effectively balance workloads. As a result, more NTIA employees will share postaward responsibilities up to September 30, 2010. For fiscal year 2011, the President’s budget request includes nearly $24 million to continue oversight activities, yet even if this amount is appropriated, agency officials said that there is some risk that NTIA will have insufficient resources to implement this comprehensive postaward framework. RUS. RUS is also putting into place a multifaceted oversight framework to monitor compliance and progress for recipients of BIP funding. Unlike NTIA, which is developing a new oversight framework, RUS plans to replicate the oversight framework it uses for its existing Community Connect, Broadband Access and Loan, Distance Learning and Telemedicine, and Rural Electrification Infrastructure Loan programs. However, RUS still has several open recommendations from a Department of Agriculture Inspector General’s report pertaining to oversight of its grant and loan programs. The main components of RUS’s oversight framework are (1) financial and program reporting and (2) desk and field monitoring. According to RUS officials, no later than 30 days after the end of each calendar-year quarter, BIP recipients will be required to submit several types of information to RUS through its Broadband Collection and Analysis System, including balance sheets, income statements, statements of cash flow, summaries of rate packages, the number of broadband subscribers in each community, and each project’s completion status. 
BIP funding recipients will also be required to submit detailed data on the numbers of households and businesses subscribing to or receiving improved broadband service and the numbers of schools, libraries, health care facilities, and public safety organizations obtaining either new or improved access to broadband service. In addition, RUS will conduct desk and site reviews using 52 permanent general field representatives and field accountants. RUS also has access to 15 additional temporary field staff who can assist with BIP oversight. Moreover, RUS extended its contract with ICF International through 2012, giving the agency additional resources in conducting program oversight. The President’s budget request does not include additional resources to continue BIP oversight activities in fiscal year 2011, but RUS officials believe they have sufficient resources to oversee BIP-funded recipients. Overall, both NTIA and RUS have taken steps to address the concerns we noted in our November 2009 report. For example, the agencies are developing plans to monitor BTOP- and BIP-funded recipients and are working to develop objective, quantifiable, and measurable goals to assess the effectiveness of the broadband stimulus programs. Finally, NTIA now has audit requirements in place for annual audits of commercial entities receiving BTOP grants. Despite this progress, some risks to projects’ success remain. Scale and Number of Projects. NTIA and RUS will need to oversee a far greater number of projects than in the past. As we reported in 2009, the agencies face the challenge of monitoring these projects with fewer staff than were available for their legacy grant and loan programs. Although the exact number of funded projects is still unknown, based on the first funding round’s results and the amount of funding remaining to be awarded, the agencies could fund several hundred projects each before September 30, 2010. 
In addition, BTOP- and BIP-funded projects are likely to be much larger and more diverse than projects funded under the agencies’ prior broadband-related programs. For example, NTIA and RUS expect to fund several types of broadband projects, and these projects will be dispersed nationwide, with at least one project in every state. NTIA is funding several different types of broadband projects, including Last Mile and Middle Mile broadband infrastructure projects for unserved and underserved areas, public computer centers, and sustainable broadband adoption projects. RUS can fund Last Mile and Middle Mile infrastructure projects in rural areas across the country. Adding to these challenges, NTIA and RUS must ensure that the recipient constructs the infrastructure project in the entire project area, not just the area where it may be most profitable for the company to provide service. For example, the Recovery Act mandates that RUS fund projects where at least 75 percent of the funded area is in a rural area that lacks sufficient access to high-speed broadband service to facilitate rural economic development; these are often rural areas with limited demand, and the high cost of providing service to these areas makes them less profitable for broadband providers. The rest of the project can be located in an area that may already have service from an existing provider. Companies may have an incentive to build first where they have the most opportunity for profit and leave the unserved parts of their projects for last in order to reach as many subscribers as possible. In addition, funding projects in low-density areas where there may already be existing providers could potentially discourage further private investment in the area and undermine the viability of both the incumbents’ investment and the broadband stimulus project. 
During our review of BIP applications, we found several instances in which RUS awarded projects that would simultaneously cover unserved areas and areas with service from an existing provider. To ensure that Recovery Act funds reach hard-to-serve areas, recipients must deploy their infrastructure projects throughout the proposed area on which their award was based. NTIA and RUS oversight and monitoring procedures will help ensure that the unserved areas are in fact built out. Lack of Sufficient Resources. Both NTIA and RUS face the risk of having insufficient staff and resources to actively monitor BTOP- and BIP-funded projects after September 30, 2010. BTOP and BIP projects must be substantially complete within 2 years of the award date and fully complete within 3 years of the award date. As a result, some projects are not expected to be complete until 2013. However, the Recovery Act does not provide budget authority or funding for the administration and oversight of BTOP- and BIP-funded projects beyond September 30, 2010. Effective monitoring and oversight of over $7 billion in Recovery Act broadband stimulus funding will require significant resources, including staffing, to ensure that recipients fulfill their obligations. NTIA and RUS officials believe that site visits, in particular, are essential to monitoring progress and ensuring compliance; yet, it is not clear if they will have the resources to implement their oversight plans. As discussed earlier, NTIA requested fiscal year 2011 funding for oversight, but the agency does not know whether it will receive the requested funding and whether the amount would be sufficient. RUS intends to rely on existing staff and believes it has sufficient resources; however, RUS field staff members have other duties in addition to oversight of BIP projects. 
Because of this, it is critical that the oversight plans the agencies are developing recognize the challenges that could arise from a possible lack of resources for program oversight after September 30, 2010. For example, the agencies’ staff will need to conduct site visits in remote locations to monitor project development, but a lack of resources will pose challenges to this type of oversight. Planning for these various contingencies can help the agencies mitigate the effect that limited resource levels may have on postaward oversight. The Recovery Act broadband stimulus programs are intended to promote the availability and use of broadband throughout the country, as well as create jobs and stimulate economic development. In the first round, NTIA and RUS funded a wide variety of projects in most states and territories to meet these goals. In doing so, the agencies developed and implemented an extensive and consistent process for evaluating project applications. In addition, the agencies made efforts to gather and apply lessons learned from the first funding round to the second round in order to streamline the application review process, making it easier for applicants to submit and officials to review applications. However, the agencies must also oversee funded projects to ensure that they meet the objectives of the Recovery Act. To date, NTIA and RUS have begun to develop and implement oversight plans to support such efforts and have developed preliminary risk-based frameworks to monitor the progress and results of broadband stimulus projects. However, the Recovery Act does not provide funding beyond September 30, 2010. As the agencies continue to develop their oversight plans, it is critical that they anticipate possible contingencies that may arise because of the limited funding and target their oversight resources to ensure that recipients of Recovery Act broadband funding complete their projects in a manner consistent with their applications and awards. 
To ensure effective monitoring and oversight of the BTOP and BIP programs, we recommend that the Secretaries of Agriculture and Commerce incorporate into their risk-based monitoring plans steps to address the variability in funding levels for postaward oversight beyond September 30, 2010. We provided a draft of this report to the Departments of Agriculture and Commerce for review and comment. In its written comments, RUS agreed that awarding and obligating the remaining funds under the BIP program will be challenging and noted that the loan obligation process for the second funding round will be expedited because financial documents have been crafted and are now in place. In addition, RUS agreed that there is a lack of data on broadband availability throughout the country and stated that the agency is using field representatives and other Rural Development field staff to support the BIP program as needed. RUS also noted that it is developing contingency plans to retain the majority of its temporary Recovery Act staff beyond September 30, 2010. RUS took no position on our recommendation. In its comments, NTIA stated that it is on schedule to award all of its Recovery Act funds by September 30, 2010. In addition, NTIA noted that the President’s fiscal year 2011 budget request, which includes authority and funding for NTIA to administer and monitor project implementation, is vital to ensuring that BTOP projects are successful and that recipients fulfill their obligations. NTIA took no position on our recommendation. Finally, the agencies provided technical comments that we incorporated, as appropriate. RUS’s and NTIA’s full comments appear in appendixes III and IV, respectively. We are sending copies of this report to the Secretary of Agriculture and the Secretary of Commerce, and interested congressional committees. This report is available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any further questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The objectives of this report were to examine (1) the results of the first broadband stimulus funding round; (2) the extent to which the National Telecommunications and Information Administration’s (NTIA) and the Rural Utilities Service’s (RUS) due-diligence review substantiated information in the awardees’ applications; (3) the challenges, if any, facing NTIA and RUS in awarding the remaining broadband stimulus funds; and (4) the actions, if any, NTIA and RUS are taking to oversee grant and loan recipients. To describe the results of the first funding round, we obtained and analyzed data from NTIA and RUS and the agencies’ Web sites and press releases, interviewed agency officials, and reviewed agency program documentation. We are reporting publicly available data that NTIA and RUS provided on the first round broadband stimulus awards with the intent to describe the number of awards, the entities receiving first round funding, and the types of projects. This information is presented for descriptive purposes. The data are available online at BroadbandUSA.gov, the Web site through which NTIA and RUS publicly report Broadband Technology Opportunities Program (BTOP) and Broadband Initiatives Program (BIP) application and award data. In addition, we obtained and reviewed internal application information and award documentation from both agencies. We also interviewed NTIA and RUS officials who were involved in reviewing applications and awarding the broadband stimulus funds. 
During these interviews, we reviewed the progress NTIA and RUS were making to complete the first funding round and discussed the status of the awards, including the number of awards that had been obligated, and progress made during the second funding round. To familiarize ourselves with the programs and track their ongoing status, we reviewed NTIA and RUS program documentation, both publicly available online and internal documents provided by the agencies; reviewed a November 2009 GAO report on NTIA’s and RUS’s broadband stimulus programs; and reviewed April 2010 reports by the Congressional Research Service (CRS) and the Department of Commerce Inspector General (Commerce IG) covering first funding round applications, awards, and program management. To determine the extent to which NTIA’s and RUS’s due-diligence reviews substantiated information in awardees’ applications, we reviewed a judgmental sample of 32 awarded application files, including 15 from BTOP and 17 from BIP. In choosing our sample, we considered individual award amounts, aggregate amounts of awards per state or territory (state), type of project, type of applicant, and geographic location of the state. To determine our sample criteria, we analyzed descriptive statistics for all awards and grouped states into three categories: “below $50 million” (low); “between $50 million and $100 million” (middle); and “above $100 million” (high). Because BIP’s aggregate award amounts per state were slightly higher overall than BTOP’s, we chose to review a slightly larger number of BIP application files than BTOP files. We chose states from among the three award categories so that the representation of low-, middle-, and high-award states approximated that in the overall population. After choosing our sample, we met with agency officials to discuss the contents of the application files and clarify the requirements of the due-diligence review process. 
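The grouping and proportional selection described above can be sketched in a few lines. The state labels and award totals below are hypothetical, and the sketch omits the other judgmental factors (project type, applicant type, and geographic location) that the actual selection weighed.

```python
# Illustrative sketch of the stratified, judgmental sampling approach
# described in the methodology. State labels and dollar totals are
# hypothetical placeholders, not actual award data.
from collections import defaultdict

# Hypothetical aggregate award totals per state, in millions of dollars.
awards = {"A": 30, "B": 45, "C": 75, "D": 60, "E": 120, "F": 150, "G": 90}

def bin_state(total_millions):
    """Group a state by aggregate award amount, as in the report."""
    if total_millions < 50:
        return "low"       # below $50 million
    elif total_millions <= 100:
        return "middle"    # between $50 million and $100 million
    return "high"          # above $100 million

bins = defaultdict(list)
for state, total in awards.items():
    bins[bin_state(total)].append(state)

# Size each bin's share of the sample so that the representation of
# low-, middle-, and high-award states approximates the population's.
sample_size = 4
population = len(awards)
for category, states in bins.items():
    share = round(sample_size * len(states) / population)
    print(category, states[:share])
```

The proportional sizing is the key idea: a bin holding three-sevenths of the states contributes roughly three-sevenths of the sample.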
Then, we arranged to inspect the agency files: RUS provided electronic access to its due-diligence materials for each application via an online Web site, and we performed our file review remotely; NTIA provided us with a CD-ROM containing the relevant project files, and we reviewed these at the Department of Commerce. We reviewed the decision memos summarizing the total output of the due-diligence review, documentation of environmental reviews, project budgets, construction schedules, and assessment of public notice filings. We recorded our findings on a data collection instrument and verified the results by using two separate reviewers. We did not evaluate the agencies’ decisions to award or deny applications or the potential for success of any project. Rather, we assessed the extent to which NTIA and RUS developed and implemented a due-diligence review process. In addition to reviewing the sample, we interviewed agency officials and two award recipients. To determine the challenges, if any, that NTIA and RUS face in awarding the remaining broadband stimulus funds, we studied the requirements set forth in the Recovery Act; evaluated changes between the first- and second-round funding notices; and interviewed agency officials, representatives of five telecommunications associations, and two award recipients. We also reviewed prior GAO, CRS, and Commerce IG reports to learn about issues affecting the broadband stimulus programs. We also monitored agency press releases and tracked notices published on the BroadbandUSA.gov Web site. Finally, to determine the actions NTIA and RUS are taking to oversee grant and loan recipients, we interviewed agency officials about plans to monitor and oversee awardees. During these meetings, we discussed Recovery Act reporting requirements, as well as specific BTOP and BIP requirements. We also reviewed agency plans and guidance provided to recipients. 
We compared those plans to requirements established in the Recovery Act and guidance from the Office of Management and Budget, the Domestic Working Group, and GAO. We conducted this performance audit from February through August 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 9 provides information on the 10 BTOP and 3 BIP projects covering areas in multiple states. In addition to the contact named above, Michael Clements, Assistant Director; Jonathan Carver; Elizabeth Eisenstadt; Brandon Haller; Tom James; Elke Kolodinski; Kim McGatlin; Josh Ormond; and Mindi Weisenbloom made key contributions to this report.
Access to affordable broadband service is seen as vital to economic growth and improved quality of life. To extend broadband access and adoption, the American Recovery and Reinvestment Act (Recovery Act) provided $7.2 billion to the Department of Commerce's National Telecommunications and Information Administration (NTIA) and the Department of Agriculture's Rural Utilities Service (RUS) for grants or loans to a variety of program applicants. The agencies are awarding funds in two rounds and must obligate all funds by September 30, 2010. This report addresses the results of the first broadband stimulus funding round, the extent to which NTIA's and RUS's application reviews substantiated application information, the challenges facing NTIA and RUS in awarding the remaining funds, and actions taken to oversee grant and loan recipients. GAO analyzed program documentation, reviewed a judgmentally selected sample of applications from first round award recipients, and interviewed agency officials and industry stakeholders. In the first round of broadband stimulus funding that began in July 2009 and ended in April 2010, NTIA and RUS received over 2,200 applications and awarded 150 grants, loans, and loan/grant combinations totaling $2.2 billion to a variety of entities in nearly every state and U.S. territory. This funding includes $1.2 billion for 82 projects awarded by NTIA and more than $1 billion for 68 projects awarded by RUS. NTIA primarily awarded grants to public entities, such as states and municipalities, whereas RUS made grants, loans, and loan/grant combinations primarily to private-sector entities, such as for-profit companies and cooperatives. NTIA and RUS consistently substantiated information in first round award recipients' applications. The agencies and their contractors reviewed financial, technical, environmental, and other documents and determined the feasibility and reasonableness of each project. 
GAO's review of 32 award recipient applications found that the agencies consistently reviewed the applications and substantiated the information as specified in the first funding notice. In each of the files, GAO observed written documentation that the agencies and their contractors reviewed and verified pertinent application materials, and requested additional documentation where necessary. To meet the Recovery Act's September 30, 2010, deadline for obligating broadband funds, NTIA and RUS must award approximately $4.8 billion--or more than twice the amount they awarded during the first round--in less time than they had for the first round. As the end of the Recovery Act's obligation deadline draws near, the agencies may face increased pressure to approve awards. NTIA and RUS also lack detailed data on the availability of broadband service throughout the country, making it difficult to determine whether a proposed service area is unserved or underserved, as defined in the program funding notices. To address these challenges, NTIA and RUS have streamlined their application review processes by, for example, eliminating joint reviews and reducing the number of steps in the due-diligence review process, and NTIA began using Census tract data to verify the presence of service. NTIA and RUS are putting oversight plans in place to monitor compliance and progress for broadband stimulus funding recipients, but some risks remain. The agencies will need to oversee far more projects than in the past and these projects are likely to be much larger and more diverse than projects funded under the agencies' prior broadband-related programs. Additionally, NTIA and RUS must ensure that the recipients construct the infrastructure projects in the entire project area, not simply the area where it may be most profitable for the company to provide service. Both NTIA and RUS face the risk of having insufficient resources to actively monitor Recovery Act funded broadband projects. 
Because of this, planning for a possible lack of resources for program oversight after September 30, 2010, can help the agencies mitigate the effect of limited resources on postaward oversight. The Secretaries of Agriculture and Commerce should incorporate into their risk-based monitoring plans steps to address variability in funding levels for postaward oversight beyond September 30, 2010. Neither agency took a position on GAO's recommendation, and both noted steps being taken to complete their respective programs.
To assess how agencies are using the results of single audits, we conducted a survey of the 24 agencies subject to the CFO Act. We pretested our survey with one federal agency, solicited comments from OMB, and modified the survey based on the comments we received. The survey included two sections. The first section captured background information on agency federal awards programs, the single audit process from an agencywide perspective, and the offices within the agency that are responsible for fulfilling the task of implementing the various single audit responsibilities defined under OMB Circular A-133. The second part of the survey captured information on how agency CFO, IG, and program offices use the results of single audits in each agency’s largest grant program. We distributed the surveys to the agencies for completion. We then performed follow-up interviews with representatives from CFO, IG, and program offices to obtain, discuss, and clarify their survey responses. Our survey results reflect the information provided by and the opinions of the agency officials who participated in our survey. We did not independently verify the responses to our questions. We received responses from all of the CFO Act agencies. One of the 24 agencies returned but did not complete the survey because it does not have grant-making authority and, therefore, has no experience with single audits. As a result, our survey results are based on responses from 23 agencies. We conducted our work from July 2001 through December 2001 in accordance with generally accepted government auditing standards. We discussed a draft of this report with representatives from OMB and have incorporated their comments and views where appropriate. According to OMB, federal awards for fiscal year 2001 totaled about $325 billion of the $1.8 trillion federal budget. 
The Departments of Agriculture, Education, Health and Human Services, Housing and Urban Development, and Transportation were responsible for managing about 86 percent of the federal awards in fiscal year 2001. The Single Audit Act, as amended, established the concept of the single audit to replace multiple grant audits with one audit of the recipient as a whole. As such, a single audit is an organizationwide audit that focuses on the recipient’s internal controls and compliance with laws and regulations governing federal awards and should be viewed as a tool that raises relevant or pertinent questions rather than as a document that answers all questions. Federal awards include grants, loans, loan guarantees, property, cooperative agreements, interest subsidies, insurance, food commodities, and direct appropriations and federal cost reimbursement contracts. The objectives of the Single Audit Act, as amended, are to promote sound financial management, including effective internal controls, with respect to federal awards administered by nonfederal entities; establish uniform requirements for audits of federal awards administered by nonfederal entities; promote the efficient and effective use of audit resources; reduce burdens on state and local governments, Indian tribes, and nonprofit organizations; and ensure that federal departments and agencies, to the maximum extent practicable, rely upon and use audit work done pursuant to the act. Recipients of federal awards who expend $300,000 or more in a year are required to comply with the Single Audit Act’s requirements. In general, they must (1) maintain internal control over federal programs, (2) comply with laws, regulations, and the provisions of contracts or grant agreements, (3) prepare appropriate financial statements, including the Schedule of Expenditures of Federal Awards, (4) ensure that the required audits are properly performed and submitted when due, and (5) follow up and take corrective actions on audit findings. 
OMB Circular A-133 establishes policies for federal agency use in implementing the Single Audit Act, as amended, and provides an administrative foundation for consistent and uniform audit requirements for nonfederal entities that administer federal awards. It details federal responsibilities with respect to informing grantees of their responsibilities under the act. A significant part of OMB Circular A-133 is the Compliance Supplement. This document serves as a source of information to aid auditors in understanding federal program objectives, procedures, and compliance requirements relevant to the audit, and it identifies audit objectives and suggested procedures for auditors’ use in determining compliance with the requirements. For example, it includes guidance on audit procedures applicable to 14 areas including allowable activities, allowable costs, cash management, eligibility, and reporting. (Appendix III lists and briefly describes the 14 areas.) Organizations that must comply with the Single Audit Act, as amended, are required to submit a reporting package to the FAC. The FAC serves as the central collection point, repository, and distribution center for single audit reports. Its primary functions are to receive the SF-SAC Form—a data collection form that contains summary information on the auditor, auditee and its federal programs, and audit results—and the audit report from the auditee, archive copies of the SF-SAC Form and audit report, forward a copy of the audit report to each federal awarding agency that has provided direct funding to the auditee when the report identifies a finding relating to that agency’s awards, and maintain an electronic database that is accessible through the Internet. 
In our June 1994 report, Single Audit: Refinements Can Improve Usefulness (GAO/AIMD-94-133), nearly two-thirds of the program managers we interviewed said that a database of single audit information would be a significant help in comparing information about entities operating their programs. Eighty percent of the managers said they would like to use the database to identify all entities operating their programs that had serious internal control or noncompliance problems disclosed in single audit reports. The Single Audit Act Amendments of 1996 led to the establishment of an automated database of single audit information—the FAC database. OMB Circular A-133 requires all entities that must submit single audit reports to the FAC to prepare and submit a data collection form (SF-SAC Form) with the audit report. The FAC uses this form as the source for its automated, Internet-accessible database of information contained in single audit reports. The database contains about 4 years of information on over 30,000 annual single audit reports. The various data query options available provide potential users, including program managers, auditors, and other interested parties, with significant amounts of readily available information on grant recipient financial management and internal control systems and on compliance with federal laws and regulations. 
Although agencies have identified many uses for the single audit results, our survey results show that they are generally not using the FAC automated database to obtain summary information on the audit results or the entities that are receiving funds under their programs. Rather, they reported developing their own systems or methods to obtain information from the reports. According to our survey results, agency program offices are primarily responsible for ensuring the application of the provisions set forth in OMB Circular A-133. For example, in completing the survey, program office officials indicated that they (1) ensure that award recipients are given information that describes the federal award, (2) advise recipients of other applicable award requirements, (3) advise recipients of the requirement to obtain a single audit when they expend $300,000 or more in federal awards in a year, (4) ensure that single audits are completed and the reports are received in a timely manner, and (5) follow up on issues identified in the reports that require corrective action. Specifically, 20 agency program offices responded that they ensure recipients are given the information necessary to describe the federal award and advise recipients of other applicable award information, 19 responded that they advise recipients of the requirements to obtain a single audit when they expend $300,000 or more in federal awards in a year, 19 responded that they follow up on issues that are identified in the reports that require corrective action, 17 responded that they provide information to auditors about the federal program, and 10 responded that they ensure that single audits are completed and the reports are received in a timely manner. Additionally, at some agencies more than one office responded that they are responsible for the application of the provisions of OMB Circular A-133. 
The FAC distributes single audit reports to each federal awarding agency that has provided direct funding and for which the report identifies an audit finding related to an award managed by that agency. Based on our survey, receipt of single audit reports from the FAC and distribution of the reports to the applicable agency office is predominately the responsibility of the OIG. Our results show that 18 OIGs responded that they receive the single audit reports directly from the FAC and that they distribute them to applicable agency offices. Audits provide important information on recipient performance and are a critical control that agencies can use to help ensure that entities that receive federal funds use those funds in accordance with program rules and regulations. Agency OIGs play a key role in this area by performing quality control reviews (QCR) to ensure that the audit work performed complies with auditing standards. Our survey results show that 10 of the CFO Act agency OIGs performed 109 QCRs during fiscal year 2001, although this total may be overstated since OIGs occasionally perform joint QCRs and our survey did not capture information on the number of times this occurred. Although the number of QCRs performed is small compared to the approximately 30,000 single audits performed annually, several OIGs conducting QCRs have identified problems with the audit work performed. For example, 7 OIGs noted problems with the internal control and/or compliance testing performed by the auditors, and 3 OIGs reported problems relating to auditor compliance with generally accepted government auditing standards. Audit follow-up is an integral part of good management and is a shared responsibility of agency management officials and auditors. Corrective action taken by the recipient on audit findings and recommendations is essential to improving the effectiveness and efficiency of government operations. 
In addition, federal agencies need to ensure that recipients take timely and effective corrective action. OMB Circular A-133 notes that audit follow-up is the responsibility of the federal awarding agency. The Circular requires agencies to issue a management decision on audit findings within 6 months after receipt of the recipient’s audit report and to ensure that the recipient takes appropriate and timely corrective action. Analysis of our survey results indicates that both the IG and program offices have a role in the audit follow-up process. For example, 15 IG and 9 program offices responded that they are responsible for reviewing reports to verify that the report contains agency program-specific information. When single audit reports do not have enough information, both IG and program offices indicated that they follow up with either the recipient or the auditor. Thirteen IG and 14 program offices stated that they follow up with the recipient, and 13 IG and 10 program offices stated they follow up with the auditor. Program offices, on the other hand, are responsible for evaluating the corrective action plans filed by recipients to determine whether they address the audit findings. Sixteen program offices responded that they are responsible for evaluating the corrective action plans to determine whether the issues are valid and what corrective action is necessary. Furthermore, the program offices at 10 agencies stated that they rely on subsequent audits to determine whether corrective actions have been taken. At 22 of the agencies, officials in at least one of the CFO, IG, and/or program offices responded that they use single audits as a tool to monitor compliance with administrative and program requirements addressed in the OMB Circular A-133 Compliance Supplement and to monitor the adequacy of internal controls. Six agencies reported that the CFO, IG, and program offices all perform this function. 
Six agencies reported that some combination of CFO, IG, and program offices perform this function. Ten agencies reported that one office performs the function, and that office varies across the 10 agencies. The next most frequent uses reported were for identifying leads for additional audits (18 agencies) and as a preaward check for determining how recipients managed previous awards (14 agencies). Further, they reported that the single audit reports are used in preaward checks to identify findings that may affect the program area of operations and identify questioned or unallowable costs incurred by the recipient. Agencies reported that these checks may affect future awards. Additionally, the survey results indicated that between 6 and 12 agencies use single audit results to identify leads for program office site visits (12 agencies), as support for closeout of the award (12 agencies), to hold agency program offices accountable for administrative and program compliance (12 agencies), to support the agency’s financial statements (10 agencies), and as a source of program information for the agency’s performance plan or annual accountability report (6 agencies). As can be seen, agencies report using single audits for a number of purposes. However, between 1 and 8 agencies indicated that, for several reasons, they did not use the reports for some or all of these purposes. When asked why they did not use single audit reports, several agencies noted that their programs were too small to be covered in the scope of an audit performed under the Single Audit Act. For example, the Single Audit Act requires auditors to use combined expenditure and risk-based criteria to determine which programs to include in the scope of a single audit. Since the expenditure portion of the criteria identifies awards with large-dollar expenditures, agencies whose programs do not meet this criterion are less likely to have their programs audited during a single audit. 
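The combined expenditure and risk-based selection just described can be illustrated with a simplified sketch. The threshold value and the boolean risk flag are illustrative assumptions; the actual selection criteria in OMB Circular A-133 are considerably more detailed.

```python
# Simplified, hypothetical sketch of combining an expenditure test with
# a risk assessment to decide which programs fall within the scope of a
# single audit. The threshold here is illustrative only.
def in_audit_scope(expenditures, high_risk, threshold=300_000):
    """A program is selected if it has large-dollar expenditures or is
    otherwise assessed as high risk."""
    return expenditures >= threshold or high_risk

# A small program escapes the expenditure test, which is why agencies
# with small programs reported being less likely to see them audited;
# the risk assessment can still pull such a program into scope.
small_low_risk = in_audit_scope(50_000, high_risk=False)    # not selected
small_high_risk = in_audit_scope(50_000, high_risk=True)    # selected
```

The sketch shows why small-dollar programs fall outside the audit scope unless the risk-based portion of the criteria flags them.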
Additionally, agencies said the single audit reports did not provide relevant information for specific purposes such as support for the agency financial statements or holding federal program offices accountable for administrative and program compliance. Other reasons provided for not using single audit reports include limited staff resources and competing priorities. Our survey results indicate that 11 agencies routinely use the FAC database and that usage is distributed among the CFO, IG, and program offices. For example, the 11 agencies indicated that they use the database to identify recipients that have incurred questioned costs, have made improper payments, or both. In addition, 8 agencies noted that they use the database to determine whether large-dollar or complex programs have significant findings such as adverse opinions on recipient compliance with program laws and regulations. Survey respondents also indicated that they use the FAC database to perform other tasks, such as tracking the status of audit-finding resolution, determining whether the recipient has filed its single audit report, serving as a source for audit leads, identifying trends among recipients, and verifying the accuracy of the Schedule of Expenditures of Federal Awards. Those agencies that do not use the database reported that they rely on the FAC to send them the single audit reports and that they review the hard copy reports to obtain information on the agency’s programs instead of the database. In discussions with personnel at 4 agencies, we learned that they were unfamiliar with the FAC database and how it could be used. These officials did express interest in using the database and inquired about the availability of training. 
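The kinds of queries agencies described running against single audit data (flagging questioned costs, adverse opinions, and unfiled reports) can be sketched as simple filters. The records and field names below are hypothetical; the actual FAC database is queried through the Clearinghouse's Web interface, not through code like this.

```python
# Illustrative sketch of the kinds of queries agencies run against
# single audit data. Records and field names are hypothetical.
reports = [
    {"recipient": "City X", "questioned_costs": 25000,
     "compliance_opinion": "unqualified", "report_filed": True},
    {"recipient": "County Y", "questioned_costs": 0,
     "compliance_opinion": "adverse", "report_filed": True},
    {"recipient": "Town Z", "questioned_costs": 0,
     "compliance_opinion": "unqualified", "report_filed": False},
]

# Recipients with questioned costs -- a common preaward check.
flagged = [r["recipient"] for r in reports if r["questioned_costs"] > 0]

# Recipients with adverse opinions on compliance with program laws
# and regulations.
adverse = [r["recipient"] for r in reports
           if r["compliance_opinion"] == "adverse"]

# Recipients that have not filed a required single audit report.
missing = [r["recipient"] for r in reports if not r["report_filed"]]
```

Each filter corresponds to one of the reported uses: preaward checks, spotting significant findings, and tracking report filing status.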
We are sending copies of this report to the ranking minority member, Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, House Committee on Government Reform; the chairman and ranking minority member, Senate Committee on Appropriations; the chairman and ranking minority member, House Committee on Appropriations; the chairman and ranking minority member, Senate Committee on Governmental Affairs; the chairman and ranking minority member, House Committee on Government Reform; the chairman and ranking minority member, Senate Budget Committee; and the chairman and ranking minority member, House Budget Committee. We are also sending copies of this report to the director of the Office of Management and Budget and the agency CFOs and IGs. Copies of this report will be made available to others upon request. This report will also be available on GAO’s home page (http://www.gao.gov). Please call me at (213) 830-1065 or Tom Broderick, Assistant Director, at (202) 512-8705 if you or your staff have any questions about the information in this report. Key contributors to this report were Cary Chappell, Mary Ellen Chervenic, Valerie Freeman, Stuart Kaufman, and Gloria Hernandez-Saunders. According to Office of Management and Budget (OMB) figures, federal awards for fiscal year 2001 totaled $325 billion of the $1.8 trillion budget. This assistance includes grants, loans, loan guarantees, property, cooperative agreements, interest subsidies, insurance, food commodities, and direct appropriations and federal cost reimbursement contracts.

Fiscal Year 2001 Grants by Agency to State and Local Governments

According to OMB figures, the Department of Health and Human Services is responsible for managing 54 percent of the $325 billion in federal awards provided during fiscal year 2001. 
The Departments of Transportation, Housing and Urban Development, Education, and Agriculture are responsible for managing an additional 32 percent of federal awards.

[Figure: Top Ten Programs for Fiscal Year 2001 (in billions)]

According to OMB figures, the Department of Health and Human Services managed 5 of the top 10 federal awards programs in fiscal year 2001. These programs are Medicaid, Temporary Assistance for Needy Families, Head Start, Foster Care, and Child Support Enforcement.

Briefing Section II—Single Audit Processes and Awarding Agency Responsibilities: Organizations Performing Selected A-133 Responsibilities

According to our survey results, agency program offices are primarily responsible for ensuring the application of the provisions set forth in OMB Circular A-133, Audits of States, Local Governments, and Non-Profit Organizations. For example, 20 agency program offices responded that they ensure that recipients are given the information necessary to describe the federal award and advise recipients of other applicable award information. Nineteen responded that they advise recipients of the requirement to obtain a single audit when they expend $300,000 or more in federal awards in a year, 19 responded that they follow up on issues that are identified in the reports that require corrective action, 17 responded that they provide information to auditors about the federal program, and 10 responded that they ensure that single audits are completed and that the reports are received in a timely manner. Additionally, at some agencies more than one office responded that they are responsible for the application of the provisions of OMB Circular A-133. For example, the chief financial officer (CFO) and inspector general (IG) offices are involved in providing information to auditors performing single audits and in addressing issues that require corrective action. 
While the majority of agencies hold program offices responsible for such tasks, 3 agencies established a separate function within the CFO’s office to ensure proper oversight of federal awards. While these agencies award relatively small amounts of federal money, they felt it was important to maintain proper oversight. Agencies responded that the primary way they promote compliance with OMB Circular A-133 is by mandating it in regulations, agency policy directives, or guidance on grants administration, and by including it in the grant award document. The accompanying table listed the following responsibilities:
- Provide recipients the information necessary to describe the federal award.
- Advise recipients of other applicable award requirements and provide information as requested.
- Advise recipients of the requirement to obtain a single audit when they expend $300,000 or more in federal awards in a year.
- Address issues that are identified in single audit reports that require corrective action.
- Provide information to auditors on agency programs as requested.
- Ensure single audits are completed and reports are received in a timely manner.
NOTE: Rows do not add across to total agencies because we received responses from multiple offices within an agency.

Briefing Section II—Single Audit Processes and Awarding Agency Responsibilities: Organizations Performing Selected A-133 Responsibilities

OMB Circular A-133 requires the Federal Audit Clearinghouse (FAC) to distribute single audit reports to the federal agencies. The FAC distributes reports to each federal agency that provides federal awards and for which the report identifies an audit finding related to an award managed by that agency. Based upon our survey, receipt of single audit reports from the FAC and distribution of the reports within the agency are predominantly Office of Inspector General (OIG) responsibilities. Our results show that 18 OIGs receive the single audit reports directly from the FAC and distribute them to applicable agency offices. 
Under OMB Circular A-133, federal award recipients are assigned either a cognizant agency for audit or an oversight agency for audit, depending on the amount of federal awards they expend. The agency that provides the predominant amount of direct funding to a recipient is responsible for carrying out the functions of the cognizant or oversight agency, unless OMB makes a specific cognizant agency for audit assignment. The cognizant agency for audit is required to conduct quality control reviews (QCR) of selected audits made by nonfederal auditors. The accompanying table listed the following responsibilities:
- Receive single audit reports from the FAC.
- Distribute single audit reports to the applicable agency office.
- Obtain or conduct QCRs of selected audits made by nonfederal auditors, and provide the results, when appropriate, to other interested organizations.
NOTE: Rows do not add across to total agencies because we received responses from multiple offices within an agency.

Briefing Section II—Single Audit Processes and Awarding Agency Responsibilities

Analysis of our survey results indicates that both the IG and program offices are responsible for the audit follow-up process. For example, 15 IG and 9 program offices responded that they are responsible for reviewing reports to verify that the report contains agency program-specific information. When single audit reports do not have enough information, both IG and program offices follow up with either the recipients or the auditor. Thirteen IG and 14 program offices stated they follow up with the recipient, and 13 IG and 10 program offices stated that they follow up with the auditor. Program offices, on the other hand, are responsible for evaluating the corrective action plans filed by recipients to determine whether they address the audit findings. As shown on the accompanying slide, 16 program offices responded that they are responsible for evaluating the corrective action plans to determine their validity. 
Furthermore, the program offices at 10 agencies stated that they rely on subsequent audits to determine if corrective actions have been taken. To facilitate follow-up procedures, automated or manual audit tracking systems are necessary. The results of our interviews show that most agencies use a tracking system to track single audit findings. NOTE: Rows do not add across to total agencies because we received responses from multiple offices within an agency.

Briefing Section III—How Agencies Use Single Audits: Agency Uses of Single Audits

Review of the surveys indicated that one or more offices at 22 agencies use single audits as a tool to monitor compliance with administrative and program requirements and to monitor the adequacy of recipients’ internal controls. Five agencies reported that the CFO, IG, and program offices all perform these functions. Six agencies reported that some combination of CFO, IG, and program offices perform them, and 11 agencies reported that one office performs this function. Our results also indicate that many agency personnel read all single audit reports they receive to identify noncompliance with program requirements or inadequacy of internal controls. NOTE: Rows do not add across to total agencies because we received responses from multiple offices within an agency.

Briefing Section III—How Agencies Use Single Audits: Agency Uses of Single Audits

Single-audit-report leads for follow-on work can come from a review of the entity’s financial statements or the auditor’s findings. Further, while single audit report findings are supposed to be corrected by the entities, some findings may indicate problems that need further investigation to be fully understood and effectively resolved. Thus, information from single audit reports may indicate the possible need for follow-on audits or additional review and analysis by program officials or both. 
Eighteen agencies responded that they use single audits as a source of leads for additional audits. Fourteen agencies said they use single audits as a preaward check to determine how the recipient managed previous awards. These agencies responded that single audit reports are used in preaward checks to identify findings that may affect the program area of operations, questioned or unallowable costs incurred by the recipient, and findings that may affect future awards. Program officials at 12 agencies responded that single audits are used as a source of leads to select recipients for program site visits. Twelve agencies said they use single audit reports as support for award closeout.

Briefing Section III—How Agencies Use Single Audits: Agency Uses of Single Audits

Survey results indicate that 12 of the 24 CFO agencies use single audit results to hold agency program offices accountable for administrative and program compliance. Ten agencies responded that they use single audit reports to support the agency’s financial statements. Six agencies responded that they used the results of single audits as a source of program information for the agency’s performance plan or annual accountability report.

Briefing Section III—How Agencies Use Single Audits: Why Agencies Do Not Use Single Audit Reports

As indicated in the preceding slides, agencies use single audits for a number of purposes. However, between 1 and 8 agencies indicated that, for several reasons, they did not use the reports for these purposes. When asked why they did not use single audit reports for a particular purpose, between 4 and 8 agencies noted that their programs were too small to be covered by the Single Audit Act. For example, the Single Audit Act requires auditors to use combined expenditure and risk-based criteria to determine which programs to include in the scope of a single audit. 
Since the expenditure portion of the criteria identifies awards with large-dollar expenditures, agencies whose programs do not meet this criterion are less likely to have their programs audited during a single audit. Additionally, between 2 and 8 agencies said that the single audit reports did not provide relevant information for specific uses. Other reasons provided for not using single audit reports included limited staff resources (2 to 5 agencies) and competing priorities (1 to 3 agencies).

Briefing Section IV—Use of Federal Audit Clearinghouse Database: Uses of Federal Audit Clearinghouse Database

Our survey results indicate that 11 agencies routinely use the FAC database and that usage is distributed among the CFO, IG, and program offices. For example, 11 agencies indicated that they use the database to identify recipients that have incurred questioned costs, have made improper payments, or both. In addition, 8 agencies noted that they use the database to determine whether large-dollar or complex programs have significant findings such as adverse opinions on recipient compliance with program laws and regulations. Survey respondents also indicated that they use the FAC database to perform other tasks, such as tracking the status of audit-finding resolution, determining whether the recipient has filed its single audit report, serving as a source of audit leads, identifying trends across recipients, and verifying the accuracy of the Schedule of Expenditures of Federal Awards. Those agencies that do not use the database rely on the FAC to send them the single audit reports and review the reports to obtain information on the agency’s programs instead of using the database to obtain such information. In discussions with personnel at four agencies, we learned that they were unfamiliar with the FAC and how it could be used. These officials expressed interest in using the database and inquired about the availability of training. 
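The kinds of lookups agencies reported making against the FAC database (recipients with questioned costs, recipients with recurring findings) can be illustrated with a short query sketch. The actual FAC schema is not described in this report; the table and column names below are hypothetical stand-ins, and the recipient records are invented for illustration.

```python
import sqlite3

# Hypothetical, simplified stand-in for the FAC single audit database;
# the real FAC schema is not described in this report.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE findings (
    recipient TEXT, fiscal_year INTEGER,
    questioned_costs REAL, finding_type TEXT)""")
conn.executemany(
    "INSERT INTO findings VALUES (?, ?, ?, ?)",
    [("City A", 2000, 50000.0, "questioned costs"),
     ("City A", 2001, 12000.0, "questioned costs"),
     ("State B", 2001, 0.0, "internal control")])

# One reported use: identify recipients that have incurred questioned costs.
rows = conn.execute("""SELECT recipient, SUM(questioned_costs)
                       FROM findings WHERE questioned_costs > 0
                       GROUP BY recipient""").fetchall()
print(rows)

# Another reported use: determine which recipients have recurring findings
# across fiscal years.
recurring = conn.execute("""SELECT recipient FROM findings
                            GROUP BY recipient
                            HAVING COUNT(DISTINCT fiscal_year) > 1""").fetchall()
print(recurring)
```

In this illustration, the first query surfaces City A's cumulative questioned costs, and the second flags City A as having findings in more than one fiscal year.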
Agencies reported using the FAC database for the following purposes:
- to determine whether multiple agency programs have similar audit issues called “finding categories”
- to identify recipients that have incurred questioned costs, made improper payments, or both
- to determine how many recipients have recurring findings
- to determine whether large-dollar or complex programs have significant findings such as adverse opinions on recipient compliance with program laws and regulations
- to study the findings of subrecipients (A subrecipient is a nonfederal entity that expends federal awards received from a pass-through entity to carry out federal programs.)
NOTE: Rows do not add across to total agencies because we received responses from multiple offices within an agency.

Presented below are the 14 types of compliance requirements that the auditor shall consider in every audit conducted under OMB Circular A-133.
- Activities allowed or unallowed: These requirements are unique to each federal program and are found in the laws and regulations and the provisions of the contract or grant agreements pertaining to the program.
- Allowable costs/cost principles: OMB Circulars A-87, Cost Principles for State, Local and Indian Tribal Governments; A-21, Cost Principles for Educational Institutions; and A-122, Cost Principles for Non-Profit Organizations prescribe the cost accounting policies associated with the administration of federal awards managed by states, local governments, Indian tribal governments, educational institutions, and nonprofit organizations.
- Cash management: Requires that recipients follow procedures to minimize the time elapsing between the transfer of funds from the U.S. Treasury and payment by the recipient.
- Davis-Bacon Act: Requires that all laborers and mechanics employed to work on construction projects over $2,000 financed by federal assistance funds be paid prevailing wage rates.
- Eligibility: The specific requirements for eligibility are unique to each federal program and are found in the laws and regulations and the provisions of the contract or grant agreements pertaining to the program. 
- Equipment and real property management: Requires real property acquired by nonfederal entities with federal award funds be used for the originally authorized purpose and may not be disposed of without prior consent of the awarding agency.
- Matching, level of effort, earmarking: The specific requirements for matching, level of effort, and earmarking are unique to each federal program and are found in the laws and regulations and the provisions of the contract or grant agreements pertaining to the program.
- Period of availability of federal funds: Where applicable, federal awards may specify a time period during which the nonfederal entity may use the federal funds. A nonfederal entity may charge to the award only costs resulting from obligations incurred during the funding period and any preaward costs authorized by the awarding agency.
- Procurement and suspension and debarment: Nonfederal entities are prohibited from contracting with or making subawards to parties that are suspended or debarred from contracting with the federal government.
- Program income: Requires that program income be deducted from program outlays unless otherwise specified in agency regulations or the terms and conditions of the award.
- Real property acquisition and relocation assistance: Requires that the provisions specified in the Uniform Relocation Assistance and Real Property Acquisition Policies Act of 1970, as amended, are adhered to when persons are displaced from their homes, businesses, or farms by federally assisted programs.
- Reporting: Requires that each recipient report program outlays and program income on a cash or accrual basis, as prescribed by the awarding agency.
- Subrecipient monitoring: Requires that pass-through entities monitor subrecipients. Monitoring activities may include reviewing reports submitted by subrecipients, performing site visits, reviewing the subrecipients’ single audit results, and evaluating audit findings and the corrective action plan.
- Special tests and provisions: These requirements are unique to each federal program and are found in the laws and regulations and the provisions of the contract or grant agreements pertaining to the program. 
The federal government awards $300 billion to state and local governments and nonprofit groups each year. The Single Audit Act promotes sound financial management, including effective internal controls, over these federal awards. Before the act, the government relied on audits of individual grants to determine if the money was spent properly. The act replaced these grant audits with a single audit—one audit of an entity as a whole. GAO surveyed the 24 federal agencies subject to the Chief Financial Officers (CFO) Act and found that they have developed processes and assigned responsibilities to meet the requirements of the Single Audit Act. Agencies reported that they are using single audits to monitor compliance with administrative and program requirements and to determine the adequacy of recipients' internal controls. One or more offices at 22 of the 24 agencies used single audits to monitor compliance with administrative and program requirements in the Circular A-133 Compliance Supplement and to monitor recipients' compliance with internal controls. Eleven agencies reported that they routinely use the Federal Audit Clearinghouse database to identify recipients that incurred questioned costs or programs that have significant findings, to identify recipients with recurring findings, or to study subrecipient findings. Individuals at four agencies were unaware of the database or how to use it. Agencies that do not use the database rely on the Federal Audit Clearinghouse to send them the single audit report, which they review for information on their programs.
SBIR has four overarching purposes: to (1) use small businesses to meet federal R&D needs, (2) stimulate technological innovation, (3) increase commercialization of innovations derived from federal R&D efforts, and (4) encourage participation in technological innovation by small businesses owned by women and disadvantaged individuals. The SBIR program has a three-phase structure as follows: In Phase I, agencies award up to $150,000 for a period of about 6 to 9 months to small businesses to determine the scientific and technical merit and feasibility of ideas that appear to have commercial potential. In Phase II, small businesses whose Phase I projects demonstrate scientific and technical merit, in addition to commercial potential, may compete for awards of up to $1 million to continue the R&D for an additional period, normally not to exceed 2 years. Phase III is for small businesses to pursue commercialization objectives resulting from the Phase I and II R&D activities, where appropriate. Phase III is the period in which Phase II innovation moves from the laboratory to the marketplace. SBIR Phase III work completes an effort made under prior SBIR phases, but it is funded by sources other than the SBIR program. To commercialize their products, small businesses are expected to raise additional funds from private investors, the capital markets, or from non-SBIR sources within the agency that made the initial award. According to SBA documents, STTR’s purpose is to stimulate a partnership of ideas and technologies between innovative small businesses and research institutions through federally funded R&D. The program provides funding for research proposals that are developed and executed cooperatively between small businesses and research institutions. 
Like the SBIR program, the STTR program is structured in three phases as follows: Phase I aims to establish the technical merit, feasibility, and commercial potential of the proposed R&D efforts and to determine the quality of performance of the small businesses. STTR Phase I awards generally do not exceed $150,000 for 1 year. Phase II funding is based on the results achieved in Phase I and the scientific and technical merit and commercial potential of the Phase II project proposed. STTR Phase II awards generally do not exceed $1 million in total costs for 2 years. Phase III is for small businesses to pursue commercialization of research or technology resulting from the Phase I and II R&D activities, and it completes an effort made under the prior STTR phases but is funded by sources other than the STTR program. According to the 2012 SBA policy directive, Phase III work can involve commercial application of R&D financed by nonfederal capital, including STTR products or services intended for use by the federal government, and continuation of R&D that has been competitively selected in Phases I and II. As noted, federal agencies with a budget of more than $100 million for extramural R&D are required to have an SBIR program, while federal agencies with extramural R&D budgets that exceed $1 billion annually are required to have an STTR program. Generally, the extramural R&D budget is defined as the sum of the total obligations for R&D minus amounts to be obligated for intramural R&D, that is, R&D conducted by employees of a federal agency in or through government-owned, government-operated facilities. In determining their extramural R&D budget, agencies have authority to exclude certain R&D programs from the extramural R&D base used for calculating SBIR and STTR spending requirements, such as facilities and equipment used for R&D and certain intelligence activities. 
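The arithmetic just described, an extramural base equal to total R&D obligations minus intramural R&D and any authorized exclusions, with a mandated percentage applied to that base, can be sketched as follows. All dollar figures are invented for illustration, and the rates used (2.5 percent for SBIR and 0.3 percent for STTR, the fiscal year 2011 levels) are assumptions not stated in this report.

```python
def extramural_rd_budget(total_rd, intramural_rd, exclusions=()):
    """Extramural R&D base: total R&D obligations minus intramural R&D
    and any statutorily authorized exclusions (all amounts in dollars)."""
    return total_rd - intramural_rd - sum(exclusions)

def spending_requirement(extramural, rate):
    """Minimum program spending: the mandated percentage of the extramural base."""
    return extramural * rate

# Hypothetical agency figures, in millions of dollars (illustrative only).
extramural = extramural_rd_budget(total_rd=5_000, intramural_rd=1_200,
                                  exclusions=(300,))  # e.g., one excluded program
sbir_min = spending_requirement(extramural, 0.025)  # assumed FY2011 SBIR rate
sttr_min = spending_requirement(extramural, 0.003)  # assumed FY2011 STTR rate
print(extramural, sbir_min, sttr_min)
```

The sketch also shows why improper exclusions matter: every dollar added to `exclusions` shrinks the extramural base and, in turn, the minimum the agency must spend on the programs.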
For example, under the Small Business Act, DOE must exclude amounts obligated for its naval reactor program from its extramural R&D budget. Likewise, under the act, DOD excludes programs carried out by certain elements of the intelligence community. In addition, DOT must exclude funds obligated for the Federal Highway Administration State Planning and Research program from its extramural R&D budget. In fiscal year 2011, the 11 participating agencies reported spending a total of $2.2 billion for SBIR, and the 5 participating agencies reported spending a total of $251 million for STTR, with DOD spending the most on these programs—$1.1 billion and $121 million, respectively. DOD’s reported spending constituted 48.8 percent of total SBIR spending and 48.2 percent of total STTR spending in fiscal year 2011. According to SBA documents, the agency’s role is to serve as the oversight and coordinating agency for the SBIR and STTR programs: to direct and assist the agencies’ implementation of the programs, review their progress, collect agency reports, analyze the information in the agencies’ methodology reports, and report annually to Congress on the programs. In this role, SBA issued SBIR and STTR policy directives in September 2002 and December 2005, respectively, and updated them in August 2012; these directives provide agencies with detailed guidance on implementation of the SBIR and STTR programs. Data reported by the participating agencies to SBA for fiscal years 2006 to 2011 indicate that most of the agencies have not consistently complied with the mandated spending requirements for SBIR and STTR. In calculating their spending requirements, some participating agencies made improper exclusions and used differing methodologies. Our analysis of data the participating agencies reported to SBA indicates that, from fiscal years 2006 to 2011, most agencies did not consistently comply with mandated spending requirements. 
Specifically, 8 of the 11 agencies did not consistently meet annual spending requirements for SBIR. Data from 3 of the agencies—DHS, Education, and HHS—indicate that they met their spending requirements for all 6 years. For STTR, 4 of 5 agencies did not consistently meet annual spending requirements. Data from 1 agency—HHS—indicate that it met its STTR spending requirements for all 6 years. Figure 1 shows the number of years that each agency met its annual SBIR and STTR spending requirements, based on the information submitted to SBA in each agency’s annual reports. See appendix I for further details on each agency’s reported spending on these programs. Some agencies reported excluding the budgets of subunits from their extramural R&D calculations. For example, DOD reported excluding the Research, Development, Test and Evaluation budget of the United States Special Operations Command subunit from its extramural R&D budget because the budget is less than the $1 billion required for participating in the STTR program. HHS reported excluding the extramural R&D budgets of the Centers for Disease Control and Prevention and the Food and Drug Administration because these subunit budgets are less than the $1 billion required for participating in the STTR program. In another example, a third agency, DOT, reported excluding the Federal Aviation Administration’s extramural R&D budget, which is well in excess of $100 million annually, from the SBIR program budget calculation. We asked DOT and FAA to provide the legal authority for this exclusion, but they did not supply this information. Improperly excluding subunits reduced the three agencies’ calculated extramural R&D budgets and, in turn, the agencies’ spending requirements for SBIR and STTR. Over fiscal years 2006 to 2011, these improper exclusions resulted in a $7.7 million reduction to DOD’s STTR spending requirement, a $34.7 million reduction to DOT’s SBIR spending requirement, and a $6.1 million reduction to HHS’s STTR spending requirement. 
Officials at DOD said they changed the agency’s policy on exclusions as of fiscal year 2013 and that the new policy, which will not allow these improper exclusions, is currently being implemented. DOT provided no further information on its exclusion. HHS met its overall agency spending requirement even with the improper exclusions, according to the data in HHS’s annual reports to SBA. In addition to identifying improper exclusions, we found that, when appropriations were received late in the year, agencies used differing methodologies to calculate their spending requirements, making it difficult to determine whether agencies’ calculations were correct. For example, some agency program managers told us that, when appropriations were received late in the year, they used their prior year actual spending to calculate their current year spending requirement, while others calculated their current year spending requirement using some other methodology. Specifically, program managers at the National Institute of Standards and Technology (NIST)—a subunit of Commerce—stated that they used the past year’s actual SBIR spending to calculate the current year’s requirement. In contrast, NASA calculated its SBIR spending requirement by determining what percentage of its total R&D spending its extramural R&D spending comprised in the prior year. NASA then applied this percentage to the current year’s total R&D budget to calculate the current year’s extramural budget, which it then used as the basis for calculating the SBIR and STTR spending requirements. Although SBA provided guidance in policy directives for participating agencies on calculating their spending requirements, neither SBA’s prior nor its current policy directives provide guidance to agencies on how to calculate such spending requirements when agency appropriations are delayed. Without such guidance, agencies will likely continue to calculate spending requirements in differing ways. 
Agencies participating in the SBIR and STTR programs have not consistently complied with Small Business Act requirements for annually reporting a description of their methodologies for calculating their extramural R&D budgets to SBA. In addition, SBA has not consistently complied with the act’s requirements for annually reporting to Congress, including reporting on SBA’s analysis of the agencies’ methodologies for calculating their extramural R&D budgets. With the exception of NASA in certain years, agencies did not submit their methodology reports to SBA within the time frame required by the Small Business Act for fiscal years 2006 to 2011 for the SBIR and STTR programs. The act requires that agencies report to SBA their methodologies for calculating their extramural budgets within 4 months after the date of enactment of their respective appropriations acts. However, most participating agencies documented their methodologies for calculating their extramural R&D budgets for these fiscal years and submitted them to SBA after the close of the fiscal year with their annual reports, but three agencies—USDA, Education, and DOT—did not provide a methodology report for 1 fiscal year during this period. USDA did not submit a report on its fiscal year 2007 methodology because agency officials said it was identical to prior years. Officials from Education and DOT said they typically submitted their methodology reports with their annual reports. However, they told us that for fiscal year 2011 they did not submit their methodology reports to SBA on time because that was the first year that agencies were required to submit their annual reports to SBA’s automated system and there was not a place in SBA’s system to submit methodology reports. SBA officials said that they nonetheless expected agencies to submit their methodology reports and that there are several methods to transmit this information, such as by memorandum or e-mail. 
Education officials later told us they submitted their 2011 methodology report to SBA in January 2013. SBA officials said that they have not held the agencies to the act’s deadline for submitting methodology reports, in part because continuing resolutions enacting final appropriations have sometimes not been passed until the middle of the fiscal year. This timing for appropriations has pushed the required reporting date of the methodology report—due 4 months after appropriations are set—until late in the fiscal year. SBA officials said this has made it more convenient for participating agencies to submit the methodology report with the annual report. Further, SBA officials said the agency uses the methodology reports for their annual reports to Congress. By not having the methodology reports earlier in the year as specified by law, however, SBA does not have an opportunity to promptly analyze these methodologies and provide the agencies with timely feedback to assist agencies in accurately calculating their spending requirements. SBA officials said they have provided feedback orally and through e-mails to the participating agencies about their methodology reports, but many agency program managers said that SBA has provided little feedback. By not providing such feedback, SBA is forgoing the opportunity to assist agencies in correctly calculating their program spending requirements and helping to ensure that they meet mandated requirements. More significantly, the majority of the agencies did not include information consistent with a provision in SBA’s SBIR and STTR policy directives that specifies the act’s requirement for a methodology report from each agency. Specifically, the SBA policy directives state that the methodology report must include an itemization of each R&D program excluded from the calculation of the agency’s extramural budget and a brief explanation of why it is excluded. 
In our review, we found that two of the participating agencies—EPA and HHS—complied fully with the requirements because, for all 6 fiscal years in our review, they included in their methodology reports an itemization of the programs excluded from the calculation of their extramural R&D budget and an explanation of why the programs were excluded. Six agencies—DHS, DOD, DOE, DOT, NASA, and NSF—did not fully meet these requirements for the 6 fiscal years in our review because their methodology reports either identified some excluded programs but not others that we identified or omitted explanations for exclusions. As a result, agencies submitted different information, including different levels of detail on their methodologies. For example, some agencies provided an itemization of each R&D program excluded, including dollar amounts and statutory authority, as part of the calculation of the agency’s extramural budget and a brief explanation of why it is excluded, while other agencies only provided a brief explanation. SBA officials told us that most participating agencies’ methodology reports have changed little from year to year, so SBA does not raise questions about details of their methodologies. In the absence of guidance from SBA on the format in which the methodology reports are to be presented, DOD developed a methodology template that guides the calculation of DOD’s extramural R&D budget and in turn the programs’ spending requirements, including the identification of any R&D programs excluded from the basis for calculating their spending requirements and a brief explanation of why they are excluded. Without guidance on the format of methodology reports, participating agencies are likely to continue to provide SBA with broad, incomplete, or inconsistent information about their methodologies and spending requirements. 
Furthermore, without more consistent information from agencies, it is difficult for SBA to comprehensively analyze the methodologies and determine whether agencies are accurately calculating their spending requirements. In addition, for the agency annual report requirement, SBA has provided a template that asks agencies for the extramural R&D budget base used to calculate the SBIR or STTR spending requirements, but it does not ask for the specific calculations used to derive that budget base. Unlike the requirement set by law and SBA policy directives for methodology reports to include a description of agencies' methodologies for calculating extramural R&D budgets, information on actual calculations, such as identifying exclusions, is not required for agency annual reports to SBA. However, because annual reports show the results of the previously described methodology, including such information in the annual reports is important. By not requesting that agencies include the calculations used to derive the budget base in its template, SBA has been receiving incomplete information from the participating agencies, which limits the usefulness of the data the agency reports to Congress. SBA officials told us that participating agencies' calculations of spending requirements have changed little from year to year, and so SBA does not raise questions about the calculations. SBA likewise does not request that agencies include information in their annual reports that would enable SBA to conduct better oversight, including information on (1) whether agencies met the mandated spending requirements, (2) the reasons for any noncompliance with these requirements, and (3) the agencies' plans for addressing any noncompliance in future years. By including this information, SBA could more fully oversee the programs and provide more complete information to Congress.
SBA has not consistently complied with the Small Business Act's requirement to report its analysis of the agencies' methodologies in its annual report to Congress. Over the 6 years covered in our review, SBA reported to Congress for only 3 of those years: fiscal years 2006, 2007, and 2008. Furthermore, these reports contained limited analyses of the agencies' methodologies, and some of the analyses were inaccurate. For example, SBA's analysis was limited to a table attached to the annual report to Congress that often did not include information on particular agencies; SBA provided no other documentation showing the results of its analysis of the agency methodology reports. In addition, in its fiscal year 2006 annual report, SBA concluded that all of the participating agencies complied with program requirements in calculating their extramural R&D budgets but did not present the basis for its conclusion. As noted earlier, our review showed that some participating agencies improperly excluded some extramural programs from their funding base calculation and did not consistently comply with SBA's instructions in its policy directive to itemize the exclusions in their calculation of the extramural R&D budget for either the SBIR or STTR program. Without more comprehensive analysis and accurate information on participating agencies in SBA's annual report, Congress does not have information on the extent to which agencies are reporting what is required by law or whether they are underspending by, for example, taking improper exclusions. Moreover, SBA officials said they delayed submitting their annual reports to Congress for fiscal years 2009, 2010, and 2011 to reconcile significant inconsistencies the agency found between spending data submitted by participating agencies in their annual reports to SBA and data routinely collected in SBA's automated system from agencies and awardees on SBIR and STTR awards made during the fiscal year.
In commenting on a draft of this report in August 2013, SBA program officials said that the three reports had been consolidated into one report that was being reviewed by the Office of Management and Budget. The agency plans to submit the reports to Congress in 2013, making data on the programs available to Congress 2 to 4 years after the end of the fiscal year. Changing the methodology to calculate the SBIR and STTR spending requirements based on each agency's total R&D budget instead of each agency's extramural R&D budget would increase the amount of each agency's spending requirement for the programs, some much more than others, depending on the assumptions about how the funding base change is implemented. It would also increase the number of agencies required to participate in the programs. Some agencies reported that such a change could affect their R&D programs and create challenges. If the SBIR and STTR spending requirements under current law were applied to an agency's total R&D budget rather than to the extramural R&D budget, the spending requirements for each agency would increase because the extramural R&D budget is a part of, and therefore smaller than, the total R&D budget. Table 3 shows a comparison of the agencies' spending requirements for the SBIR and STTR programs in fiscal year 2011 under the current law, based on an agency's extramural R&D budget, and under this alternative methodology. As shown in table 3, some agencies' spending requirements would increase more than others. This is due to differences in the relative proportions of the extramural and intramural R&D budgets among agencies. Examples are as follows: NSF used almost its entire R&D budget for extramural R&D in fiscal year 2011 and was required to spend about $124 million on its SBIR program in that year.
Its spending requirement would have increased slightly to $127 million—about a 3 percent increase—if the requirement were based on the total R&D budget instead of the extramural R&D budget. For NSF's STTR program, the spending requirement in fiscal year 2011 would have increased about 3 percent—the same percentage increase as for SBIR. Commerce used a relatively small percentage of its total R&D budget—about 25 percent—for extramural R&D in fiscal year 2011, and its spending requirement for SBIR would have more than quadrupled, from about $6 million to $26 million—a 333 percent increase—in fiscal year 2011 if the calculation methodology changed. While Commerce does not currently participate in the STTR program, it would have to participate if the calculation methodology changed; its spending requirement would have been $3 million rather than zero. To put these figures in perspective, if the funding percentage in law were applied to the total R&D budget instead of the extramural budget, NSF's spending on SBIR in 2011 would have increased to about 2.6 percent of its extramural R&D budget, while Commerce's spending would have increased to about 10.7 percent of its extramural R&D budget. For the STTR program, if the funding percentage in the law were applied to the total R&D budget instead of the extramural budget, NSF's spending on STTR would have increased to about 2.7 percent of the extramural R&D budget, and Commerce's spending would have increased from zero. In addition to the changes in the dollar amount of funds available for STTR and SBIR spending requirements, agencies said that changing the base for calculating budgets for these programs would affect agency operations, depending on assumptions about how the funding base change is implemented.
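The NSF and Commerce comparisons above reduce to a simple calculation: under current law the SBIR requirement is 2.5 percent of an agency's extramural R&D budget (0.3 percent for STTR), while the alternative applies the same percentage to the total R&D budget. The sketch below illustrates this with rough budget figures back-solved from the report's examples; the dollar amounts are illustrative approximations, not official budget data.

```python
# Spending requirement = statutory percentage x budget base.
SBIR_PCT, STTR_PCT = 0.025, 0.003  # 2.5% and 0.3% under current law

def requirement(base_millions, pct):
    """Return the spending requirement (in $M) for a given budget base."""
    return pct * base_millions

# Illustrative FY2011 figures (in $M), back-solved from the report's
# examples -- approximations, not official budget data.
agencies = {
    #             (extramural R&D, total R&D)
    "NSF":      (4_960, 5_080),   # extramural is almost the whole budget
    "Commerce": (  240, 1_040),   # extramural is ~25% of the total
}

for name, (extramural, total) in agencies.items():
    current = requirement(extramural, SBIR_PCT)      # current-law base
    alternative = requirement(total, SBIR_PCT)       # total-budget base
    pct_change = 100 * (alternative - current) / current
    # NSF prints ~+2% here; the report cites ~3%, the gap being rounding
    # in these back-solved base figures.
    print(f"{name}: ${current:.0f}M -> ${alternative:.0f}M "
          f"({pct_change:+.0f}% change)")
```

Because the percentage is fixed, the size of each agency's increase depends only on the ratio of its total to its extramural R&D budget, which is why Commerce (mostly intramural) sees a far larger jump than NSF (almost entirely extramural).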
For example, changing the base would increase SBIR and STTR budgets and could result in reductions in certain types of intramural R&D, with corresponding reductions in full-time equivalent staffing of these programs. In addition, some agency officials said the content of an agency's extramural R&D effort could change because of changes in the types of businesses that receive grants and contracts. In addition to applying the same percentages as used under current law to the total R&D budget, we analyzed the potential changes to spending requirements using two other alternative scenarios that apply different percentages to the total R&D budget. In these scenarios, some agencies would have experienced an increase in spending requirements, while others would have experienced a decrease. Appendix II contains a discussion of the alternative scenarios and the results of our analysis. Changing the calculation methodology to the total R&D budget would also increase the number of agencies required to participate in the SBIR and STTR programs, assuming the same dollar thresholds for participating in the programs were applied to the total R&D budget rather than only the extramural R&D budget. For example, our analysis of the total R&D budget for all federal agencies for fiscal year 2011 indicates the following: For SBIR, two additional agencies—the Departments of Veterans Affairs (VA) and the Interior—would have been required to participate in fiscal year 2011 if total R&D budgets had been the criterion. These agencies reported total R&D budgets to SBA in excess of $100 million, which is the threshold for participation in the SBIR program. Adding these two agencies to the participating SBIR agencies in fiscal year 2011, with the total R&D budget as the base, would have increased total federal SBIR spending by $48 million.
For STTR, three additional agencies—Commerce, USDA, and VA—would also have been required to participate in the STTR program for fiscal year 2011 if total R&D budgets had been the criterion for meeting the threshold. These agencies reported total R&D budgets in excess of $1 billion, which is the threshold for participation in the STTR program. Adding these three agencies to STTR in fiscal year 2011, with the total R&D budget as the base, would have increased total STTR spending by $13 million. Table 4 shows these agencies' R&D budgets and what their SBIR and STTR spending requirements for fiscal year 2011 would have been if the spending methodology were changed to the total R&D budget. The participating agencies' cost of administering the SBIR and STTR programs cannot be determined because the agencies neither collect that information nor have the systems to do so. Neither the authorizing legislation for the programs nor SBA guidance directs agencies to track and estimate all administrative costs, and neither the law nor SBA guidance defines these administrative costs. Estimates agencies provided indicated that the greatest amounts of administrative costs in fiscal year 2011 were for salaries and expenses, contract processing, outreach programs, technical assistance programs, support contracts, and other purposes. With the implementation in 2013 of a pilot program allowing agencies under certain conditions to use up to 3 percent of SBIR program funds for certain administrative costs, SBA expects to require agencies in the pilot program to track and report the spending of that 3 percent but not all of their administrative costs. The participating agencies have not comprehensively identified or tracked the cost of administering the SBIR and STTR programs for several reasons. Agency officials said that the costs cannot be determined because the agencies do not have the systems for collecting the data.
Neither the authorizing legislation for the programs nor SBA guidance directs agencies to track and estimate administrative costs, and neither the law nor SBA guidance defines these administrative costs. We found that the amount of funds that participating agencies spent administering the SBIR and STTR programs—and the way the funds were used—cannot currently be estimated because the agencies have not identified or tracked many categories of program administrative costs. Agency officials said an important reason that administrative costs for the SBIR and STTR programs are not comprehensively identified or tracked is that using SBIR or STTR budgets to fund administrative costs has been generally prohibited. The Small Business Act generally prohibits agencies, except for DOD, from using any of their SBIR or STTR budgets to fund administrative costs of the programs, including the cost of salaries. Agencies reported that administrative costs of the programs were paid out of budget accounts other than the SBIR and STTR accounts. In addition, agency officials told us that the SBIR and STTR programs cut across many agency programs and disciplines and that the staff supporting the programs may work on a full-time or part-time basis, making identification and estimation of costs more difficult. For example, DOE reported the administrative costs of the SBIR and STTR program office only but pointed out that the programs involved the part-time efforts of 70 to 100 additional people throughout DOE, including technical program managers, grant specialists, and contracting officers, whose costs were not estimated. Similarly, HHS officials said it would be an exceptionally complex calculation to determine how much is currently spent on the administrative costs of the SBIR or STTR program because a large number of HHS staff work a fraction of their time on these programs.
Officials in HHS’ National Institutes of Health (NIH), which accounts for the majority of HHS’ SBIR and STTR R&D programs, said there were a small number of full-time staff on these programs; rather, NIH officials said that most staff managing the programs do so as a collateral part of their duties and are not required to track the portion of their time spent on the programs. NASA reported that its budget estimate included a separate line for SBIR and STTR program management that covers personnel costs, travel, and procurement costs. NASA officials noted, however, that other costs to operate the programs are not included in this budget estimate, including the cost of NASA technical experts to review proposals and the cost of technical and contracting representatives interacting with small businesses. NASA officials did provide a rough estimate for the number of hours and full time equivalent staff spent by NASA technical reviewers and contracting personnel in a typical year as 25 to 38 full-time equivalent staff. The officials noted that the estimate does not include hours spent by others involved: mission directorate representatives, center chief technologists, contracting officers, support contractors, procurement support, and legal support. They also said that they did not have estimates for such categories as support contracts, outreach, and technical assistance. In response to our request for information on administrative costs for fiscal year 2011, 9 of the 11 participating agencies provided us with estimates of a portion of their costs to administer the SBIR and STTR programs in fiscal year 2011. Of the administrative costs estimated by these 9 agencies, the greatest amounts were for salaries and expenses, contract processing costs, outreach programs, technical assistance programs, and support contracts, and the “other” category. 
In some cases, officials for some agencies identified having costs in these categories or several others but provided no estimates of the amounts. The agency that estimated administrative costs in the most categories for 2011 was DOD, which provided estimates in 10 cost categories. Of the 11 participating agencies, Commerce and HHS did not provide estimated administrative costs or identify having administrative costs in any category. In response to our data requests and questions regarding fiscal year 2011, the 9 agencies provided some estimates, identified unestimated costs, or had no response in many of the cost categories for which we requested data. An overview of the information we obtained is contained in appendix III. As noted earlier, the National Defense Authorization Act for Fiscal Year 2012 created a pilot program beginning in fiscal year 2013 that would allow up to 3 percent of SBIR program funds to be used for administrative costs, the provision of outreach and technical assistance, contract processing, and other specified purposes. Agencies are otherwise generally not permitted to spend SBIR or STTR program funds on administrative costs. According to SBA's policy directive, funding under this pilot is not intended to and must not replace current agency administrative funding in support of SBIR activities. Rather, funding under this pilot program is intended to support additional initiatives. SBA issued its guidance for the pilot program as part of its revised policy directive of August 2012 and requires agencies to submit annual work plans to SBA for approval on spending priorities, amounts, milestones, expected results, and performance measures before agencies can begin the pilot. The SBA guidance also directs agencies to report to SBA on the use of the funds allowed to be spent on administrative costs under the pilot program authority in their annual reports.
However, agencies will not identify or track all of their administrative costs, so SBA will not be able to report to Congress on total administrative costs. Of the 11 agencies participating in the SBIR program, 10 have submitted plans for the pilot program to SBA. SBA officials told us that, as of August 2013, all 10 of the agencies' pilot plans had been approved for implementation in the current fiscal year. To help small businesses develop and commercialize innovative technologies, federal agencies have awarded billions of dollars to such businesses under the SBIR and STTR programs, which SBA oversees. In its role overseeing the programs, SBA has issued policy directives that provide agencies with guidance on the implementation of the programs. Agencies participating in the programs are required by law to spend a specific minimum portion of their extramural R&D budgets on these awards and to report certain information related to their spending to SBA. In turn, SBA is to review this information and report on it annually to Congress. However, participating agencies' compliance with the programs' spending requirements is unclear because some agencies improperly calculated their spending requirements and—in the absence of specific guidance from SBA when their appropriations were delayed—agencies used differing methodologies for calculating these requirements. Without guidance from SBA, agencies will likely continue to calculate spending requirements in differing ways, which will continue to raise questions about their compliance. In addition, most agencies' reports to SBA about their methodologies for calculating their spending requirements did not contain key details, such as the identification of any R&D programs excluded from the basis for calculating their spending requirements and a brief explanation of why they are excluded, which is required both by law and by SBA policy directives.
Agencies also submitted differing information in these reports because SBA's policy directives do not specify the format for the reports. Without guidance on the format of methodology reports, participating agencies are likely to continue to provide SBA with broad, incomplete, or inconsistent information about their methodologies and spending requirements. Furthermore, without more complete and consistent information from agencies, it is difficult for SBA to comprehensively analyze the methodologies and determine whether agencies are accurately calculating their spending requirements. Moreover, according to agency officials, SBA provided little timely feedback about the agencies' methodology reports. By not providing such feedback, SBA is forgoing the opportunity to assist agencies in correctly calculating their program spending requirements and helping to ensure that they meet mandated requirements. In addition, for the participating agencies' annual report requirement, SBA has provided a template requesting the extramural R&D budget base that agencies used to calculate the programs' spending requirements, but the template does not request the specific calculations agencies used to derive those requirements. By not requesting such calculations, SBA has been receiving inconsistent and incomplete information from the participating agencies, which limits the usefulness of the data it reports to Congress. SBA likewise does not request that agencies include information in their annual reports that would enable better oversight, including information on (1) whether agencies met the mandated spending requirements, (2) the reasons for any noncompliance with these requirements, and (3) the agencies' plans for addressing any noncompliance in future years. Finally, SBA's annual reports to Congress have been years late or have contained little analysis of the methodology reports agencies submitted to describe how they calculated their spending requirements.
Without more rigorous oversight by SBA and more timely and detailed reporting on the part of both SBA and the participating agencies, it will be difficult for SBA to ensure that the intended benefits of these programs are being attained and that Congress receives critical information to oversee these programs. To ensure that participating agencies and SBA comply with spending and reporting requirements for the SBIR and STTR programs, we recommend the SBA Administrator take the following four actions: Provide additional guidance on how agencies should calculate spending requirements when agency appropriations are received late in the fiscal year and on the format agencies are to use for their methodology reports. Provide timely annual feedback to each agency following submission of its methodology report on whether its method for calculating the extramural R&D budget used as the basis for the SBIR and STTR spending requirements complies with program requirements, including an itemization of and an explanation for all exclusions from the basis for the calculations. Direct participating agencies to include in their annual reports the calculation of the final extramural R&D budget used as the basis for their SBIR and STTR spending requirements and, if they did not meet the spending requirements, the reasons why not and how they plan to meet the spending requirements in the future. Provide Congress with a timely annual report that includes a comprehensive analysis of the methodology each agency used for calculating the SBIR and STTR spending requirements, providing a clear basis for SBA's conclusions about whether these calculations meet program requirements. We provided copies of our draft report to the Secretaries of USDA, Commerce, DOD, Education, DOE, HHS, DHS, and DOT; the Administrators of SBA, EPA, and NASA; and the Director of NSF for review and comment.
In response, six of the agencies—USDA, Education, DOE, EPA, NASA, and NSF—stated by e-mail that they had no technical or written comments. Five other agencies—Commerce, DHS, DOD, DOT, and HHS—provided technical comments by e-mail, which we incorporated into the draft report as appropriate. SBA provided technical comments on the draft report and officials of SBA’s Office of Technology said by e-mail through the Office of Congressional and Legislative Affairs that they agree with the findings of the report and will work to implement the recommendations. Specifically, in response to our recommendation to provide additional guidance on how agencies should calculate spending requirements, SBA said it plans a training session for all SBIR and STTR agencies to provide guidance and uniformity in the calculation of extramural budgets. In response to our finding that SBA is not receiving timely methodology reports from the agencies in order to provide feedback, SBA said it has strongly encouraged the agencies to submit their methodologies to SBA in a timely manner. We incorporated SBA’s technical comments into the report as appropriate. We are sending copies of this report to the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, and Transportation; the Administrators of the Small Business Administration, the Environmental Protection Agency, and the National Aeronautics and Space Administration; the Director of the National Science Foundation; the appropriate congressional committees; and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
Figures 2 through 12 compare reported spending requirements of the 11 agencies participating in the Small Business Innovation Research (SBIR) program with their reported spending over fiscal years 2006 to 2011. Figures 13 through 17 compare reported spending requirements of the 5 agencies participating in the Small Business Technology Transfer (STTR) program with their reported spending over fiscal years 2006 to 2011. The Department of Homeland Security also had an STTR program in fiscal years 2006 and 2007. However, the agency was not required to participate, because its extramural R&D budget for each of those years was actually below the $1 billion threshold required for participation in the STTR program. The agency stated it had inadvertently used an incorrect extramural budget amount to determine its participation requirement. Since the agency was not required to participate in STTR and therefore had no spending requirement, no figure is included here for its STTR expenditures. To calculate the expenditure requirements for the SBIR and STTR programs, we used two key variables: (1) the "base," which is the research and development (R&D) funding from which the requirement is calculated, and (2) the percentage that is applied to that base. For example, for fiscal years 2006 to 2011, the base for SBIR and STTR funding is the extramural R&D budget, and the mandated percentages applied to that base were 2.5 and 0.3 percent, respectively. We tested three alternative scenarios that vary the percentage applied to the total R&D budget to illustrate the potential effects of changing the methodology to calculate agencies' SBIR and STTR expenditure requirements. The scenarios analyzed were as follows: For scenario 1, we applied the same percentages for the expenditure requirements under the current law to the total R&D budget instead of the extramural R&D budget.
For scenario 2, using fiscal year 2006 numbers as our base, we determined the percentage to apply to the total R&D budget of all participating agencies for fiscal years 2006 through 2011 that would hold the total expenditure requirement constant for the programs. For scenario 3, using fiscal year 2006 numbers as our base, we determined the percentage to apply to the total R&D budget of each individual agency for fiscal years 2006 through 2011 to hold each individual agency's expenditure requirement constant for the programs. Details of the three scenarios and current law are in table 5, and their effects on spending requirements are in table 6. The following describes the administrative cost data obtained, identified, or not available from the participating agencies for fiscal year 2011. Department of Commerce (Commerce): Officials with the two participating subunits, the National Institute of Standards and Technology (NIST) and the National Oceanic and Atmospheric Administration (NOAA), said that the administrative costs for the program included salaries and expenses but that they did not have an estimate of them. The officials said the agencies did not specifically track administrative costs; such costs were not allowed to be charged against SBIR funds. Department of Homeland Security (DHS): Officials in DHS' Science and Technology Directorate, which is one of two subunits managing the SBIR program at DHS, provided a partial list of administrative costs for the fiscal year. These included salary, travel, and other costs (e.g., contracting fees, support contracts, and audit costs) that were estimated at $962,000. Categories of costs that the agency identified but did not estimate included salaries and expenses of other DHS supporting staff and contractors.
DHS officials in the Science and Technology Directorate said that the directorate's management and administrative budget began fully funding the administrative costs for SBIR in 2011; previously, these costs were funded from the extramural R&D budget of the directorate. The other DHS unit with an SBIR program, the Domestic Nuclear Detection Office, did not identify or estimate administrative costs. DOD: Agency officials said that while DOD had not tracked administrative costs of the SBIR or STTR programs through 2011 agency-wide, such costs had been reported to various extents by the 13 DOD subunits that participate in one or both of the programs. Based on reports from some subunits, DOD's partial administrative costs totaled $30.2 million. The 13 DOD subunits varied in their identification of administrative costs: some identified none; some identified a few; and others identified many categories but did not provide estimates for each cost category. DOD Office of Small Business Programs officials stated that the department did not track "non-legislated administrative expenses," which were described as the program administrative costs before the start of the administrative costs pilot program. Department of Energy (DOE): Agency officials in the DOE Office of Science, which manages most of the SBIR and STTR programs in DOE, said administrative costs for these programs fell into three categories—salaries and benefits, support contracts, and travel—and totaled $1.2 million in fiscal year 2011. According to these officials, these costs did not include personnel expenses for over 70 specialists who spend a fraction of their time on the programs. Department of Transportation (DOT): Agency officials said administrative costs in fiscal year 2011 were estimated at $363,000, primarily for salaries and expenses but also including travel and other smaller categories.
These program managers said this represents part of the administrative costs that directly support the SBIR program's management but not other support activities, like procurement and legal services, that are provided by other DOT subunits. Department of Education (Education): Agency officials estimated that administrative costs for fiscal year 2011 totaled $479,000, of which $4,000 was for travel and $174,000 for salaries and benefits of department employees who administer the program, including preparing solicitations, running competitions, performing oversight, congressional reporting, and monitoring awards. Officials said the 2011 total also included $38,000 for salary and benefits of the department's contracts and acquisitions management staff and $263,000 for salary and benefits of personnel assisting with application reviews. Environmental Protection Agency (EPA): Agency officials said that known administrative costs for fiscal year 2011 were $953,000, which included $533,000 for the salaries and expenses of four FTE staff who run the program, $350,000 for external peer review of SBIR proposals at various funding phases, and $70,000 for a contract to provide program support. Officials said there were other administrative costs associated with staff who also manage other grant programs; these costs were not easily separated by program and were not tracked. Department of Health and Human Services (HHS): Agency officials said they do not track or report the amount of administrative costs for the SBIR and STTR programs. They said it would be very difficult to determine how much is currently spent on the administrative costs of the program. HHS officials reported that since authorizing legislation did not allow SBIR/STTR funds to be spent on administration, funding for administrative costs, such as salary and expenses, training, and travel, comes from other accounts.
National Aeronautics and Space Administration (NASA): Agency officials estimated administrative costs for certain categories at roughly $11.9 million in fiscal year 2011, which included, among other things, $8.6 million for procurement costs, about $3 million for salaries and expenses, and $151,000 for travel. According to these officials, other identified but unestimated and untracked costs of administering the SBIR and STTR programs include the cost of technical experts within NASA reviewing proposals, the cost of holding review panels, and the cost of the time that technical and contracting representatives spend interacting with companies seeking and receiving funding.

National Science Foundation (NSF): Agency officials said they identified administrative costs of $4 million for the SBIR and STTR programs. These include 10 FTEs within the agency, costing approximately $2 million in salaries and benefits, and $2 million that NSF designates from its extramural research budget and spends on SBIR and STTR administrative costs, primarily for contracted technical and administrative support. NSF has contracted for this support for many years because of the high volume of actions in the program and the time frames that must be met in the process. NSF officials said there were other administrative costs, including the efforts of federal staff who devote substantial time to the programs, but these have not been tracked or estimated.

U.S. Department of Agriculture (USDA): Agency officials said that in fiscal year 2011, administrative costs for SBIR included $184,000 for experts who provided peer review of project proposals, covering such costs as honoraria and travel. Officials said USDA does not break out administrative costs for the SBIR program beyond honoraria and travel.
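Taken together, the fiscal year 2011 figures the agencies did report can be tallied in a few lines. The sketch below (Python, purely illustrative) sums only the dollar amounts cited above; as the discussion notes, these figures are partial, were estimated differently by each agency, and are not directly comparable.

```python
# Partial fiscal year 2011 administrative-cost figures cited in the text,
# in millions of dollars. These are incomplete estimates, not comparable
# totals: several agencies (e.g., HHS and DHS) reported no figure at all.
reported_admin_costs = {
    "DOD (partial, from some subunits)": 30.2,
    "DOE (Office of Science)": 1.2,
    "DOT": 0.363,
    "Education": 0.479,
    "EPA": 0.953,
    "NASA (certain categories only)": 11.9,
    "NSF": 4.0,
    "USDA (honoraria and travel only)": 0.184,
}

total = sum(reported_admin_costs.values())
print(f"Sum of reported estimates: ${total:.3f} million")  # about $49.28 million
```

The sum illustrates why GAO could not determine an overall cost of administering the programs: the largest single figure (DOD's) is itself only a partial total.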
In addition to the individual named above, Tim Minelli, Assistant Director; Hilary Benedict; Antoinette Capaccio; Cindy Gilbert; Richard Johnson; Cynthia Norris; John Scott; Ilga Semeiks; and Vasiliki Theodoropoulos made key contributions to this report.
The Small Business Act established the SBIR and STTR programs to use small businesses to meet federal R&D needs. The law mandates that agencies with extramural R&D budgets that meet the thresholds for participation must spend a percentage of these annual budgets on the SBIR and STTR programs. The agencies are to report on their activities to SBA and, in turn, SBA is to report to Congress. Eleven agencies participate in SBIR, and five of them also participate in STTR. The act's 2011 reauthorization mandates that GAO review SBA's and the agencies' compliance with spending and reporting requirements, as well as other program aspects, for fiscal years 2006 to 2011. GAO determined (1) the extent to which participating agencies complied with spending requirements and how the agencies calculated these requirements, (2) the extent to which participating agencies and SBA complied with certain reporting requirements, (3) the potential effects of basing the spending requirements on an agency's total R&D budget, and (4) the cost to participating agencies of administering the SBIR and STTR programs. GAO reviewed agency calculations of spending requirements and the required reports and interviewed SBA and participating agency program and financial officials.

Using data agencies had reported to the Small Business Administration (SBA), GAO found that 8 of the 11 agencies participating in the Small Business Innovation Research (SBIR) program and 4 of the 5 agencies participating in the Small Business Technology Transfer (STTR) program did not consistently comply with spending requirements for fiscal years 2006 to 2011. In calculating their annual spending requirements for these programs, some agencies made improper exclusions from their extramural research and development (R&D) budgets and used differing methodologies.
SBA, which oversees the programs, provided guidance in policy directives for agencies on calculating these requirements, but the directives do not address calculating the requirements when appropriations are late and spending is delayed, so agencies have used differing methodologies. This made it difficult to determine whether agencies' calculations were correct. Without further SBA guidance, agencies will likely continue calculating spending requirements in differing ways. The participating agencies and SBA have not consistently complied with certain program reporting requirements. For example, in their methodology reports to SBA, the agencies submitted different levels of detail on their methodologies, such as the programs excluded from the extramural budget and the reasons for the exclusions. SBA's guidance states that the methodology reports are to itemize each R&D program excluded from the calculation of the agency's extramural budget and explain why a program is excluded, but it does not specify the format of the methodology reports to ensure consistency. Also, SBA's annual reports to Congress contained limited analysis of the agencies' methodologies, often not including information on particular agencies. Without more guidance to agencies on the formats of their methodology reports and more analysis of the contents of those reports, SBA cannot provide Congress with information on the extent to which agencies are reporting what is required. Further, SBA has not submitted an annual report on these programs for fiscal years 2009 to 2011 but plans to submit the reports to Congress later in 2013--making data on the programs available to Congress 2 to 4 years late.
Potential effects of basing each participating agency's spending requirement on its total R&D budget instead of its extramural R&D budget include an increase in the amount of the spending requirement--for some agencies more than others--depending on how much of the agency's R&D budget is composed of extramural spending. Also, if the thresholds of the spending requirements for participation in the programs did not change, changing the base to an agency's total R&D budget would increase the number of agencies required to participate. The agencies' cost of administering the programs could not be determined because the agencies have not consistently tracked those costs; the programs' authorizing legislation does not require them to do so. Nine of the 11 agencies in SBIR provided GAO with estimates of some of these costs for fiscal year 2011--most of which were for salaries and expenses. With the start of a pilot program allowing agencies to use up to 3 percent of SBIR program funds for administrative costs in 2013, SBA plans to require agencies to track and report administrative costs paid from program funds. GAO recommends, among other things, that SBA provide additional guidance to agencies for spending and reporting requirements and provide Congress with a more timely annual report containing more analysis of the agencies' methodologies. SBA stated that it agrees with the recommendations and will implement them.
Under the pilot program, a SIB serves essentially as an umbrella under which a variety of innovative finance techniques can be implemented. Much like a bank, a SIB would need equity capital to get started, and that capital could be provided at least in part through federal highway funds. Once capitalized, the SIB could offer a range of loans and credit options, such as low-interest loans, loan guarantees, or loans requiring interest-only payments in early years, with repayment of the loan's principal delayed. For example, through a revolving fund, states could lend money to public or private sponsors of transportation projects; project-based or general revenues (such as tolls or dedicated taxes) could be used to repay loans with interest; and the repayments would replenish the fund so that new loans could be supported. Alternatively, states could use federal capital as a reserve, or as collateral against which to borrow additional funds, usually by issuing bonds. Pilot states can capitalize a SIB in part by depositing in the bank a maximum of 10 percent of most of their federal highway funds for fiscal years 1996-97. States not participating in the pilot program differ in their interest in SIBs and in their willingness and/or ability to use the full range of SIB financing techniques. Eleven of the 15 states we surveyed indicated that they were definitely or probably interested in participating in the SIB Pilot Program. However, only 9 of the 15 states submitted SIB applications to DOT. Four of the states—Arkansas, Louisiana, Montana, and New York—indicated that they were probably or definitely not interested in participating in the pilot program. Because we primarily targeted states that had expressed an interest in innovative financing to DOT, survey respondents indicated a higher interest than would be expected nationwide. Nationwide, only 15 states submitted applications to DOT to take part in the pilot program.
While six other states expressed interest in the program to DOT, they did not submit an application. On April 4, 1996, DOT announced that Arizona, Florida, Ohio, Oklahoma, Oregon, South Carolina, Texas, and Virginia had been selected to participate in the pilot program. On June 21, 1996, DOT added California and Missouri. Figure 1 shows the applicant states and those selected to participate in the pilot program. DOT will assess how state SIBs are operating under the pilot program. Specifically, the legislation establishing the pilot program directs DOT to report on the financial condition of each infrastructure bank established under the pilot program. This report is to be transmitted to the Congress by March 1, 1997. Appendix III provides you with information on projects that the pilot participants are considering for financial assistance from SIBs. According to the Federal Highway Administration (FHWA) official responsible for the pilot program, the states are in the process of establishing and capitalizing their SIBs; thus, they have not yet decided on the projects that the SIBs will finance. As figure 1 indicates, more than half of the SIB Pilot Program applicants are southern and western coastal states with large and/or growing populations that necessitate additional highway construction. States with large land areas that have comparatively small populations and most northeastern states generally elected not to apply for a variety of reasons. These reasons might include the states’ and regions’ fiscal capacity, the public’s unwillingness to incur debt to finance highways, and the availability and cost of rights-of-way for start-up projects. In connection with DOT’s fiscal year 1997 appropriation, the administration proposed expanding the SIB Pilot Program to include additional states and to provide $250 million in highway trust fund revenue for capitalizing the banks. 
The House of Representatives rejected the administration's proposal on the grounds that the pilot program is still in its very beginning stages and that any further expansion of the program should be considered in the context of the reauthorization of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA). The Senate provided $250 million for the SIB Pilot Program and allowed the Secretary of Transportation to distribute SIB funds to more than 10 states on the grounds that SIBs are a promising way of facilitating needed infrastructure investment, especially when all levels of government are facing constrained resources. The conferees agreed to provide $150 million for the SIB Pilot Program, which is to remain available until expended, out of the general fund rather than the Highway Trust Fund. In addition, no distribution of funds is to be made until 180 days from the date of enactment. The conferees also agreed to permit the Secretary of Transportation to approve SIBs for more than 10 states. The President signed the legislation on September 30, 1996. Ten surveyed states provided us with estimates of the extent to which their needs may be served by a SIB. Eight states indicated that they would use SIBs to help fund less than 10 percent of their transportation projects. Two of the states indicated a higher expected use of SIBs: Ohio estimated that 10 to 25 percent of its projects could be financed through a SIB, and Michigan estimated that 25 to 50 percent of its projects could be financed through a SIB. Seven surveyed states expressing interest in creating a SIB indicated that they would probably use the funding for direct loans. Six states indicated that they would probably use the funding for reserves for bonds or loans. The states' responses are shown in figure 2. In discussing their views, the 11 responding states seemed open to using a variety of financing tools as part of their SIB.
For example, 6 of the 11 states that answered this question told us that their SIB would probably use more than one financing tool, and only 2 states said that they probably would not use a particular tool. Michigan and California, for example, said that they would probably use some combination of all the tools. Furthermore, Michigan and Ohio indicated that their SIBs would probably use other finance tools, such as letters of credit, in addition to those listed in figure 2. The SIB concept is intended to complement traditional funding programs and provide states with increased flexibility to offer many types of financial assistance tailored to fit a project's specific needs. As a result, projects could be completed more quickly, some projects could be built that would otherwise be delayed or infeasible if conventional federal grants were used, and private investment in transportation could be increased. Furthermore, a longer-term anticipated benefit is that repaid SIB loans can be "recycled" as a source of funds for future transportation projects. Thus, projects with potential revenue streams will be needed to make a SIB viable. Yet this could also be a drawback: some state and industry officials question whether there are enough revenue-generating projects to sustain a SIB and whether debt financing will prove acceptable to state and local politicians as well as the general public. Traditional federal transportation funding programs generally consist of grants, where the federal share of a project's cost is set, usually at 80 percent, and the state pays the remaining 20 percent. Until recently, states have generally not been able to tailor federal funding to a form other than a grant. Under the pilot program, a SIB is essentially an umbrella under which a variety of innovative financing techniques could be implemented. Much like a bank, a SIB would need equity capital to get started. This capital could come partially from federal funds.
Once capitalized, the SIB could offer a range of loans and credit options. For example, through a revolving fund, states could lend money to public or private sponsors of transportation projects. Although new for federal transportation projects, revolving funds have been used for other infrastructure investment, such as wastewater treatment facilities required by the Environmental Protection Agency (EPA). EPA’s state revolving funds are structured in two different ways and can be used to illustrate how a transportation SIB might be set up. The first model is a basic revolving loan fund. Under this model, a state SIB would lend capital directly to projects; project-based revenues (such as tolls or dedicated taxes) would be used to repay loans with interest. The repayments would replenish the fund so that a new generation of loans could be made. The second model is a leveraged revolving fund. In this instance, states would use federal capital as reserves or collateral against which to borrow additional funds, usually by issuing bonds. The SIB would pay interest on the bonds but would in turn lend out the bond proceeds to individual projects. With this type of model, leveraging would increase the pool of capital available to support project loans. Furthermore, like the basic revolving fund, repayment of project loans plus interest would support the SIB’s repayment of its bonds as well as provide funds for the SIB to loan to future projects. For example, Ohio plans to initially capitalize a SIB with $65.5 million, and issue $87 million in revenue bonds. As a result, the SIB could loan out a total of $152 million to projects. SIB funds could also be used to provide credit enhancements for transportation projects. Credit enhancements, such as loan guarantees or bond insurance, provide additional security to commercial lenders or private investors who may be providing funds as part of an overall financing package. 
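The revolving-fund mechanics described above reduce to simple arithmetic. The sketch below (Python) uses hypothetical figures, not the actual terms of any state's SIB, to illustrate how leveraging expands the initial lending pool and how recycled repayments allow cumulative lending to exceed the pool over time.

```python
def leveraged_pool(capitalization, bond_proceeds):
    """A leveraged SIB can lend its seed capital plus the bond proceeds."""
    return capitalization + bond_proceeds

def recycled_lending(initial_pool, loan_rate, term_years, horizon_years):
    """Illustrative basic revolving fund: each year, all cash on hand is
    lent out as level-payment loans, and repayments (principal plus
    interest) are recycled into new loans. In this simplified model,
    repayments begin the same year a loan is made."""
    # Standard annuity factor: annual payment per dollar lent
    annuity = loan_rate / (1 - (1 + loan_rate) ** -term_years)
    cash, cohorts, total_lent = initial_pool, [], 0.0
    for _ in range(horizon_years):
        if cash > 0:                       # lend out everything on hand
            cohorts.append([term_years, cash * annuity])
            total_lent += cash
            cash = 0.0
        for loan in cohorts:               # collect this year's repayments
            cash += loan[1]
            loan[0] -= 1
        cohorts = [c for c in cohorts if c[0] > 0]
    return total_lent

# Hypothetical numbers: $65 million in seed capital plus $85 million in bonds
pool = leveraged_pool(65.0, 85.0)            # $150 million available to lend
print(recycled_lending(pool, 0.04, 10, 20))  # cumulative lending exceeds the pool
```

The key property the sketch demonstrates is the one the report emphasizes: because loan repayments carry interest and are re-lent, total lending over a multi-decade horizon exceeds the initial pool, which is why viable SIBs depend on projects with reliable revenue streams.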
Credit enhancements can also result in lower interest costs or greater borrowing power for a project. Some states view SIBs as complementary to their existing innovative financing efforts. For instance, Ohio’s SIB application notes that as a result of numerous funding requests coming from the state transportation department’s long-range multimodal transportation program, state law was modified to allow the state’s Director of Transportation to make loans to agencies, organizations, and persons to acquire, develop, and/or construct transportation facilities. The law also authorized the director to deposit payments from such loans into a revolving fund for subsequent loans. While this fund is not identified as a SIB, Ohio’s SIB application notes that essentially it is one, because the ability to make loans and receive payments is the basic underlying tenet of a SIB. Similarly, Arizona’s SIB application notes that one of the state’s key fiscal strategies has been to accelerate highway construction through the issuance of $3.1 billion in state transportation bonds. Arizona’s SIB application stated that the SIB will build on the state agencies’ recognized strengths in the bond-financing area, where there is a proven track record in accessing capital markets and maintaining high credit quality for bonds issued. As shown in figure 3, officials from eight states we contacted said that the most important benefit of SIBs over the next 5 years is the expedited completion of the projects. By drawing on diverse sources for funds, more capital can be amassed, thus enabling a project to get started and completed sooner than otherwise possible. For instance, Arizona’s SIB application listed five potential projects for SIB financing. 
With SIB financing, the state estimated that four of the projects could get under way in fiscal year 1997, rather than fiscal years 1999 through 2004, and that the fifth potential project, although not scheduled, may be able to get under way in fiscal year 1997 with SIB assistance. Some states also told us that in addition to completing individual projects faster, a SIB may provide the flexibility to complete a financial package for worthwhile projects that may be lower on the state's priority list because of their cost, demographic reasons, or political changes in priorities. For example, a major new road may simply be too costly to build, given that many small competing projects could be built with the same state funding. But if the project is financed in part from other sources, such as a local community and private investors, less state funding is needed, which, in turn, may permit a state to fund more roads on its priority list. As the Texas SIB application notes, over the next 5 years, the state will be able to finance less than half of its identified transportation needs with currently available funding. The availability of SIB financial assistance will allow local communities to provide assistance and help bridge the funding gap. Communities that are willing to dedicate local revenue sources to complete particular projects but do not have well-established credit ratings or lack experience in capital financing will be aided by financial assistance from SIBs and associated technical assistance. Ohio plans to foster increased local contributions. Specifically, Ohio notes that its SIB will be reinforced by a project-rating system that identifies priorities for the selection of projects. Under this rating system, local communities can receive bonus points that upgrade the priority of their projects if they provide a significant portion of the project's funding.
Ten of the states we surveyed viewed SIBs’ ability to attract private funds as providing some or great benefit. Private investment has not traditionally been involved in transportation projects because of the general lack of authority under federal law and because of some states’ legislative and constitutional restrictions on giving or lending state funds to private entities to build and operate roads. A SIB may increase private investment by reducing the risk to the private investors. Credit enhancements, such as a loan guarantee, would help to ensure that federal and/or state funds committed to the project will be there when the bills come due. Members of the infrastructure finance community told us that one common fear among investors is that the political commitment and funds planned for a given project will not materialize because of competing state priorities. Even a relatively small government investment could increase the private sector’s confidence. For example, California officials believe that state SIB investments of only 10 percent equity in some projects will give private lenders and investors the confidence to participate in funding the remaining 90 percent of the cost. Private investment can help close the gap for transportation needs that may otherwise go unmet or be forestalled for years. For instance, Oklahoma’s SIB application explained that there are a number of growth industries in the state, all of which require enhanced transportation. For example, the southeast quadrant, the state’s poorest quadrant, supports a growing food-processing industry and is experiencing an influx of hog farms, feed plants, and poultry-processing facilities. But further industry development depends on substantial improvements to the rural transportation network. State officials view a SIB as a vehicle to help facilitate private investment from businesses that would benefit from an improved transportation network. 
Looking toward the future, states that create revolving funds want the SIBs to be self-sustaining, and if the funds are leveraged, they would want the pool of resources available for loans to grow. However, this growth may take many years. Whether and when a SIB achieves growth depends on a number of factors, including (1) the degree to which loan interest rates are lower than market rates, (2) loan repayment periods, (3) the reliability of forecasted revenue streams, and (4) the amount of leverage employed. And not all SIBs will leverage funds. Only 18 states have leveraged funds under EPA’s State Revolving Fund Program. In the State Revolving Fund context, leveraging means that states have the discretion to use the federal capital grants, as well as their matching shares, as collateral to borrow in the public bond market to increase the pool of available loan funds for projects. According to the Council of Infrastructure Financing Authorities, leveraging the State Revolving Fund has substantially increased the funds available for lending. The Council reported in August 1994 that close to $4 billion has been added to the loan pool by the 18 states that have leveraged their funds—half as much as the nearly $8 billion provided in federal capital grants thus far. Furthermore, when assessing the future growth for those funds that are leveraged, the Council assumes conservatively that $1 for the State Revolving Fund program will generate an additional $2 in investments. Arizona’s plans are an example of how a SIB could grow. The state plans to capitalize an initial SIB at $71.5 million, representing $64 million in federal funds and $7.5 million in state and/or local funds. The state plans to use that investment as a base for issuing bonds and make $20 million in initial loans to transportation projects with the bond proceeds. 
In approximately 20 years (by 2017), the state anticipates that loan repayments plus interest on the loans will increase its initial $71.5 million investment to $260 million in SIB loans. This amount in turn could be the basis for supporting an even larger bond issuance if the state decided to leverage its funds again. DOT estimated that $2 billion in federal capital provided through SIBs could be expected to attract an additional $4 billion for transportation investments, thus achieving a leverage ratio of 2 to 1. FHWA officials told us that this estimate is conservative and is based on EPA's State Revolving Fund program. FHWA officials said that SIBs could achieve a leverage ratio as high as 4 to 1. But as Washington State officials point out, FHWA's assertion is too general to prove or disprove. The return depends heavily upon individual projects and how "leverage" is defined. Some state officials and industry experts remain skeptical that SIBs will produce the expected benefits. Some of the barriers cited include the following: (1) there are no additional federal funds to support SIB capitalization, (2) there are not enough revenue-producing projects to sustain a SIB, and (3) there may be state legal or constitutional problems, such as prohibitions against the private sector's profiting from using government funds channeled through a SIB. Figure 4 shows states' responses to possible barriers to their participation in the pilot program. As figure 4 shows, states considered the lack of additional federal funds as the primary barrier to participating in the program. However, very few states considered their insufficient knowledge of SIBs or lack of expertise to start a SIB as barriers to participating in the SIB Pilot Program. States selected to participate in the pilot program are permitted to use a maximum of 10 percent of most of their federal highway grant funds for fiscal years 1996-97 to capitalize a SIB.
Funding SIBs from existing funds, however, can act as a disincentive for states participating in the SIB Pilot Program. As figure 4 showed, 8 of the 15 states cited the absence of additional federal funds to capitalize a SIB as a factor that definitely diminished their likelihood of participating in the SIB Pilot Program. For instance, New York transportation officials told us that all their available federal and state funds are fully committed to planned highway and transit projects; thus, no funds are available to capitalize a SIB. Of the 11 states we surveyed that indicated interest in participating in the SIB Pilot Program, 9 provided us with estimates of the percentage of their available federal highway funds they expected to use to capitalize a SIB. Six of these states indicated that for fiscal years 1996 and 1997, they expected to use less than half of the federal highway funds allowed to capitalize a SIB. Some of the states' decisions reflect the fact that federal funds are already fully committed to planned projects, often for the next 3 to 5 years. Therefore, state officials do not expect to be able to rechannel funds for an alternative use, particularly in the early start-up years. According to a Texas transportation official, capitalizing a SIB within the next 5 years would mean diverting funds from planned projects with existing constituencies. This official was more optimistic that, with the passage of time, rechanneling federal funds to a SIB would become easier as projects that could be supported through a SIB developed their own constituencies. To help with capitalization for SIBs in a constrained budget environment, some projects already planned with established financing may be brought under the SIB financing umbrella. In that way, the SIB will be able to capture future project loan repayments.
For instance, one of four potential projects identified in South Carolina’s SIB application will receive financing through a planned issue of up to $60 million in state highway bonds. The proceeds of this bond issue will be loaned to the state turnpike authority to complete construction of a four-lane highway that will bypass the overcrowded main artery on Hilton Head Island. Under the terms of a loan agreement, tolls collected by the turnpike authority from the project will be used to repay the state DOT. It is the intention of the state DOT to move this transaction under the SIB. Similarly, one of the projects identified in the Texas SIB application already has financing, but the Texas DOT indicated its intent to bring the project under the institutional framework of the SIB, thus allowing loan repayments to be used for future SIB-assisted projects. If this is the only source of the SIB’s capitalization, however, the operation of the Texas SIB will be delayed because repayment of the $135 million loan does not begin until 2004 and is spread over 25 years. A provision in DOT’s fiscal year 1997 appropriation should also help with capitalization for SIBs. As previously mentioned, the appropriation provides $150 million for the SIB Pilot Program. The funding is to be made available until expended. DOT will need to decide how the funds will be allocated. DOT will have various options for allocating the funds, including (1) a proportional distribution based on states’ historical share of federal highway funds for those states participating in the pilot program, (2) an equal distribution of the funds to all participating states, (3) an incentive to induce states to participate in the SIB pilot, or (4) a performance award to encourage certain actions or projects, such as fund leverage or particularly innovative project financing. 
While these are just some of the various ways that funds could be distributed, information on how the funds will be distributed will likely prove to be a critical factor in the number of additional states that choose to participate in the pilot program. According to an official in FHWA's Office of Policy, a significant barrier to viable, thriving SIBs is the low number of projects that could generate revenue and thus repay loans made by SIBs. In turn, the states' and regions' population density and fiscal capacity, the acceptance of tolls by the public and legislators, and the availability and cost of the rights-of-way for start-up projects are factors in how much demand there will be for SIB-financed projects. Six of the states that we surveyed told us that an insufficient number of projects with a potential revenue stream would diminish the prospects that their state would participate in the SIB Pilot Program. Repayments for highway projects' debt could be derived in a number of ways; the principal ones include (1) vehicle tolls; (2) other project revenues, such as air or other rights of way and revenues from commercial rest stops; and (3) dedicated public revenues linked to the project, such as revenue districts or special benefit taxes, and general public revenues, such as development or sales taxes. Figure 5 shows the types of revenues that states indicated they would likely use to repay SIB loans. Ten of 11 states said they are considering tolls. However, state officials commented that they expected tolls would generate considerable negative reaction from political officials and the general public. This concern has been highlighted by a recent experience in Washington State, where four of five planned toll projects have been indefinitely suspended because of public and political opposition.
In addition, of the four states we surveyed that were not interested in participating in the SIB Pilot Program, three cited the need to repay SIB debt, specifically an aversion to tolls, as a reason for not wanting to participate. As Arkansas officials noted, the public aversion to debt financing for highways was recently expressed when a state bond referendum lost heavily, with 87 percent voting against it. Some states also expressed uncertainties regarding their legal or constitutional authority to establish a SIB in their state or use some financing options that would involve the private sector. Michigan, for instance, said that it does not currently have the constitutional authority to lend money to the private sector. While Minnesota does have the authority to lend money to the private sector, state officials noted that they would need legislative changes because their authority is currently restricted to lending funds interest-free to private firms to build toll roads. Thus, the state would need the legislative authority to charge interest on loans to the private sector. In addition, Minnesota officials stated that the SIB would need authority to relend the money because any repayment of a transportation loan must currently be deposited into the state's general fund. Texas officials noted that participation in the SIB Pilot Program would be based on a two-phase approach. In the first phase of implementation (1996-97), the Texas SIB would use existing statutory and constitutional authority to provide financial assistance for highway toll projects. In January 1997, legislative changes would be sought to enable the Texas SIB to begin the second phase of the program's implementation and expand the types of recipients and projects eligible for assistance. Another impediment can arise if the SIB exposes the state to debt. Backing SIB financial assistance with the full faith and credit of the state is not legally permitted in some states.
Without the guarantee of the full faith and credit of the state, the SIBs will have to rely on the strength of their project portfolio and initial capitalization as the basis for borrowing. For instance, South Carolina officials noted that the state constitution prohibits the outright guarantee of the full faith and credit of the state for the indebtedness of a private party. In addition, South Carolina officials note that any security or debt financing instrument or guarantee issued by their state SIB is not and should not be construed to be backed by the full faith and credit of the state of South Carolina or its agencies and does not constitute a commitment, guarantee, or obligation of the state. However, these officials do not believe that this prohibition will significantly affect the operations of a SIB because proposed legislation will limit the SIB’s obligations to exclude the full faith and credit of the state. Similarly, Oregon’s Department of Justice advised that Oregon’s constitution prohibits lending the credit of the state. Therefore, SIB agreements will be structured to protect the state from assuming any prohibited obligations. Finally, some infrastructure finance experts question SIBs’ prospects for attracting private sector involvement—one of the program’s primary goals. One principal barrier to attracting private capital is the fact that the Internal Revenue Code restricts private involvement in tax-exempt debt. In the case of state and local bonds, bondholders’ interest earnings are exempt from federal taxes. However, the tax exemption does not apply to a bond issue if (1) the private sector uses more than 10 percent of the proceeds and finances more than 10 percent of the debt or (2) more than 5 percent of the proceeds or $5 million (whichever is less) is used to make loans to the private sector. Exempt facility bonds that meet volume and other statutory requirements are not subject to this rule. 
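The percentage thresholds in these restrictions reduce to simple arithmetic checks. The following is a hedged sketch of the two tests as summarized above; the function and parameter names are illustrative, the statute itself contains many further qualifications, and the exempt facility bond exception is not modeled:

```python
# Simplified sketch of the private-involvement tests summarized above.
# For simplicity, the issue size is treated as both the proceeds and
# the debt; the actual statutory tests carry many qualifications.

def issue_loses_tax_exemption(issue_size, private_use, private_financing,
                              private_loans):
    """Return True if a bond issue fails either simplified test:
    (1) the private sector uses more than 10 percent of the proceeds
        AND finances more than 10 percent of the debt, or
    (2) loans to the private sector exceed the lesser of 5 percent
        of the proceeds or $5 million."""
    private_business_test = (private_use > 0.10 * issue_size and
                             private_financing > 0.10 * issue_size)
    private_loan_test = private_loans > min(0.05 * issue_size, 5_000_000)
    return private_business_test or private_loan_test

# For a $50 million issue, the private-loan cap is min($2.5M, $5M), so
# $4 million in private loans costs the issue its exemption.
assert issue_loses_tax_exemption(50e6, 0, 0, 4e6)
# Private use and financing of $4 million each (8 percent) stay under
# both 10-percent thresholds, so the exemption survives.
assert not issue_loses_tax_exemption(50e6, 4e6, 4e6, 0)
```

Because bondholders accept lower interest rates only in exchange for the tax exemption, exceeding these thresholds raises a leveraged SIB’s borrowing costs, which is the barrier to private participation that the finance experts cited.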
Exempt facility bonds are bonds for which 95 percent or more of the issue’s net proceeds are to be used to provide specified facilities, including airports, docks and wharves, and mass-transit facilities. A number of infrastructure finance experts told us that states that choose to leverage their infrastructure banks will likely do so with tax-exempt debt because bondholders are willing to accept lower interest rates in exchange for the bonds’ tax-exempt status. Restrictions on private involvement in tax-exempt debt are not unique to infrastructure banks. However, as a result of the restrictions, private participation in projects financed by leveraged banks could be inhibited under the terms of existing tax law. SIBs offer the promise of helping to close the gap between transportation needs and available resources by helping to attract other revenue sources. However, some state officials expressed an aversion to debt financing and concern about whether there are enough revenue-generating projects to sustain a SIB. Because of its newness, the pilot program will need time to develop and mature, and a comprehensive assessment of SIBs’ impact on meeting transportation needs can probably only be assessed over the long term. The legislation authorizing the SIB Pilot Program provides that DOT submit a report to the Congress on the financial condition of each infrastructure bank established under the pilot program. This report is to be submitted to the Congress by March 1, 1997. However, because of the start-up time involved in establishing and funding SIBs, the information available on the financial condition of SIBs may be limited at that time. Furthermore, because the Congress only recently approved expanding the SIB Pilot Program to more than 10 states, along with an additional $150 million, it may be too early to comprehensively evaluate the results of the program. 
Once SIBs begin operating, disseminating information on states’ successes and failures with various financing options as the pilot program progresses could help other states use their SIBs more effectively and educate them on the benefits and uses of a SIB. One of the early benefits in certain pilot states is planned action to remove legislative barriers to private financial involvement in transportation projects. The Congress may wish to consider postponing the due date for DOT’s report on the financial condition of the SIBs in the pilot program to a date later than March 1, 1997. We provided DOT with draft copies of this report for review and comment. We met with DOT officials—including representatives from FHWA’s Office of Chief Counsel and Office of Fiscal Services, the Federal Transit Administration’s Office of Budget and Policy, and the Office of the Secretary’s Office of Economics—who agreed with the information presented and considered it a well-prepared, balanced report. DOT agreed with our matter for congressional consideration and thought that postponing DOT’s due date for reporting on the financial condition of SIBs to a date later than March 1, 1997, would allow the program time to develop and enable DOT to provide a more useful, substantive report. Regarding legal barriers to SIBs, officials from FHWA observed that states may be able to create SIBs under existing law. However, some states may have to overcome specific legal restrictions for their SIBs to engage in the full array of financing activities that can be used to address transportation needs. We performed our review from August 1995 through September 1996 in accordance with generally accepted government auditing standards. Please call me at (202) 512-2834 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. 
The National Highway System Designation Act of 1995, which includes the authorization for a State Infrastructure Bank (SIB) Pilot Program, also gives states additional flexibility to use innovative finance tools for highways outside the SIB Pilot Program. This legislation, as well as other statutes, contains provisions related to the following:

Advance Construction: Allows a state to begin a federal-aid eligible project in its transportation plan with its own funds before accumulating the full amount of federal funds.

Use of Federal Funds to Finance Bond and Other Debt Instruments: The Secretary of Transportation may reimburse a state for expenses and costs incurred for interest payments, the retirement of principal, the cost of issuance, or other costs of issuing bonds to finance highways.

Loans of Federal Highway Funds to a Public or Private Entity, With an Increased Federal Share for Toll Roads: The federal share payable for construction of a toll road is increased from 50 to 80 percent.

Increased Flexibility for the State Match: States may apply the value of donated funds, materials, or services to eligible projects against the state match.

In a survey, we asked 15 states how much use, if any, their state would likely make of the above financing tools in the next 5 years. As figure I.1 shows, advance construction was the finance tool that most states (8 of 15) believed they would make great use of in the next 5 years. The second most favored tool was the flexibility to meet state matching requirements by applying the value of donated funds, materials, or services to eligible projects. In considering what role SIBs may play in helping states to expand their ability to finance highways, the objectives of our review were to (1) identify the extent of states’ interest in the pilot program and how states might use SIBs and (2) identify the benefits of and barriers to states’ using SIBs. 
At the request of the Senate Committee on Environment and Public Works and the Chairman of that Committee’s Subcommittee on Transportation and Infrastructure, we also briefly summarize information on states’ interest in using other innovative financing mechanisms that are contained primarily in the National Highway System Designation Act of 1995 in appendix I. To attain these objectives, we reviewed relevant sections of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA), the National Highway System (NHS) Designation Act of 1995, and the Department of Transportation’s (DOT) Test and Evaluation Pilot Project. We reviewed the notice inviting states to apply for the pilot program, the application instructions, and application material submitted by individual states. We selected states for interviews prior to learning whether they applied and were selected to participate in the program. We were interested in obtaining the views of states that wanted to apply for participation in the pilot program as well as states that were not interested. We contacted transportation officials from 16 states and were able to obtain information from 15 states on their views, expectations, and plans (if any) to use SIBs, as well as their expectations on using certain other innovative finance tools. We conducted a telephone survey with the selected states and collected documentation from the surveyed states and from the Federal Highway Administration (FHWA) about states’ SIB plans. The 15 states that provided us with information were Arkansas, California, Florida, Louisiana, Maryland, Michigan, Minnesota, Montana, New Jersey, New York, Ohio, South Carolina, Texas, Virginia, and Washington. These states were judgmentally selected to include states with interest in innovative finance tools and geographical balance. Of the 15 states, 6 applied and were selected, 6 did not apply, and 3 applied for but were not selected to participate in the SIB Pilot Program. 
We reviewed states’ SIB documents and analyzed the results of surveys and interviews with state DOTs to identify common problems with current loan provisions, potential problems with the SIB concept, and states’ interest in and uses for SIBs. Furthermore, we identified major barriers that may prevent SIB benefits from being realized. We also conducted telephone interviews and follow-up interviews with state DOTs’ planning, policy, and finance officials; FHWA officials responsible for innovative finance initiatives; representatives from finance and construction firms; experts from academia, consulting firms, and debt-rating services; and representatives of national policy and labor organizations. We conducted our review from August 1995 through September 1996 in accordance with generally accepted government auditing standards.

(Appendix tables: for each proposed SIB-assisted project, the estimated total cost; the planned form and amount of SIB assistance, such as a loan, line of credit, loan guarantee, or credit enhancement; the likely repayment source, such as tolls, local taxes, user fees, lease payments, or bond proceeds; the project’s status; and its target completion date.)
Major Contributors to This Report: Jonathan T. Bachman, Matthew W. Byer, Helen T. Desaulniers, David G. Ehrlich, Gary L. Jones, Yvonne C. Pufahl, Miriam A. Roskin, and Phyllis F. Scheinberg.
Pursuant to a congressional request, GAO reviewed states' interest in establishing state infrastructure banks (SIB), focusing on the: (1) extent of states' interest in the SIB pilot program and how states might use SIB; and (2) benefits and barriers to states using SIB. GAO found that: (1) 15 states applied for the 10 slots in the SIB pilot program; (2) these states generally have large and growing populations that need additional highway construction; (3) most of the states surveyed indicated that SIB would probably be used to help fund less than 10 percent of their state transportation projects in the next 5 years; (4) officials from 8 states believe that the most important benefit of using SIB over the next 5 years would be the expedited completion of state transportation projects; (5) 8 states believe that the absence of new federal funds to capitalize SIB diminished the likelihood that they would participate in the SIB pilot program; (6) the fiscal year 1997 Department of Transportation (DOT) appropriation provided $150 million for SIB, and how the funding is allocated could affect the number of states applying for the pilot program; (7) although a primary SIB benefit is that financing will be repaid and can be recycled to future transportation projects, some states are averse to debt financing and concerned about whether there are enough revenue-generating projects to sustain SIB; (8) some infrastructure financing experts question SIB prospects for attracting private-sector involvement; and (9) states expressed varying degrees of interest in other financing mechanisms provided for primarily in the National Highway System Designation Act of 1995.
In recent years, federal agencies have been making greater use of interagency contracting—a process by which agencies can use another agency’s contracting services or existing contracts already awarded by other agencies to procure many goods and services. An agency can enter into an interagency agreement with a servicing agency and transfer funds to the servicing agency to conduct the acquisition on its behalf, or an agency can order directly from a servicing agency’s contract, such as the GSA schedules or GWACs. When funds are transferred to another agency, the contracting service can be provided through entrepreneurial, fee-for-service organizations, which are government-run but operate like businesses. Interagency contracts are designed to leverage the government’s aggregate buying power and simplify procurement of commonly used goods and services. In this way, the contracts offer the benefits of improved efficiency and timeliness in the procurement process. Determining the value of a particular contracting method includes considering benefits such as timeliness and efficiency as well as cost, including price and fees. Although interagency contracts can provide the advantages of timeliness and efficiency, use of these types of vehicles can also pose risks if they are not properly managed. GAO designated management of interagency contracting a governmentwide high-risk area in 2005. A number of factors make these types of contracts high risk, including their rapid growth in popularity, their administration and use by some agencies that have limited expertise with this contracting method, and their contribution to a much more complex procurement environment in which accountability has not always been clearly established. In an interagency contracting arrangement, both the agency that holds the contract and the agency that makes purchases against it share responsibility for properly managing its use. 
However, these shared responsibilities often have not been well defined. As a result, our work and that of some inspectors general have found cases in which interagency contracting has not been well managed to ensure that the government was getting good value. For example, in our review of the Department of Defense’s (DOD) use of two franchise funds, we found that the organizations providing these services did not always obtain the full benefits of competitive procedures, did not otherwise ensure fair and reasonable prices, and may have missed opportunities to achieve savings on millions of dollars in purchases. In another review, we found task orders placed by DOD on a GSA schedule contract did not satisfy legal requirements for competition because the work was not within the scope of the underlying contract. Recent inspector general reviews have found similar cases. For example, the Inspector General for the Department of the Interior found that task orders for interrogators and other intelligence services in Iraq were improperly awarded under a GSA schedule contract for information technology services. The Federal Acquisition Regulation (FAR) is the primary regulation governing how most agencies acquire supplies and services with appropriated funds. The regulation provides general guidance for interagency agreements that fall under the authority of the Economy Act and for the GSA schedules and GWACs. The FAR precludes agency acquisition regulations that unnecessarily repeat, paraphrase, or otherwise restate the FAR; limits agency acquisition regulations to those necessary to implement FAR policies and procedures within an agency; and provides for coordination, simplicity, and uniformity in the federal acquisition process. There are several types of interagency contracting. For more information on those included in our review, see appendix II. 
DHS spends significant and increasing amounts through interagency contracting—a total of $6.5 billion in fiscal year 2005, including $5 billion through interagency agreements and about $1.5 billion by placing orders off other agencies’ contracts (see fig. 1). DHS’ total spending on interagency contracting increased by about 73 percent in just 1 year. DHS was established as of March 1, 2003, by merging the functions of 23 agencies and organizations that specialize in one or more aspects of homeland security. OCPO is responsible for creating departmentwide policies and processes to achieve integration and to manage and oversee the acquisition function but does not have enforcement authority to ensure that initiatives are carried out. There are seven acquisition offices within DHS that pre-date the formation of DHS and continue to operate at the components. OPO was formed with the new department to serve the newly established entities and those components that did not have a separate procurement operation. Of those that pre-date DHS, the Coast Guard and CBP provide different examples of the types of components that formed DHS. The Coast Guard, previously under the Department of Transportation, already had an extensive procurement operation, whereas CBP was created by combining the United States Customs Service, formerly part of the Department of the Treasury; the Border Patrol and the inspectional parts of the Immigration and Naturalization Service; and portions of the Department of Agriculture’s Animal and Plant Health Inspection Service. Thus, CBP has been faced with the added challenge of creating a procurement organization to meet its new mission. Our prior work has found that an effective acquisition organization has in place knowledgeable personnel who work together to meet cost, quality, and timeliness goals while adhering to guidelines and standards for federal acquisition. 
While DHS has developed guidance on the use of interagency agreements—the largest category of interagency contracting at DHS, which amounted to $5 billion in fiscal year 2005—it does not have specific guidance for other types of interagency contracting, including GSA schedules and GWACs, which accounted for almost $1.5 billion in fiscal year 2005. Moreover, we found that some DHS users may have lacked expertise in the proper use of interagency contracts. Although some DHS acquisition officials believe the FAR provides adequate guidance on the use of interagency contracts, such as the GSA schedules, our prior work and inspector general reviews have found numerous cases in which these contracting methods have not been properly used. For example, users have requested work that was not within the scope of the contract and administrators have not ensured fair and reasonable prices. Recognizing this concern, other large agencies, such as DOD and the Department of Energy, have identified the need to carefully manage the use of these contracts and have issued supplemental guidance and emphasized training programs to mitigate these risks. DHS departmentwide acquisition guidance covers interagency agreements but not other types of interagency contracting. In December 2003, DHS issued the Homeland Security Acquisition Regulation and the Homeland Security Acquisition Manual to provide departmentwide acquisition guidance. In addition, DHS issued a departmentwide directive on how to use interagency agreements by which funds are transferred to other agencies to award and administer contracts or to provide contracting services on behalf of DHS. However, as we reported in March 2005, the directive was not being followed for purchases made through these agreements. For example, there was little indication that required analyses of alternatives were performed or that required oversight was in place. 
Although DHS began revising the directive in fiscal year 2004, the revisions have yet to be issued. According to OCPO officials, the office’s limited policy and oversight staff provide assistance to the components as needed, which takes time away from acquisition policy efforts, such as developing guidance. For example, OCPO officials provided contracting assistance to the Federal Emergency Management Agency in the response to Hurricanes Katrina and Rita. To supplement departmentwide DHS guidance on interagency agreements, each of the components we reviewed has issued some implementing guidance. OPO issued guidance addressing the appropriate use of interagency agreements that requires program officials and contracting officers to research other available contract vehicles. In contrast, CBP guidance addresses the goals of an analysis of alternatives but emphasizes the process and the documentation necessary to execute the interagency agreement. The Coast Guard’s supplemental guidance focuses mainly on the ordering and billing procedures for interagency agreements. However, none of the components we reviewed had implementing guidance for other types of interagency contracts. While DHS acquisition officials acknowledge the need to manage the risks of interagency agreements, some do not see other types of interagency contracting, such as the GSA schedules and GWACs, as needing the same type of attention and believe sufficient guidance is available in the FAR. In fiscal year 2005, the three components we reviewed spent a total of $832 million through GSA schedules, GWACs, and other interagency contracts (see table 1). This is a 53 percent increase over the prior year. 
We have previously reported that use of interagency contracts demands a higher degree of business acumen and flexibility on the part of users and administrators than in the past, and acquisition officials need sufficient training and expertise to ensure the proper use of these types of contracts in an increasingly complex procurement environment. During our review, we identified several examples that showed that DHS may not have obtained a good value for millions of dollars in spending and indicated a need for improved training and expertise (see table 2). Several contracting officials stated that additional training is needed in the use of interagency contracts but that there was not much training available. In addition, other contracting officials told us that they were not aware of the range of available alternatives for interagency contracting. To ensure the proper use of all types of interagency contracts, other large procuring agencies, including DOD and the Department of Energy, have issued guidance to supplement the FAR and have emphasized specialized training. DOD is the largest user of other agencies’ contracts and the Department of Energy reported that it spent about $1.7 billion on other agencies’ contracts in fiscal year 2005—a substantial amount, but less than DHS. For example, DOD issued special guidance to ensure that proper procedures and management practices are used when using other agencies’ contracts including GSA schedules. The guidance requires DOD acquisition personnel to evaluate, using specific criteria, whether using a non-DOD contract for a particular purchase is in the best interest of the department. The criteria include the contract’s ability to satisfy the requirements in a timely manner and provide good value. DOD’s guidance also emphasizes using market research to help identify the best acquisition approach to meet the requirement and states that the contracting officer should document this research. 
The Department of Energy also has issued guidance addressing the proper use of GSA schedules and GWACs. This guidance emphasizes that these contracts are not to be used to circumvent agency regulations and that the contracting officer should ensure that the original order and all future orders are within the scope of the contract. In the case of the GSA schedules, the contracting officer should seek and document advice from GSA’s contracting officer on the proper use of the schedules whenever an issue is in doubt. In 2004, GSA took a step toward improving the management of GSA contracts and services by implementing the “Get It Right” program in part to secure the best value for federal agencies, improve education and training of the federal acquisition workforce on the proper use of GSA contracts and services, and ensure compliance with federal acquisition policies, regulations, and procedures. As part of the program, DOD and GSA have partnered to offer updated training on the proper use of GSA schedules. In addition, the Department of Energy has instituted training to emphasize the proper use and the need for planning when using the GSA schedules and GWACs. Interagency contracts are intended to offer a simplified procurement process whereby users commonly rely on planning that has already been conducted by the agency that established the contract to ensure that the prices are competitive. However, our recent work, as well as the work of others, has found that not all interagency contracts provide good value when considering both timeliness and cost. This suggests the need for evaluating the selection of an interagency contract. According to DHS contracting officials the benefits of speed and convenience—not total value including cost—have often driven decisions to choose interagency contracting vehicles. As of July 2005, DHS has required an analysis of alternatives for all purchases. 
Of the 17 cases in our review, this analysis was only required for the four interagency agreements. None of these interagency agreements indicated that the required analysis was conducted. Without an evaluation of interagency contracting alternatives, DHS users cannot be sure they are obtaining a good value. A sense of urgency has prevailed in DHS’ acquisition decision-making process, according to officials from the Office of Inspector General. For example, one official said that expediting program schedules and contract awards limits time available for adequate procurement planning, which can lead to higher costs, schedule delays, and systems that do not meet mission objectives. Eight of the 16 contracting officers we interviewed at OPO, CBP, and Coast Guard told us that using interagency contracts was a quick and convenient way to acquire needed products and services. A few DHS contracting officers felt that interagency contracts—in particular, GSA schedules—were the only viable alternatives given time constraints. In some cases, officials told us that it could take 4 to 6 months to establish and obtain goods and services through an in-house contract. In other cases, officials stated that purchase requests were received too close to the end of the fiscal year to use anything other than an interagency contract. None of the contracting officials said they chose to use interagency contracts because they also provided good value to DHS in terms of total cost. Interagency contracts are designed to be convenient to use and require less planning than entering into a full and open competition for a new contract, and users commonly rely on planning that has already been conducted by the agency that established the contract. However, we found that GSA schedule prices may not always be the most competitive, and agencies do not always obtain the required competition when using the schedules, thus, there is no assurance that these contracts are providing good value. 
In another review, we found that fees charged by the agency that provides the contracting service may not make these contracts cost-effective in some cases. Purchasing agencies also sometimes pay a fee on top of a fee for the use of another agency’s contract because servicing agencies may be using other agencies’ contracts—including GSA schedules—to make purchases. Fees charged for the use of GWACs also range between 0.65 and 5 percent. Given these concerns, evaluating the selection of an interagency contract is a sound management practice used by other large agencies. Pursuant to DHS acquisition policy, purchases made through interagency agreements require an analysis of alternatives to determine that the approach is in the government’s best interest; however, in the four cases we reviewed that fell under this requirement, there was no indication that this analysis was performed. In one case, CBP used FedSim, one of GSA’s contracting service providers, to place an order for $9 million for information technology support for systems security. In another case, CBP transferred $5 million to a franchise fund for the purchase of license plate readers. In the two remaining cases, OPO used FedSim to place orders totaling about $45 million against one contract to provide information technology support for the Homeland Secure Data Network. In these examples, there was little evidence that DHS users determined whether this was the best method for acquiring the needed services. These findings are consistent with our March 2005 review, in which we did not find an analysis of alternatives in 94 percent of the cases where it was required. Recent internal reviews at OPO and CBP cited similar findings, in which evidence that a determination and findings or an analysis of alternatives had been conducted was missing. 
In our review of 17 cases, we also found several examples where contracting officers placed orders to fulfill what were perceived to be critical needs, for convenience without comparing alternatives, or to spend funds at the end of the fiscal year without obtaining competing proposals. While an analysis of alternatives was not required in most of these cases, performing such an analysis could have helped DHS users to address some of the known concerns about these types of contracts to ensure that they obtained good value for the department (see table 3). As of July 2005, DHS has required an analysis of alternatives for all acquisitions, including all types of interagency contracts. DHS policy now states that all acquisition plans must include an analysis of alternatives, including a discussion of why the acquisition process was chosen and the processes considered. The guidance states that the plan must contain information about the type of contract selected. However, the guidance does not include factors to consider or specific criteria for making a good choice among alternative contracting options. We have found that some agencies have established factors to consider in making this decision. For example, DOD and the Department of Energy have established factors that incorporate considerations of value, policy and regulatory requirements, customer needs, and administrative responsibilities. Following are some of the factors these agencies use: Value: cost (including applicable fees or service charges); whether using an interagency contract is in the best interest of the department. Policy and regulatory requirements: departmental funding restrictions; departmental policies on small business, performance-based contracting, and competition. Customer needs: schedule; scope of work; unique terms, conditions, and requirements. Contract administration: oversight, monitoring, and reporting requirements. 
Although DHS’ spending through interagency contracting totals billions of dollars annually and increased by 73 percent in the past year, the department does not systematically monitor its use of these contracts to assess whether this method for acquiring goods and services is being properly managed and provides good outcomes for the department. While OCPO has established a framework for an acquisition oversight program, the program is not designed to assess the outcomes of different contracting methods, including interagency contracting. According to officials, DHS’ acquisition oversight program has been hindered by limited resources and authority. DHS does not systematically monitor spending on its interagency contracts, which totaled $6.5 billion in fiscal year 2005—37 percent of DHS’ procurement spending for that year. This type of monitoring could provide DHS with useful information to assess its use of this contracting method. For example, as part of its strategic sourcing initiative, DHS officials said they reviewed the components’ use of information technology and telecommunications contracts and determined that the department could achieve savings of $22.5 to $45 million in fees and reduced prices by establishing its own departmentwide contracts. However, DHS does not have available information to make comparable assessments for interagency contracts. For example, DHS officials were not able to readily provide data on the amounts spent through different types of interagency contracts. To respond to our request for information, OCPO prepared a special report on the use of GSA schedules and GWACs. For information on interagency agreements, OCPO had to request data from components. Ultimately, however, we had to compile a summary and clarify information obtained from components. 
DHS also does not collect data on interagency contracting, such as the amount of service fees paid to other agencies for the use of contracting services or vehicles, and the components, which pay the fees, also do not collect this data. In prior work in this area, we have found that these fees can range from less than 1 percent to 8 percent. In March 2005, we found that OPO, the largest user of interagency contracts among the components, alone paid $12.9 million in service fees in fiscal year 2004. Given that the volume of DHS’ interagency contracting has increased by $2.7 billion, or about 73 percent, since fiscal year 2004, it is likely that the fees paid also have increased substantially. This lack of data is not unique to DHS. Although the need to collect and track data on interagency contracting transactions has become increasingly important governmentwide, there is no governmentwide system to collect this data. In fact, the Office of Management and Budget has an effort underway to collect basic information on interagency contracting from all federal agencies. While each of the components we visited has established its own internal reviews to evaluate contracting practices, including the use of interagency contracts, these reviews are compliance-based and are not designed to evaluate the outcomes of interagency contracting. For example, OPO, which has taken a comprehensive approach, established procedures for reviewing and approving procurement actions. The review includes an assessment of the documentation for compliance with acquisition regulations or policies; soundness of the acquisition strategy; use of business judgment; and completeness, consistency, and clarity. 
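Because service fees compound when one agency's order passes through another agency's contract, the "fee on top of fee" effect can exceed either fee alone. A minimal sketch of that arithmetic follows; the order value, the 4 percent assisting-agency fee, and the 0.75 percent schedule fee are hypothetical figures chosen from within the fee ranges cited in this report, not actual DHS data:

```python
# Hypothetical illustration of "fee on top of fee" in interagency contracting.
# A purchasing agency pays a servicing agency's fee; if the servicing agency
# itself buys through a GSA schedule, the schedule's fee is embedded as well.

def total_cost(base_price, fee_rates):
    """Apply each agency's fee rate in sequence (multiplicatively)."""
    cost = base_price
    for rate in fee_rates:
        cost *= (1 + rate)
    return cost

order_value = 1_000_000      # $1 million order (hypothetical)
schedule_fee = 0.0075        # 0.75% GSA schedule fee (hypothetical)
servicing_fee = 0.04         # 4% assisting-agency fee (hypothetical)

stacked = total_cost(order_value, [schedule_fee, servicing_fee])
fees_paid = stacked - order_value
# The stacked fees total $47,800 on this order, slightly more than
# the $47,500 a simple sum of the two rates would suggest.
```

Because each fee is applied to a base that already includes the earlier fee, monitoring only the headline fee rate understates what the purchasing agency actually pays.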
OPO also had a study completed to determine whether its contracts, task orders, interagency agreements, and other transactions were awarded and administered in compliance with procurement laws, regulations, and internal DHS and OPO operating policies and procedures. While the review found that much improvement was needed to comply with policies and procedures, it was not designed to address areas such as timeliness, total cost including price and fees paid, and customer service to determine whether a particular contract method resulted in the best outcome. In December 2005, OCPO issued a policy that provides a framework for a departmentwide acquisition oversight program. However, the framework does not evaluate the outcomes of different contracting methods, including interagency contracting, to determine whether the department obtained good value. Additionally, the Chief Procurement Officer lacks the authority needed to ensure the department’s components comply with its procurement policies and procedures that would help to establish an integrated acquisition function. The framework includes four key reviews (see table 4). According to DHS officials, the acquisition planning review was operational as of August 2006, and an on-site review was ongoing at the Federal Emergency Management Agency. DHS plans to implement the full program in fiscal year 2007. According to OCPO officials, while DHS expects to track interagency contracting through this framework, it will not gather data to determine whether these contracts were used effectively. For example, through the operational status reviews, DHS plans to track the number and dollar value of orders placed using interagency agreements and GSA schedules and GWACs. However, these reviews will not collect data on cost (including the price of goods and services and fees paid), timeliness, or customer service that would help DHS evaluate whether specific interagency contracts were a good value. 
In addition, the Chief Procurement Officer, who is held accountable for departmentwide management and oversight of the acquisition function, lacks the authority and has limited resources to ensure compliance with acquisition policies and processes. As of August 2006, according to OCPO officials, only five staff were assigned to departmentwide oversight responsibilities for $17.5 billion in acquisitions. According to OCPO officials, their small staff faces the competing demands of providing acquisition support for urgent needs at the component level. As a result, they have focused their efforts on procurement execution rather than oversight. Officials also noted that limited resources have delayed the oversight program’s implementation. DHS’ acquisition function was structured to rely on cooperation and collaboration among DHS components to accomplish the department’s goals. While this structure was intended to make efficient use of resources departmentwide, it has limited the Chief Procurement Officer’s ability to effectively oversee the department’s acquisitions and manage risks, and has ultimately wasted time and other resources. In our prior work, we have found that in a highly functioning acquisition organization, the chief procurement officer is in a position to oversee compliance with acquisition policies and processes by implementing strong oversight mechanisms. In March 2005, we recommended that OCPO be provided sufficient enforcement authority and resources to provide effective oversight of DHS’ acquisition policies and procedures. In a 2005 review of the department’s organization, the Secretary focused on mission initiatives and, as of August 2006, has not changed the structure of the operational functions to provide additional authority to the Chief Procurement Officer. 
One of the largest procuring agencies in the federal government, DHS relies on contracts for products and services worth several billions of dollars to meet its complex homeland security mission. Effective acquisition management must include sound policies and practices for managing the risks of large and rapidly increasing use of other agencies’ contracts. While the use of these types of contracts provides speed and convenience in the procurement process, the agencies that manage the contracts and DHS users have not always adhered to sound contracting practices. Guidance and training that could help DHS to address risks are not in place; planning was not always conducted; and adequate monitoring and oversight were not performed. While DHS has developed a framework for an oversight program, until such oversight is in place, DHS cannot be sure that taxpayers’ dollars are being spent wisely and purchases are made in the best interest of the department. While the challenges to effective management of an acquisition function in any organization with a far-reaching mission are substantial, these challenges are further complicated at DHS by an organizational structure in which the Chief Procurement Officer lacks direct authority over the components. Without such authority, the department cannot be sure that necessary steps to implement improvements to its acquisition function will be taken. 
To improve the department’s ability to manage the risks of interagency contracting, we recommend that the Secretary of Homeland Security consider the adequacy of the Office of the Chief Procurement Officer’s resources and implement the following three actions: develop consistent, comprehensive guidance, and related training to reinforce the proper use of all types of interagency contracts to be followed by all components; establish, as part of the department’s planning requirement for an analysis of alternatives, criteria to consider in making the decision to use an interagency contract; and implement oversight procedures to evaluate the outcomes of using interagency contracts. Because the Secretary has not taken action to ensure departmentwide acquisition oversight, Congress should require the Secretary to report on efforts to provide the Chief Procurement Officer with sufficient authority over procurement activities at all components. We provided a draft of this report to DHS for review and comment. In written comments, DHS concurred with all of our recommendations and provided information on what action would be taken to address them. The department’s comments are reprinted in appendix III. Regarding the recommendation for guidance and training to reinforce the proper use of all interagency contracts, DHS stated that it will issue a revised management directive in the near future. This directive will require the reporting of data on interagency agreements. DHS also will issue additional direction to the components on reporting the use of other types of interagency contracts. With regard to training, the OCPO will introduce specific training with respect to all types of interagency contracting for all contracting personnel during fiscal year 2007. With regard to establishing criteria to consider in making the decision to use an interagency contract, DHS will revise the acquisition planning guide to address this recommendation. 
With regard to implementing oversight procedures to evaluate the outcomes of using interagency contracts, DHS plans to incorporate oversight procedures assessing the proper use of interagency contracts and agreements into its acquisition oversight program. Concerning the overall use of interagency contracts, the department’s comments stated that it is the goal of the OCPO to reduce the number and value of contracts awarded through the use of interagency contracts or agreements. This will be accomplished in part through the use of new departmentwide contracts for information technology equipment and services. We believe this is a positive step toward improving DHS’ contract management. In responding to the Matter for Congressional Consideration that the Secretary report on efforts to provide the Chief Procurement Officer with sufficient authority over procurement activities, DHS noted some steps that the Secretary has taken to improve acquisition oversight: revised the investment review process, placing the Chief Procurement Officer in a key position to review and provide oversight of the Department’s most critical programs; supported an increase of 25 OCPO positions to improve acquisition; and directed the Chief Procurement Officer to work with all component heads to report on departmentwide progress in key acquisition areas. While these actions should help, they do not provide the Chief Procurement Officer with sufficient authority to ensure effective oversight of DHS’ acquisition policies and procedures, and we continue to believe that the Congress should require the Secretary to report on efforts to address this lack of authority. We are sending copies of this report to the Secretary of the Department of Homeland Security, and to other interested agencies and congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions about this report or need additional information, please contact me at (202) 512-4841 ([email protected]). Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff making key contributions to this report were Amelia Shachoy, Assistant Director; Greg Campbell; Christopher Langford; Eric Mader; Bill McPhail; Russ Reiter; Karen Sloan; and Karen Thornton. To determine the level of interagency contracting at the Department of Homeland Security (DHS), we requested data from each component on fiscal year 2005 purchases made through all types of interagency contracts. We compiled a summary of purchases made through interagency agreements, the General Services Administration’s schedules, and governmentwide acquisition contracts (GWAC) from the individual reports we received from each component. We found that the Office of Procurement Operations (OPO), Customs and Border Protection (CBP), and Coast Guard were the largest users of interagency contracts in fiscal year 2005. Based on a review of this data, we selected 17 cases, totaling $245 million. Interagency contracting actions for these components represented a sample of GSA schedule, GWAC, and interagency transactions made through fee-for-service contracting providers. See table 5. The 17 cases were selected to represent procurement actions of $5 million or more at three DHS components. Because our findings included similar problems across these activities, we believe they represent common problems in DHS’ procurement process. To assess the reliability of this data, we compared the data obtained from DHS to the data maintained in the Federal Procurement Data System-Next Generation (FPDS-NG). Based upon the comparison, we determined that the data were sufficiently reliable for our purposes. 
To assess the extent to which DHS manages the risks of interagency contracting, we reviewed guidance and oversight at the departmental level and at the three components in our sample—OPO, CBP, and the Coast Guard—and we interviewed officials in the Office of the Chief Procurement Officer (OCPO) and senior officials of the components under review. To determine how other large agencies address the management risks of interagency contracting, we reviewed relevant guidance and training at the Departments of Defense and Energy. We also reviewed relevant GAO and Inspector General reports. To assess DHS planning for the use of interagency contracts, we conducted fieldwork at CBP’s National Acquisition Center in Indianapolis, Indiana; National Data Center in Springfield, Virginia; and at the Coast Guard’s procurement office in Norfolk, Virginia, and reviewed contract files and completed a data collection instrument for each of the 17 cases we selected. We also interviewed the contracting officer, program manager, and Contracting Officer’s Technical Representative to discuss each case. In conducting our review, we identified the reasons for using interagency contracts and the reasons for choosing a particular interagency contract. We performed our review between February and August 2006 in accordance with generally accepted government auditing standards.
The Department of Homeland Security (DHS) has some of the most extensive acquisition needs within the federal government. In fiscal year 2005, DHS spent $17.5 billion on contracted purchases, $6.5 billion, or 37 percent, of which was through the use of other agencies' contracts and contracting services, a process known as interagency contracting. While these types of contracts offer the benefits of efficiency and convenience, in January 2005, GAO noted shortcomings and designated the management of interagency contracting as a governmentwide high-risk area. Given the department's critical national security mission and the results of our earlier work, GAO reviewed the extent to which DHS manages the risks of interagency contracting and assessed DHS' guidance, planning, and oversight of interagency contracting. DHS has developed guidance on how to manage the risks of some but not all types of interagency contracts. The department has guidance for interagency agreements--the largest category of interagency contracting at the department--but does not have specific guidance for using other types of contracts such as the General Services Administration (GSA) schedules and governmentwide acquisition contracts (GWAC), which amounted to almost $1.5 billion in fiscal year 2005. Moreover, in some cases we found users may have lacked expertise that could be addressed through guidance and training on the use of these types of contracts. DHS did not always consider alternatives to ensure good value when selecting among interagency contracts. While this contracting method is often chosen because it requires less planning than establishing a new contract, evaluating the selection of an interagency contract is important because not all interagency contracts provide good value when considering timeliness and cost. As of July 2005 DHS has required planning and analysis of alternatives for all acquisitions. 
In this review, we found that in all four cases for which an analysis of alternatives was required, it was not conducted. DHS officials said benefits of speed and convenience--not total value including cost--have often driven decisions to choose these types of contracts. DHS does not systematically monitor its total spending on interagency contracts and does not assess the outcomes of its use of this contracting method. According to officials, DHS' acquisition oversight program has been hindered by limited resources and authority. As of August 2006, the Office of the Chief Procurement Officer had five staff assigned to departmentwide oversight responsibilities for $17.5 billion in acquisitions. In March 2005, GAO recommended that the Chief Procurement Officer be provided sufficient authority to provide effective oversight of DHS' acquisition policies and procedures. Without this authority, DHS cannot be certain that acquisition improvements are made.
OSHA is responsible for enforcing the provisions of the Occupational Safety and Health Act of 1970 for about half the states; the remaining 26 states have been granted authority to set and enforce their own safety and health standards under a state plan approved by OSHA. At present, 22 of these 26 states enforce occupational safety and health provisions under a state plan covering all worksites, and have their own VPP programs. The other 4 states have plans covering only public sector employer worksites; VPP sites in these 4 states are part of OSHA’s federally managed VPP. To help ensure compliance with federal safety and health regulations and standards, OSHA conducts enforcement activities and provides compliance assistance to employers. Enforcement represents the preponderance of agency activity and includes safety and health inspections of employer worksites. Among its compliance assistance efforts, OSHA established the VPP in 1982 to recognize worksites with safety and health systems that exceed OSHA’s standards. A key requirement for participation in the VPP is that worksites have low injury and illness rates compared with the average rates for their respective industries. The VPP is divided into three programs (see table 1): the Star, Merit, and Star Demonstration programs. The Star program has the most stringent requirements because it is for worksites with exemplary safety and health systems that successfully protect employees from fatality, injury, and illness. OSHA’s Directorate of Cooperative and State Programs—the national office—oversees the VPP activities of each of its 10 regional and 80 area offices. Each regional office has a regional administrator, who coordinates all of the region’s activities, including the VPP, and a VPP manager, who implements and manages the program. The VPP manager conducts outreach to potential VPP sites and encourages participants to continually improve their safety and health systems. 
In addition, the VPP manager coordinates the region’s activities related to the program, such as reviews of applications submitted by potential sites and on-site reviews of VPP sites. Employer worksites apply to OSHA to participate in the VPP. They must meet a number of requirements, including having an active safety and health management system that takes a systems approach to preventing and controlling workplace hazards. As shown in figure 1, OSHA has defined four basic elements of a comprehensive safety and health management system. These requirements must be in place for at least 1 year. In addition, there must be no ongoing enforcement actions, such as inspections, at the worksites or willful violations cited by OSHA within the 3-year period prior to the site’s initial application to participate in the VPP. VPP sites are also required to have injury and illness rates below the average rates for their industries published by the Bureau of Labor Statistics. These rates must be below the average industry rates for 1 of the most recent 3 years. VPP sites are required to report their injury and illness rates to OSHA’s regional offices annually. The VPP managers review this information and send summary reports to the national office. For each calendar year, the national office compiles a summary report of injury and illness rates for VPP sites participating in the program. OSHA determines whether worksites are qualified to participate in the VPP through its approval process, which includes an on-site review of each worksite. According to OSHA guidance, the regional offices are required to conduct an on-site review of each potential VPP site to ensure that the four elements are in place and to determine how well the site’s safety and health management system is working. As part of these reviews, the regions are required to verify the sites’ injury and illness rates, interview employees and management, and walk through the facilities. 
This initial on-site review usually lasts about 4 days and involves approximately three to five OSHA staff, according to OSHA’s VPP policies. OSHA also uses volunteers from other VPP sites—Special Government Employees who have been trained by OSHA—to conduct some portions of these reviews. OSHA’s national office is responsible for the initial approval of all new VPP sites. VPP sites in the Star program must also be reapproved every 3 to 5 years after an on-site review is conducted by the region. OSHA’s approval process is outlined in table 2. Once they have been approved, VPP sites must commit to continuously improving the safety and health of their worksites, maintaining low injury and illness rates, and reporting annually to OSHA on the status of their safety and health systems. The VPP sites’ annual reports detail their efforts to continuously improve and detail the sites’ injury and illness rates. OSHA’s regional offices review these reports to ensure that the VPP sites’ injury and illness rates have not increased beyond the program’s requirements. According to OSHA’s VPP Policies and Procedures Manual, OSHA must request that a site withdraw from the VPP if it determines that the site no longer meets the requirements for VPP participation. OSHA may also terminate a site for failure to maintain the requirements of the program. The national office is responsible for collecting the injury and illness data reported annually by VPP sites to the regions. If VPP sites’ 3-year average rates rise above the average rates for their industries published by the Bureau of Labor Statistics, the regions must place the site on a rate-reduction plan if an on-site review is not conducted that year or must place the site in a 1-year conditional status if an on-site review is conducted. The regions must also notify the national office of actions they take in response to incidents, such as fatalities and serious injuries, at VPP sites. 
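As an illustration only (this is not OSHA's actual tooling, and the rate figures below are invented), the rate-based follow-up rule described above can be sketched in a few lines. The rate here follows the standard recordable-case formula of cases per 100 full-time workers, i.e., cases × 200,000 / hours worked.

```python
# Hypothetical sketch of the VPP rate rule described above; not OSHA code.
# All rate data are invented for illustration.

def case_rate(recordable_cases: int, hours_worked: float) -> float:
    """Injuries/illnesses per 100 full-time workers (200,000 hours)."""
    return recordable_cases * 200_000 / hours_worked

def required_action(site_rates: list, industry_average: float,
                    onsite_review_this_year: bool) -> str:
    """Apply the follow-up rule: if a site's 3-year average rate exceeds
    the published industry average, the region must impose a
    rate-reduction plan, or 1-year conditional status if an on-site
    review is conducted that year."""
    avg = sum(site_rates) / len(site_rates)
    if avg <= industry_average:
        return "no action: meets rate requirement"
    return ("1-year conditional status" if onsite_review_this_year
            else "rate-reduction plan")

# A site mirroring the magnitudes cited later in the report (10.0 vs. 2.4):
print(required_action([9.0, 10.5, 10.5], industry_average=2.4,
                      onsite_review_this_year=False))  # -> rate-reduction plan
```

The same comparison, with `onsite_review_this_year=True`, would instead yield the 1-year conditional status the report describes.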
The regions are required to review sites’ safety and health systems after such incidents to determine (1) whether systemic changes are needed to prevent similar incidents from occurring in the future and (2) whether the site should remain in the program. The regions may also conduct on-site reviews of VPP sites if they determine that the incidents were related to deficiencies in the sites’ safety and health management systems. The decision to recommend whether a site at which a fatality has occurred should remain in the program is left to the discretion of the regional administrator. The VPP has grown steadily since its inception, with the number of employer worksites in the program more than doubling—from 1,039 sites in 2003 to 2,174 sites in 2008. During this period, the number of sites in the federally managed VPP, representing over two-thirds of all VPP sites, increased at a similar rate as the number of sites in the state managed programs. In 2003, there were 734 sites in the federal VPP and 305 in the state managed VPP. By the end of 2008, both the federal and the state programs had more than doubled to 1,543 and 631, respectively. (See fig. 2.) Although the industries represented in the VPP did not change significantly from 2003 to 2008, there were substantial increases in certain industries. The largest industry in the VPP was the chemical industry, whose number of sites increased 43 percent, from 208 in 2003 to almost 300 in 2008. The motor freight transportation industry, which had only 20 sites in 2003, grew tenfold to just over 200 sites in 2008, due in part to the growth in the number of Postal Service sites. In addition, the number of sites in the electric, gas, and sanitary services industries increased from about 50 sites to more than 200 during the same period. See figure 3 for a comparison of the largest industries represented in the VPP in 2003 and 2008. 
While 4 federal worksites—including the Tobyhanna Army Depot in Tobyhanna, Pennsylvania, and the National Aeronautics and Space Administration Langley Research Center in Hampton, Virginia—have participated in the VPP since the late 1990s, the number of federal worksites increased to almost 10 percent of all VPP sites in 2008. At the end of 2008, almost 200 VPP sites were federal agencies or Postal Service sites. The majority of these sites—157—were post offices, processing and distribution centers, and other postal facilities, while most of the remaining sites were Department of Defense facilities, such as naval shipyards, Army depots, and Air Force facilities. In addition, from 2005 to 2008, 7 OSHA area offices in 1 region were approved as new VPP sites as a result of OSHA’s efforts to have all of its offices participate in the program so that they could be role models for the federal agencies. The average size—based on the number of employees—of VPP sites has decreased over the last 5 years. From 2003 to 2008, the average number of employees at VPP sites decreased from 501 to 408. In addition, the median size of a VPP site decreased from 210 to 145 employees. As shown in figure 4, the proportion of VPP sites with fewer than 100 workers increased from 28 percent in 2003 to 39 percent in 2008. Across all VPP sites, the number of employees covered by the VPP has grown to over 885,000 workers. A key factor influencing growth of the VPP has been OSHA’s emphasis on expansion of the program. For example, in 2003, the Assistant Secretary for Occupational Safety and Health announced plans to expand eligibility for the VPP to reach a larger number of worksites. These plans included adding more federal sites, such as Department of Defense facilities and certain types of construction sites. OSHA’s national office has given each of its 10 regions targets for the number of new sites to be approved each year. 
While the regions did not always meet these targets from fiscal years 2003 to 2008, they generally increased the number of new sites each year, as shown in table 3. Several OSHA regional administrators told us that expanding the program beyond the current level of approved sites will be difficult, given their current resources. Another factor influencing the growth of the VPP is outreach efforts, including participants’ outreach to other employers and employers seeking out the program after hearing about it from OSHA or other employers. According to OSHA officials and VPP participants, outreach efforts focus on the positive benefits of the program, including improved productivity of workers at VPP sites and decreased costs, such as reductions in sites’ workers’ compensation insurance premiums due to lower injury and illness rates. Some employers, such as the Postal Service, also cite avoidance of the costs of workplace injuries—which the National Safety Council estimated as approximately $39,000 per year, per incident in 2007—as one of the benefits of participation. In addition, the national association of VPP participants, the Voluntary Protection Programs Participants’ Association, contributes to program growth through its mentoring program in which current participants help new sites meet the qualifications of the VPP. We interviewed employees from VPP sites, and their perspectives varied. Employees who supported the program told us that the benefits include having a more collaborative partnership between OSHA, management, and workers; establishing a “mindset of safety”; and addressing several safety problems at one worksite that workers had tried for several years to have corrected. Those who did not fully support the program included employees at VPP sites who told us that they recognized some of the benefits of the VPP, but that they had reservations about the program. 
For example, some employees were concerned that, after the application process and initial on-site review had been completed, sites may not maintain the high standards that qualified them for participation. Furthermore, some employees said that the injury and illness rate requirements of the VPP are used as a tool by management to pressure workers not to report injuries and illnesses. OSHA’s internal controls are not sufficient to ensure that only qualified worksites participate in the VPP. First, OSHA’s oversight is limited by the minimal documentation requirements of the program. Second, OSHA does not ensure that its regional offices consistently comply with its policies for the VPP. OSHA’s lack of a policy requiring documentation in the VPP files of actions taken by the regions in response to incidents, such as fatalities and serious injuries, at VPP sites limits the national office’s ability to ensure that regions have taken the required actions. OSHA’s VPP Manual requires regions to review sites’ safety and health systems after such incidents to determine whether systemic changes are needed to prevent similar incidents from occurring in the future and whether the site should remain in the program. However, the manual does not require the regions to document their decisions or actions taken in the VPP files, which would allow OSHA’s national office to ensure that the regions took the appropriate actions. When fatalities, accidents, or other incidents involving serious safety and health hazards occur at any VPP site, OSHA’s policy requires that enforcement staff conduct an inspection of the site. In these cases, the area director is required to notify the VPP manager and send a report of the inspection. The VPP manager is then required to report information on the incidents that occurred to the Assistant Secretary for Occupational Safety and Health, the Director of Cooperative and State Programs, and the regional administrator. 
The decision on whether to conduct an on-site review after such an incident is left to the discretion of the regional administrator based on the results of the enforcement inspection. These reports, however, are not required to be included in the VPP files maintained by the regions. OSHA has a draft policy that sets time frames for retention of documents in the VPP files, but the policy does not contain guidance regarding the types of actions that must be documented in the files. Some regional VPP officials told us that they have requested such guidance from OSHA’s national office, but the national office has not issued a directive on what information should be documented in the files or on how long it should be retained. The OSHA official responsible for overseeing the program did not agree with regional VPP officials, and stated that the VPP Manual addresses the documentation requirements. However, the manual does not require actions taken by the regions in response to fatalities and serious injuries to be documented in the VPP files. From our review of OSHA’s VPP files, we found that there was no documentation of actions taken by the regions’ VPP staff to (1) assess the safety and health systems of the 30 VPP sites where 32 fatalities occurred from January 2003 to August 2008 or (2) determine whether these VPP sites should remain in the program. We obtained information on VPP sites at which fatalities occurred during this period from OSHA’s national office. Although the actions taken by the regional VPP staff were not documented in the VPP files, we were able to determine what actions were taken in response to the fatalities by interviewing regional VPP staff and reviewing the regions’ inspection files for the sites with fatalities. 
The VPP managers told us that they placed 5 of the 30 sites on 1-year conditional status, and that 5 sites voluntarily withdrew from the VPP. OSHA allowed 17 of the sites to remain in the VPP—some in the Star program and some in the Merit program—until their next regularly scheduled on-site reviews. One of these sites had 3 separate fatalities over the 5-year period. Another site received 10 violations related to a fatality, including 7 serious violations and 1 violation related to discrepancies in the site’s injury and illness logs. OSHA allowed this site to continue to participate in the VPP as a Star site. Three sites had not been reviewed by the regional VPP staff because OSHA’s enforcement staff had not completed their investigations of the sites. As a result, sites that did not meet the definition of the VPP’s Star program to “successfully protect employees from fatality, injury, and illness” have remained in the program. OSHA’s oversight of the VPP is limited because it does not have internal controls, such as management reviews by the national office, to ensure that its regions consistently comply with VPP policies for verifying sites’ injury and illness rates and conducting on-site reviews. Although relatively low injury and illness rates are a key criterion for program participation, the regions do not always verify sites’ rates according to OSHA’s policies. For example, the VPP Manual requires that, prior to conducting an on-site review, the region must obtain written approval from the national office allowing access to medical information related to injuries and illnesses at the site. However, our review of the VPP files and information from OSHA’s national office showed that, for almost 80 percent of the cases, regions did not obtain such written approval prior to conducting their on-site reviews. 
As a result, the regions did not have access to workers’ medical records needed to verify sites’ injury and illness rates, and the national office had no assurance that the regions verified these rates as required. In addition, OSHA’s national office did not review the actions taken by the regions to ensure that they followed up when VPP sites’ injury and illness rates rose above the minimum requirements for the program. From our review of OSHA’s 2007 summary report of injury and illness rates for VPP sites, we found that, for 12 percent of the sites, at least one of their 3-year average injury and illness rates was higher than the average injury and illness rates for their industries. For example, one VPP site reported a 3-year average injury and illness rate of 10.0, which was 7.6 points higher than the industry average of 2.4. Similarly, another site’s 3-year average injury and illness rate was 7.5 points higher than the industry average. We found that this site’s injury and illness rate had also been above the industry averages for each of the previous 4 years, yet it remained in the VPP Star program. OSHA’s national office does not require regions to report information on actions taken to ensure that sites lower their injury and illness rates when these rates rise above the industry averages. The national office, therefore, cannot ensure that the regions take action as required. As a result, some sites that have not met a key requirement of the VPP have remained in the program. Finally, some regions conducted less comprehensive reviews of VPP sites than those required by the VPP Manual. In an effort to leverage its limited resources, OSHA permitted two regions to conduct abbreviated on-site reviews as part of a pilot program in which the regions were allowed to evaluate only one or two elements of sites’ safety and health management systems, rather than all four elements. 
From our review of the VPP files, we estimated that, from 2000 to 2006, OSHA conducted abbreviated on-site reviews of almost 10 percent of its sites. As a result, some sites for which OSHA reviewed only two of the four elements may not have met all of the minimum requirements to participate in the program. According to the OSHA official responsible for managing the VPP, the agency discontinued its use of these abbreviated reviews after learning from the pilot that it is difficult to isolate certain program elements, and that evaluating only one or two elements leaves out key aspects of the program because the four elements are interrelated. OSHA’s efforts to assess the performance of the VPP and evaluate its effectiveness are not adequate. First, OSHA has not developed performance goals or measures to assess the performance of the program. Second, OSHA contracted for a study of the VPP to evaluate its effectiveness, but the study was flawed. OSHA has not developed performance goals or measures for the VPP to assess the program’s performance. The Government Performance and Results Act of 1993 requires agencies to set goals and report annually on program performance by measuring the degree to which the program achieves those goals. OSHA officials told us that, while they have not established specific goals for the VPP, the best measure of program performance is that VPP participants consistently report average injury and illness rates that are about 50 percent below their industries’ average rates. However, these rates may not be the best measure of performance. First, our analysis of OSHA’s annual summary reports of injury and illness rates for 2003 through 2007 showed that, for 35 percent of the sites in our sample for which data were available, there were discrepancies between the injury and illness rates reported by the sites and the rates noted in OSHA’s regional on-site review reports for the same time periods. 
For example, OSHA’s 2007 summary report showed that one VPP site reported an injury and illness rate of zero, but OSHA found during its on-site review that the rate was actually 1.7 for the same period. Second, OSHA has not evaluated the impact of the VPP on sites’ injury and illness rates, such as comparing VPP sites’ injury and illness rates with those of similar sites that do not participate in the program. OSHA also does not use information reported annually by VPP sites to develop goals or measures that could be used to assess program performance. VPP participants are required to conduct annual self-assessments of their sites and to report this information to OSHA. The reports are to contain a review of the site’s safety and health management system, including safety and health hazards identified and the steps taken to correct them; a description of any significant management changes that can affect safety and health at the site, such as changes in ownership; and information on benefits related to participation in the VPP, such as cost savings due to lower workers’ compensation insurance premiums, decreased turnover and absenteeism, and increased productivity. However, OSHA’s national office does not use the information from these reports because most of this information is maintained in the regional offices, and the regions are not required to send it to the national office. In response to a recommendation in our 2004 report that the agency evaluate the effectiveness of the VPP, OSHA contracted with The Gallup Organization to study the effectiveness of the program—the results of which were reported in September 2005. As part of this study, OSHA identified two objectives that included (1) determining the impact of its outreach and mentoring programs on potential and new VPP sites’ safety and health systems and (2) determining changes in the VPP sites’ injury and illness rates due to their participation in the program. 
To obtain information for this study, The Gallup Organization sent a questionnaire to all VPP sites participating in the federally managed program. However, the study had significant design flaws. Specifically, the response rates by participants were low (46 percent responded overall, and only 34 percent completed the questionnaire), and the data reported by participants were not validated. In addition, a review of the sites’ mentoring and outreach efforts, which are not indicators of program performance, made up two-thirds of the report, and other factors that could have influenced the sites’ injury and illness rates were not considered or measured. Because of these limitations, we concluded that the report’s findings were not reliable or valid and could not be used to demonstrate the effectiveness of the VPP. In our discussions with OSHA officials, they acknowledged the limitations of the study, but said they have not conducted any additional evaluations of the VPP and have no plans to conduct future evaluations of the effectiveness of the program. Officials said they do not need to do so because the low injury and illness rates reported by VPP participants are the best measure of the program’s effectiveness. However, without a more reliable evaluation of the program, OSHA does not know whether the program is effectively meeting its objective of recognizing worksites with exemplary safety and health management systems that exceed OSHA’s standards. OSHA continues to expand the VPP, which adds to the responsibilities of staff who manage and maintain the integrity of the program and reduces the resources available to ensure that non-VPP sites comply with safety and health regulations and with OSHA’s standards. In the absence of policies that require its regional offices to document information regarding actions taken in response to fatalities and serious injuries at VPP sites, OSHA cannot ensure that only qualified sites participate in the program. 
In addition, some sites with serious safety and health deficiencies that contributed to fatalities have remained in the program, which has affected its integrity. Without sufficient oversight and internal controls, OSHA’s national office cannot be assured that the regional offices are following VPP policies. Finally, because OSHA lacks performance goals and measures to use in assessing the performance of the VPP, it continues to expand the program without knowing its effect on employer worksites, such as whether participation in the VPP has improved workers’ safety and health. To ensure proper controls and measurement of program performance, the Secretary of Labor should direct the Assistant Secretary for Occupational Safety and Health to take the following three actions: develop a documentation policy regarding information on follow-up actions taken by OSHA’s regional offices in response to fatalities and serious injuries at VPP sites; establish internal controls that ensure consistent compliance by the regions with OSHA’s VPP policies for conducting on-site reviews and monitoring injury and illness rates so that only qualified worksites participate in the program; and establish a system for monitoring the performance of the VPP by developing specific performance goals and measures for the program. We provided a draft of this report to the Secretary of Labor for comment. We received written comments from the Assistant Secretary for Occupational Safety and Health, which are reproduced in their entirety in appendix II. The agency also provided technical comments, which we incorporated in the report as appropriate. OSHA agreed with our recommendations to develop better documentation requirements and strengthen internal controls to ensure consistent compliance with VPP policies across its regions. 
Regarding our recommendation to develop performance goals and measures for the VPP to use in monitoring performance, OSHA stated that it would continue to identify and refine appropriate performance measures for the program. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to relevant congressional committees, the Secretary of Labor, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To identify the number and characteristics of employer worksites in the Voluntary Protection Programs (VPP), we analyzed data in the Department of Labor’s Occupational Safety and Health Administration (OSHA) VPP database. We reviewed data in OSHA’s VPP database for all sites in the VPP—both those in the federally managed program and in the VPP programs managed by the states. We analyzed data on VPP participation activity from the inception of the program in 1982 through the end of calendar year 2008. Prior to our analysis, we assessed the reliability of the information in OSHA’s VPP database by interviewing OSHA officials; reviewing related documentation, including the data system user manual; and conducting electronic testing of the data. On the basis of our review of the database, we found that the data were sufficiently reliable to report the number and characteristics of participants in the VPP. 
To determine the factors that contributed to growth in program participation, we obtained information about the VPP from officials at OSHA’s national office and the 10 regional offices. To enhance our understanding of the VPP from the perspective of the participants, we interviewed employees, including union and nonunion employees at VPP sites as well as employees from sites that elected not to participate in the VPP. To determine the extent to which OSHA ensures that only qualified worksites participate in the VPP, we reviewed OSHA’s internal controls for the program and limited our review to VPP sites in the federally managed program that were part of the Star program. We reviewed sites in the federally managed program because they represent over 70 percent of the sites in the program—1,543 of the 2,174 sites—and because the policies and practices for the state managed programs differ from state to state. We reviewed sites in the Star program because they represented more than 95 percent of sites in the federally managed VPP at the time of our review, and because the Star program has the most stringent requirements. To assess OSHA’s internal controls, we compared OSHA’s VPP Policies and Procedures Manual with GAO’s Standards for Internal Control in the Federal Government. We also reviewed OSHA’s policies and procedures for the federal VPP, including (1) procedures for on-site reviews of VPP sites, (2) annual reporting requirements for VPP sites to report data to the regions, and (3) requirements for regional offices to report information to OSHA’s national office. To determine the extent to which OSHA complied with its procedures in approving initial and renewing VPP participants, we reviewed OSHA’s VPP files for a randomly selected, representative sample of VPP sites in the program as of April 2008. Estimated percentages derived from this sample have confidence intervals of no more than +/- 7 percent. 
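As a rough, independent check (our own back-of-the-envelope arithmetic, not GAO's sampling methodology), the standard normal-approximation formula for a proportion's margin of error shows that a sample of 184 sites is consistent with the stated +/- 7 percent bound once a finite-population correction is applied; the population figure of roughly 1,500 federally managed Star sites used below is an assumption drawn from the counts cited earlier.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96,
                    population: int = 0) -> float:
    """Normal-approximation margin of error for a sample proportion.
    p = 0.5 gives the most conservative (widest) interval; a nonzero
    population size applies the finite-population correction."""
    me = z * math.sqrt(p * (1 - p) / n)
    if population:
        me *= math.sqrt((population - n) / (population - 1))
    return me

# 184 sampled sites, first without correction, then assuming a
# population of roughly 1,500 federally managed Star sites:
print(f"+/- {margin_of_error(184) * 100:.1f} points")                    # -> +/- 7.2 points
print(f"+/- {margin_of_error(184, population=1500) * 100:.1f} points")   # -> +/- 6.8 points
```

The uncorrected figure slightly exceeds 7 points; the correction for sampling from a finite population of sites brings it inside the bound the report states.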
The files, maintained by OSHA’s regional offices, contained reports of the regions’ on-site reviews of VPP sites. We reviewed the reports of the reviews conducted prior to the sites’ initial acceptance and, if they had been in the program long enough to be reapproved, the most recent review conducted. We reviewed the VPP files and interviewed officials at OSHA’s regional offices in Atlanta, Boston, Dallas, New York, and Philadelphia. We selected these sites to obtain a geographic range of regional offices with small, medium, and large numbers of VPP sites. We interviewed officials in the five remaining regional offices in Chicago, Denver, Kansas City, San Francisco, and Seattle by telephone and had them send the VPP files for their sites to us for review. To determine what actions OSHA took in response to fatalities at VPP sites, we asked OSHA’s national office for a list of all sites at which fatalities occurred from January 2003 to October 2008. The national office asked the regions to provide this information, and the national office provided this information to us. We reviewed the inspection and VPP files maintained by the regional offices for these sites and interviewed VPP managers about the actions taken by the regions in response to the fatalities. Finally, we reviewed other information provided by the regional offices to the national office, such as data on the injury and illness rates for each VPP site that are reported by the sites annually to OSHA and tracked by the national office on electronic spreadsheets. To assess the adequacy of OSHA’s efforts to assess the performance and effectiveness of the VPP, we reviewed its policies and procedures, performance and accountability reports, operating plans, and The Gallup Organization’s 2005 evaluation report of the VPP. We reviewed these documents relative to the guidelines in the Government Performance and Results Act of 1993. 
To verify the injury and illness rates reported by VPP sites to OSHA’s regions in the sites’ annual reports, we compared the data tracked by the national office on sites’ injury and illness rates with the rates reported in OSHA’s on-site reviews for the sites in our sample of 184 sites. We assessed the Gallup study on the basis of commonly accepted program evaluation standards. We conducted this performance audit from March 2008 through May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Revae Moran, Acting Director, and Anna M. Kelley, Analyst in Charge, managed all aspects of the assignment. Kathleen Boggs, Richard Harada, Yumiko Jolly, and Summer Pachman made significant contributions to the report. In addition, Richard Brown, Doreen Feldman, Justin Fisher, Cindy Gilbert, Sheila R. McCoy, Kathleen van Gelder, Gabriele Tonsil, and Shana Wallace provided key technical and legal assistance.
The Department of Labor's Occupational Safety and Health Administration (OSHA) is responsible for ensuring workplace safety. OSHA has established a number of programs, including the Voluntary Protection Programs (VPP), that take a cooperative approach to obtaining compliance with safety and health regulations and OSHA's standards. OSHA established the VPP in 1982 to recognize worksites with exemplary safety and health programs. GAO was asked to review (1) the number and characteristics of employer worksites in the VPP and factors that have influenced growth, (2) the extent to which OSHA ensures that only qualified worksites participate in the VPP, and (3) the adequacy of OSHA's efforts to monitor performance and evaluate the effectiveness of the VPP. GAO analyzed OSHA's VPP data, reviewed a representative sample of VPP case files, and interviewed agency officials. The VPP has grown steadily since its inception in 1982, with the number of employer worksites in the program more than doubling--from 1,039 sites in 2003 to 2,174 sites in 2008. Although industries represented have not changed significantly, with the chemical industry having the largest number of sites in the VPP, the number of sites in the motor freight transportation industry--which includes U.S. Postal Service sites--increased tenfold from 2003 to 2008. The proportion of smaller VPP sites--those with fewer than 100 workers--increased from 28 percent in 2003 to 39 percent in 2008. Key factors influencing growth of the VPP have been OSHA's emphasis on expansion of the program and VPP participants' outreach to other employers. OSHA's internal controls are not sufficient to ensure that only qualified worksites participate in the VPP. The lack of a policy requiring documentation in VPP files regarding follow-up actions taken in response to incidents, such as fatalities and serious injuries, at VPP sites limits the national office's ability to ensure that its regions have taken the required actions. 
Such actions include reviewing sites' safety and health systems and determining whether sites should remain in the program. GAO reviewed OSHA's VPP files for the 30 sites that had fatalities from January 2003 to August 2008 and found that the files contained no documentation of actions taken by the regions' VPP staff. GAO interviewed regional officials and reviewed the inspection files for these sites and found that some sites had safety and health violations related to the fatalities, including one site with seven serious violations. As a result, some sites that no longer met the definition of an exemplary worksite remained in the VPP. In addition, OSHA's oversight is limited because it does not have internal controls, such as reviews by the national office, to ensure that regions consistently comply with VPP policies for monitoring sites' injury and illness rates and conducting on-site reviews. For example, the national office has not ensured that regions follow up as required when VPP sites' injury and illness rates rise above the minimum requirements for the program, including having sites develop plans for reducing their rates. Finally, OSHA has not developed goals or measures to assess the performance of the VPP, and the agency's efforts to evaluate the program's effectiveness have been inadequate. OSHA officials said that low injury and illness rates are effective measures of performance. These rates, however, may not be the best measures because GAO found discrepancies between the rates reported by worksites annually to OSHA and the rates OSHA noted during its on-site reviews. In addition, OSHA has not assessed the impact of the VPP on sites' injury and illness rates. In response to a recommendation in a GAO report issued in 2004, OSHA contracted with a consulting firm to conduct a study of the program's effectiveness. However, flaws in the design of the study and low response rates made it unreliable as a measure of effectiveness. 
OSHA officials acknowledged the study's limitations but had not conducted or planned other evaluations of the VPP.
Early childhood is a key period of development in a child’s life and one in which services are likely to have long-term benefits. Recent research has underscored the need to focus on this period to improve children’s intellectual development, language development, and school readiness. Early childhood programs serve children from infancy through age 5. The range of services includes education and child development, child care, referral for health care or social services, and speech or hearing assessment as well as many other kinds of services or activities. The two largest programs are Head Start (approximately $4 billion), administered by HHS, and Special Education programs (approximately $1 billion), administered by Education. Head Start provides education and developmental services to young children, and the Special Education-Preschool Grants and Infants and Families program provides preschool education and services to young children with disabilities. Although these programs target different populations, use different eligibility criteria, and provide a different mix of services to children and families, there are many similarities in the services they provide. Figure 1 illustrates the federal agencies responsible for federal early childhood funding. Early childhood programs were included in the list of more than 30 programs our governmentwide performance and accountability report cited to illustrate the problem of fragmentation and program overlap. Virtually all the results that the government strives to achieve require the concerted and coordinated efforts of two or more agencies. However, mission fragmentation and program overlap are widespread, and programs are not always well coordinated. This wastes scarce funds, frustrates taxpayers, and limits overall program effectiveness. The Results Act is intended to improve the management of federal programs by shifting the focus of decision-making and accountability from the number of grants and inspections made to the results of federal programs. 
The act requires executive agencies, in consultation with the Congress and other stakeholders, to prepare strategic plans that include mission statements and goals. Each strategic plan covers a period of at least 5 years forward from the fiscal year in which the plan is submitted. It must include the following six key elements: (1) a comprehensive mission statement covering the major functions and operations of the agency; (2) a description of general goals and objectives for the major functions and operations of the agency; (3) a discussion of how these goals and objectives will be achieved and the resources that will be needed; (4) a description of the relationship between performance goals in the annual performance plan and general goals and objectives in the strategic plan; (5) a discussion of key factors external to the agency that could significantly affect the achievement of the general goals and objectives; and (6) a description of program evaluations used to develop the plan and a schedule for future evaluations. The act also requires agencies to prepare annual performance plans that set annual performance goals and describe the means the agency will use to verify and validate its performance data. In addition, each agency must report annually on the extent to which it is meeting its annual performance goals and the actions needed to achieve or modify goals that have not been met. The first report, due by March 31, 2000, will describe the agencies’ fiscal year 1999 performance. The Results Act provides a valuable tool to address mission fragmentation and program overlap. The act’s emphasis on results implies that federal programs contributing to the same or similar outcomes are expected to be closely coordinated, consolidated, or streamlined, as appropriate, to ensure that goals are consistent and that program efforts are mutually reinforcing. As noted in OMB guidance and in our recent reports on the act, agencies should identify multiple programs within or outside the agency that contribute to the same or similar goals and describe their efforts to coordinate. 
Just as importantly, the Results Act’s requirement that agencies define their mission and desired outcomes, measure performance, and use performance information provides multiple opportunities for the Congress to intervene in ways that could address mission fragmentation. As missions and desired outcomes are determined, instances of fragmentation and overlap can be identified and appropriate responses can be defined. For example, by emphasizing the intended outcomes of related federal programs, the plans might allow identification of legislative changes needed to clarify congressional intent and expectations or to address changing conditions. As performance measures are developed, the extent to which agency goals are complementary and the need for common performance measures to allow for crossagency evaluations can be considered. For example, common measures of outcomes from job training programs could permit comparisons of programs’ results and the tools used to achieve those results. As continued budget pressures prompt decisionmakers to weigh trade-offs inherent in resource allocation and restructuring decisions, the Results Act can provide the framework to integrate and compare the performance of related programs to better inform choices among competing budgetary claims. The outcome of using the Results Act in these ways might be consolidation that would reduce the number of multiple programs, but it might also be a streamlining of program delivery or improved coordination among existing programs. Where multiple programs remain, coordination and streamlining would be especially important. Multiple programs might be appropriate because a certain amount of redundancy in providing services and targeting recipients is understandable and can be beneficial if it occurs by design as part of a management strategy. 
Such a strategy might be chosen, for example, because it fosters competition, provides better service delivery to customer groups, or provides emergency backup. Education and HHS’s ACF—the two agencies that are responsible for the majority of early childhood program funds—addressed early childhood programs in their strategic and 1999 performance plans. Although both agencies’ plans generally addressed the required elements for strategic and performance plans, Education’s plans provided more detailed information about performance measures and coordination strategies. The agencies in their 2000 plans similarly addressed the required elements for performance plans. However, strategies and activities that relate to coordination were not well defined. Although agencies state that some coordination occurs, they have not yet fully described how they will coordinate their efforts. The Education plan provided a more detailed description of coordination strategies and activities for early childhood programs than the ACF plan, including some performance measures that may cut across programs. The ACF plan described in general terms the agency’s plans to coordinate with external and internal programs dealing with early childhood goals. Yet the information presented in the plans did not provide the level of detail, definition, and identification of complementary measures that would facilitate comparisons of early childhood programs. Education’s strategic plan noted, for example, that research on early brain development reveals that if some learning experiences are not introduced to children at an early age, the children will find learning more difficult later; that children who enter school ready to learn are more likely to achieve high standards than children who are inadequately prepared; and that high-quality preschool and child care are integral in preparing children adequately for school. 
Early childhood issues were discussed in the plan’s goal to “build a solid foundation for learning for all children” and in one objective and two performance indicators (see table 1). The 1999 performance plan, Education’s first performance plan, followed from the strategic plan. It clearly identified programs contributing to Education’s early childhood objective and set individual performance goals for each of its programs. Paralleling the strategic plan, the performance plan specified the core strategies Education intended to use to achieve its early childhood goal and objective. Among these was interagency coordination, particularly with HHS’s Head Start program. According to Education’s strategic plan, this coordination was intended to ensure that children’s needs are met and that the burden on families and schools working with multiple providers is reduced. The performance plan also said that Education would work with HHS and other organizations to incorporate some common indicators of young children’s school readiness into their programs. It would also work with HHS more closely to align indicators of progress and quality between HHS’s Head Start program and its Even Start Family Literacy program—which has as part of its goal the integration of early childhood education, adult literacy or adult basic education, and parenting education. Education’s plan also noted that coordination with other federal agencies enables it to better serve program participants and reduce inefficiencies in service delivery. We said that although this first plan included a great deal of valuable information, it did not provide sufficient details, such as a more complete picture of intended performance across the department, a fuller portrayal of how its strategies and resources would help achieve the plan’s performance goals, and better identification of significant data limitations and their implications for assessing the achievement of performance goals. These observations apply to the early childhood programs as well. 
Without this additional detail, policymakers are limited in their ability to make decisions about programs and resource allocation within the department and across agencies. Education’s 2000 performance plan continues to demonstrate the department’s commitment to the coordination of its early childhood programs. Like the 1999 performance plan, the sections on early childhood programs clearly identified programs contributing to its childhood program objectives. It also contained new material highlighting the importance of the coordination of early childhood programs as a crosscutting issue, particularly with HHS. To facilitate collaboration, the department added a strategy to work with the states to encourage interagency agreements at the state level. It also added using the Federal Interagency Coordinating Council to coordinate strategies for children with disabilities and their families. At the same time, the department still needs to better define its objectives and performance measures for crosscutting issues. Unless the purpose of coordination activities is clearly defined and tied to measurable outcomes, it will be difficult to make progress in the coordination of programs across agencies. ACF’s strategic plan addressed early childhood in one goal—promoting the “development, safety, and well-being of children and youth”—and three objectives (see table 2). The ACF plan, however, did not always give a clear picture of intended performance of its programs and often failed to identify the strategies the agency would use to achieve its performance goals. ACF programs that contribute to each early childhood objective were identified, and several of these programs had individual performance goals. However, without a clear picture of intended program goals and performance measures for crosscutting early childhood programs, it will be difficult to compare programs across agencies and assess the federal government’s overall efficacy in fostering early childhood development. The ACF plan also stated that the agency would coordinate with internal and external stakeholders in this area. 
However, it did not define how this coordination will be accomplished or the means by which the crosscutting results will be measured. Agency officials are able to describe numerous activities that demonstrate collaboration within the agency and with Education. The absence of that discussion in the plan, however, limits the value the Results Act could have to both improving agency management and assisting the Congress in its oversight role. Progress in coordinating crosscutting programs is still in its infancy, although agencies are recognizing its importance. Agency performance plans provide the building blocks for recognizing crosscutting efforts. Because of the iterative nature of performance-based management, however, more than one cycle of performance plans will probably be required in the difficult process of resolving program fragmentation and overlap. Mr. Chairman, this concludes my prepared statement. We would be happy to answer any questions that you or Members of the Subcommittee may have. Government Management: Addressing High Risks and Improving Performance and Accountability (GAO/T-OCG-99-23, Feb. 10, 1999). Head Start: Challenges Faced in Demonstrating Program Results and Responding to Societal Changes (GAO/T-HEHS-98-183, June 9, 1998). The Results Act: Observations on the Department of Education’s Fiscal Year 1999 Annual Performance Plan (GAO/HEHS-98-172R, June 8, 1998). The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20, Apr. 1, 1998). Managing for Results: Observations on Agencies’ Strategic Plans (GAO/T-GGD-98-66, Feb. 12, 1998). Managing for Results: Agencies’ Annual Performance Plans Can Help Address Strategic Planning Challenges (GAO/GGD-98-44, Jan. 30, 1998). Child Care: Federal Funding for Fiscal Year 1997 (GAO/HEHS-98-70R, Jan. 23, 1998). Federal Education Funding: Multiple Programs and Lack of Data Raise Efficiency and Effectiveness Concerns (GAO/T-HEHS-98-46, Nov. 6, 1997). 
At-Risk and Delinquent Youth: Multiple Programs Lack Coordinated Federal Effort (GAO/T-HEHS-98-38, Nov. 5, 1997). Managing for Results: Using the Results Act to Address Mission Fragmentation and Program Overlap (GAO/AIMD-97-146, Aug. 29, 1997). The Results Act: Observations on the Department of Education’s June 1997 Draft Strategic Plan (GAO/HEHS-97-176R, July 18, 1997). The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997). Early Childhood Programs: Multiple Programs and Overlapping Target Groups (GAO/HEHS-95-4FS, Oct. 31, 1994).
Pursuant to a congressional request, GAO discussed how Congress can use the Government Performance and Results Act and agency performance plans to oversee early childhood programs, focusing on: (1) how the Results Act can assist in management and congressional oversight, especially in areas where there are multiple programs; and (2) how the Departments of Education and Health and Human Services (HHS)--which together administer more than half of the federal early childhood program funds--addressed early childhood programs in their strategic and fiscal year 1999 and 2000 performance plans and the extent to which recent plans show progress in coordinating early childhood programs. GAO noted that: (1) Congress can use the Results Act to improve its oversight of crosscutting issues because the act requires agencies to develop strategic and annual performance plans that clearly specify goals, objectives, and measures for their programs; (2) the Office of Management and Budget has issued guidance saying that for crosscutting issues, agencies should describe efforts to coordinate federal programs contributing to the same or similar outcomes so that goals are consistent and program efforts are mutually reinforcing; (3) when GAO looked at the Education and HHS plans, it found that the plans are not living up to their potential as expected from the Results Act; (4) more specifically, while the fiscal year 1999 and 2000 plans to some extent addressed coordination, the departments have not yet described in detail how they will coordinate or consolidate their efforts; and (5) therefore, the potential for addressing fragmentation and duplication has not been realized, and GAO cannot assess whether the agencies are effectively working together on crosscutting issues.
Carrier strike groups are typically centered around an aircraft carrier and its air wing, and also include a guided missile cruiser; two guided missile destroyers; a frigate; an attack submarine; and one or more supply ships with ammunition, fuel, and supplies (such as food and spare parts). These groups are formed and disestablished on an as-needed basis, and their compositions may differ though they contain similar types of ships. Figure 1 shows a carrier strike group sailing in a group formation as it prepares to participate in an exercise. Prior to the September 11, 2001, terrorist attacks, only those Navy ships and air squadrons at peak readiness were deployed overseas, usually for 6 months at a time. Most of the Navy’s remaining units were not available because they were in early stages of their maintenance or training cycles, or because the Navy did not have good visibility of the readiness of these units. This prompted the Chief of Naval Operations in March 2003 to task the Commander, Fleet Forces Command, to develop the Fleet Response Plan concept to enhance the Navy’s surge capability. The Chief of Naval Operations approved the concept in May 2003 and further directed the Commander, Fleet Forces Command, to be responsible and accountable for effectively implementing the plan. The Fleet Response Plan emphasizes an increased level of readiness and the ability to quickly deploy naval forces to respond to crises, conflicts, or homeland defense needs. The plan applies broadly to the entire fleet; however, it only sets specific requirements for carrier strike groups. For example, the plan calls for eight carrier strike groups to be ready to deploy within 90 days of notification. Six of them would be available to deploy within 30 days and the other two within 90 days. This is commonly referred to as the 6 + 2 goal. 
Under the Fleet Response Plan, the Navy has developed a surge capability schedule that it uses to manage and identify the level of training a ship has completed and its readiness to deploy. The schedule contains three progressive readiness goals: emergency surge, surge-ready, and routine deployable status. Each readiness goal specifies phases of training that must be completed to achieve the goal. To be placed in emergency surge status, a ship or an air squadron needs to have completed its unit-level phase training. Achieving surge-ready status requires the completion of integrated phase training. Attaining routine deployable status requires achievement of all necessary capabilities, completion of underway sustainment phase training, and certification of the unit for forward deployed operations. The surge capability schedule provides a readiness snapshot for each ship, allowing decision makers to quickly determine which ships are available to meet the needs of the mission. Figure 2 illustrates how the Navy notionally identifies the eight aircraft carriers available for surge deployments. The carriers numbered 1 through 6 are expected to be ready to deploy within 30 days’ notice. The carriers labeled “+1” and “+2” are expected to be able to surge within 90 days’ notice. The six surge-ready carriers include two carriers on deployment (numbered 3 and 4), one carrier that is part of the forward deployed naval force based in Japan (number 6), and three carriers in the sustainment phase (numbered 1, 2, and 5). These six carriers are expected to have completed postdeployment depot-level maintenance and their unit-level phase training. The two additional surge carriers are expected to have completed depot-level maintenance but not to have completed unit-level phase training. The remaining four carriers are in the maintenance phase or deep maintenance. 
Based on the Navy’s experiences during the past 2 years, Fleet Forces Command has convened a cross-functional working group to develop a refined version of the Fleet Response Plan. This update, known as Fleet Response Plan-Enhanced, is intended to further define the Fleet Response Plan, modify terminology for progressive readiness states to better reflect their meaning, tie in elements such as a human capital strategy, and expand the focus of the plan beyond carrier strike groups to the entire Navy. It may also extend the Fleet Response Plan’s current employment cycle length of 27 months. The Fleet Response Plan-Enhanced is still under development at this time. The Navy’s management approach in establishing the Fleet Response Plan as its new readiness construct has not fully incorporated sound management practices needed to effectively guide, monitor, and assess implementation. Studies by several organizations have shown that successful organizations in both the public and private sectors use sound management practices to assist agencies in measuring performance, reporting results, and achieving desired outcomes. These practices provide management with a framework for effectively implementing and managing programs and shift program management focus from measuring program activities and processes to measuring program outcomes. Sound management practices include (1) establishing a coherent mission and integrated strategic goals to guide the transformation, including resource commitments; (2) setting implementation goals and a timeline to build momentum and show progress from day one; and (3) establishing a communication strategy to create shared expectations and report related progress. The Navy’s implementation of the Fleet Response Plan has included some aspects of these practices. 
For example, the Navy has established some strategic goals needed to meet the intent of the plan, such as the progressive readiness levels of emergency surge, surge-ready, and routine deployable status. The Navy also has established specific training actions to support these goals, such as requiring carrier strike groups to complete unit-level training to be certified as emergency surge-ready. However, other actions taken by the Navy do not fully incorporate these practices. For example, the Navy has identified the 6 + 2 surge capability as a readiness goal and performance measure for carrier strike groups, but no such goal was established for the rest of the fleet. The Navy also has some unofficial goals and performance measures regarding manning and maintenance, but it has not formally established them. For example, briefings on the Fleet Response Plan state that the Navy desires and needs fully manned ships (i.e., manning at 100 percent of a ship’s requirement) for the program to be successful. Moreover, according to Navy officials, the Navy has not established milestones for achieving its results. In addition, 2 years after initiating implementation of the Fleet Response Plan, the Navy still does not have an official written definition of the Fleet Response Plan that clearly establishes a coherent mission and integrated strategic goals to guide the transformation, including resource commitments. This definition would describe the Fleet Response Plan’s total scope and contain guidance with formal goals and performance measures. The Navy recently has taken some action to address this area. In February 2005, the Navy directed the Center for Naval Analyses to conduct a study to develop formal definitions and guidance as well as identify goals and performance measures for the plan. 
However, it remains to be seen whether this study will be completed as planned by November 2005; if it will recommend developing and implementing sound management practices, such as goals, measures, milestones, and timelines; and whether any management improvement recommendations made in the study will be implemented by the Fleet Forces Command, the Navy command responsible for implementing the Fleet Response Plan. Without goals, performance measures, timelines, milestones, benchmarks, and guidance to help effectively manage implementation of the Fleet Response Plan and determine if the plan is achieving its goals, the Navy may find it more difficult to implement the Fleet Response Plan across the entire naval force. Moreover, despite the Navy’s unofficial goal that the Fleet Response Plan be budget neutral, as articulated in briefings and by senior leaders, the Navy has not yet clearly identified the resources needed to achieve its goals or provided a rationale for how these resources will contribute to achieving the expected level of performance. Navy officials have said that current operations and maintenance funding levels, as well as manning at 100 percent of required positions, have contributed to successful implementation of the Fleet Response Plan. However, officials do not know what level of manning or funding is actually required for program success over the long term to avoid any unintended consequences, such as greater amounts of deferred maintenance. According to Navy officials, it is difficult to attribute costs to the plan because there is no single budget line item that tracks the costs associated with the Fleet Response Plan. Without knowing the funding needed, the Navy may not be able to assess the impact of possible future changes in funding on implementing the plan. 
Furthermore, without a comprehensive plan that links costs with performance measures and outcomes, neither the Navy nor Congress may be able to determine if the Fleet Response Plan is actually achieving its unofficial goal of being budget neutral. Finally, the Navy also has not developed a comprehensive communications strategy that reaches out to employees, customers, and stakeholders and seeks to genuinely engage them in a two-way exchange, which is a critical step in successfully implementing cultural change or transformation. We looked for formal mechanisms that communicated the details of the Fleet Response Plan and spoke with personnel from carrier strike groups, aircraft carriers, air wings and an air squadron, one surface combatant ship, and other command staff. We found that while the Fleet Response Plan was communicated extensively to senior-level officers, and the Navy provided numerous briefings and messages related to the plan, communication and understanding of the plan did not flow through to the lower ranks. While the concept of the Fleet Response Plan is generally understood by some senior-level officials, many of the lower grade personnel on these ships were unaware of the scope, goals, and other aspects of the plan. In the absence of clear communication throughout the fleet via an overall communications strategy that could increase employee awareness of the Fleet Response Plan, its successful implementation could be impeded. Sound management practices, such as those noted above, were not fully used by the Navy because senior leaders wanted to quickly implement the Fleet Response Plan in response to the Chief of Naval Operations’ desires. 
However, without an overall management plan containing all of these elements to guide the implementation of such a major change, it may be difficult for the Navy and Congress to determine the extent to which the Fleet Response Plan is achieving the desired results, measure its overall progress, or determine the resources needed to implement the plan. The Navy has not fully tested and evaluated the Fleet Response Plan or developed lessons learned to identify the effectiveness of its implementation and success over time. The methodical testing, exercising, and evaluation of new doctrines and concepts is an established practice throughout the military to gain insight into how systems and capabilities will perform in actual operations. However, instead of methodically conducting realistic tests to evaluate the Fleet Response Plan, the Navy has tried to demonstrate the viability of the plan by relying on loosely linked events that were not part of an overall test and evaluation strategy, which impairs the Navy’s ability to validate the plan and evaluate its success over time. In addition, the Navy has not used its lessons learned system to share the results of its Fleet Response Plan tests or as an analytical tool to evaluate the progress of the plan and improve implementation, which limits the Navy’s ability to identify and correct weaknesses across the fleet. Methodically testing, exercising, and evaluating new doctrines and concepts is an important and established practice throughout the military. DOD has long recognized the importance of using tabletop exercises, war games, and experimentation to explore military doctrine, operational concepts, and organizational arrangements. Collectively, these tests and experiments can provide important insight into how systems and capabilities will perform in actual operations. U.S. Joint Forces Command, which has lead responsibility for DOD experimentation on new concepts of operation and technologies, states that its experimental efforts aim to foster military innovation and improvement by exploring, developing, and transferring new concepts and organizational ideas into operational reality. Particularly large and complex issues may require long-term testing and evaluation that is guided by study plans. Joint Forces Command’s Joint Warfighting Center has an electronic handbook that provides guidance for conducting exercises and lays out the steps in an exercise life cycle: design; planning; preparation; execution; and analysis, evaluation, and reports. The Army also has well-established guidance governing service studies, analyses, and evaluations that the Navy considers representative of best practices for military operations research. This guidance provides an important mechanism through which problems pertaining to critical issues and other important matters are identified and explored to meet service needs. As shown in figure 3, the Army’s process involves six major steps that create a methodical process for developing, conducting, documenting, and evaluating a study. Following a formal study process enables data evaluation and development of lessons learned that could be used to build on the existing knowledge base. 
In a roundtable discussion with the Fleet Forces Command on the rationale behind Summer Pulse 2004, the Navy’s major exercise for the Fleet Response Plan, a senior Navy official stated, “From the concept, … you need to exercise, … you need to practice, … you need to demonstrate it to know you got it right and what lessons are there to learn from how we did it.” Other governmental agencies, like GAO, and the private sector also rely on detailed study plans, or data collection and analysis plans, to guide the development of studies and experiments and the collection and analysis of data, and to provide a feedback loop that links the outcomes of the study or experiment event and subsequent analysis to the original goals and objectives of the study or event. GAO guidance states that data collection and analysis plans “should carry forward the overall logic of the study so that the connection between the data that will be collected and the answers to the study questions will become evident.” Recent Navy guidance also recognizes the need for a thorough evaluation of complex initiatives. In April 2005, the Navy issued a Study Planning and Conduct Guide assembled by the Navy Warfare Development Command. This guide stresses the importance of establishing a long-range plan for complex and novel problems and lays out the rationale for detailed study plans for exercises and experiments, as they establish a structure in which issues are explored and data are collected and analyzed in relation to the established goals or objectives for the event. Furthermore, the Navy’s guide notes that random, inadequately prepared events and a determination just to study the problem do not lead to successful resolution of problems that may arise in programs and concepts that the Navy is testing and evaluating. 
The Navy has not methodically conducted realistic tests of the Fleet Response Plan to demonstrate the plan’s viability and evaluate its progress and success over time, instead relying on loosely linked events and some routine data to demonstrate the viability of the plan. The events identified by the Navy as successful tests of the Fleet Response Plan are Summer Pulse 2004, the emergency deployment of the U.S.S. Abraham Lincoln, and Global War on Terrorism Surge 2005, but of these events only Summer Pulse 2004 was driven by the Fleet Response Plan with the intent of demonstrating that large numbers of ships could be surged. In addition, these events were not part of an overall test and evaluation strategy that yielded specific information from which to assess the value of the plan in increasing readiness and meeting the new 6 + 2 surge capability goal for carrier strike groups. Summer Pulse 2004 encompassed a number of previously scheduled deployments, exercises, and training events that took place between June and August of 2004. The intent of Summer Pulse 2004 was to demonstrate the Fleet Response Plan’s new readiness construct and the Navy’s ability to deploy multiple carrier strike groups of varying levels of readiness. However, Summer Pulse 2004 was not a methodical and realistic test of the Fleet Response Plan for three reasons. First, Summer Pulse 2004 did not follow best practices regarding study plans and the ability to evaluate the impact and outcomes of the plan. The Navy did not develop a formal study plan identifying study objectives, data collection requirements, and analysis, or produce a comprehensive after-event report describing the study’s findings. Navy officials have stated that the elements of a formal study plan were there for the individual deployments, exercises, and training events constituting Summer Pulse 2004, but were not brought together in a single package. 
While the Navy may have had the study elements present for the individual exercises, they were not directly linked to testing the Fleet Response Plan. Without such a comprehensive study plan and overall evaluation, the Navy cannot discern potential impacts on fleet readiness, maintenance, personnel, and other issues that are critical to the Fleet Response Plan’s long-term success. Second, Summer Pulse 2004 was not a realistic test because all participating units had several months’ warning of the event. As a result, five carriers were already scheduled to be at sea and only two had to surge. Because six carriers are expected to be ready to deploy with as little as 30 days’ notice under the plan and two additional carriers within 90 days, a more realistic test of the Fleet Response Plan would include no-notice or short-notice exercises. Such exercises conducted without advance notification to the participants would provide the highest degree of challenge and realism. Without such exercises, the Navy might not be able to realistically practice and coordinate a full surge deployment. Third, Summer Pulse 2004 was not a sufficient test because the Navy involved only seven carriers instead of the eight carriers called for in the plan. Therefore, it did not fully test the Navy’s ability to meet deployment requirements for the expected force. Another event cited by the Navy as evidence of the Fleet Response Plan’s success is the deployment of the U.S.S. Abraham Lincoln carrier strike group while it was in surge status in October 2004. Originally scheduled to deploy in the spring of 2005, the Lincoln was deployed early to support operations in the Pacific Command area of operation and provide aid to areas devastated by a tsunami in the Indian Ocean in December 2004. Navy officials said that the Fleet Response Plan enabled the Navy to identify a carrier to send to the Pacific and to quickly tailor its training package based on its progressive readiness status. 
The Navy touted this rapid response relief work by a strike group deployed during surge status as a Fleet Response Plan success story. We agree that the Lincoln carrier strike group was able to respond quickly. However, the extent to which this event realistically tested the Fleet Response Plan’s expectations for surging one carrier strike group is not known. As with Summer Pulse 2004, the Lincoln deployment was not a methodical test of the Fleet Response Plan because there was no plan to systematically collect or analyze data that would evaluate the outcomes of the Lincoln deployment against Fleet Response Plan-related study goals. The Navy also pointed to a third event, its recent Global War on Terrorism Surge 2005, as an indicator that the Fleet Response Plan works. The Global War on Terrorism surge was a response to a request for forces from which the Navy is looking to glean Fleet Response Plan-related information about what did and did not work when the ships return. However, this is not a good test of the Fleet Response Plan because there is no plan showing what specific data are being collected or what analytical approaches are being employed to assess the ships’ experiences. As of September 2005, no other events had been scheduled to further test and evaluate the Fleet Response Plan. The Navy has not developed the kind of comprehensive plans to test and evaluate the Fleet Response Plan as recommended by DOD and Navy guidance and best practices because Navy officials have stated that existing readiness reporting processes effectively evaluate the Fleet Response Plan’s success on a daily basis. They said after-action reports from training exercises and the Joint Quarterly Readiness Review assist with this function. Navy officials explained that they implemented the Fleet Response Plan the same way they had implemented the Inter-Deployment Training Cycle, the predecessor to the Fleet Response Plan’s Fleet Readiness Training Plan. 
While this may be true, the Inter-Deployment Training Cycle was focused on the specific training needed to prepare units for their next deployment, not on implementing a new readiness construct that emphasized surge versus routine deployments. Furthermore, the Inter-Deployment Training Cycle did not contain stated goals whose validity the Navy needed to test. In addition, ongoing readiness reports do not provide information on important factors such as costs, long-term maintenance implications, and quality of life issues. The Summer Pulse 2004, Lincoln surge deployment, and Global War on Terrorism Surge 2005 testing events were not part of a methodical test and evaluation approach. Therefore, the Navy is unable to convincingly use these events to evaluate the Fleet Response Plan and determine whether the plan has been successful in increasing readiness or achieving other goals. Moreover, without effective evaluation of the Fleet Response Plan, the Navy may be unable to identify and correct potential problem areas across the fleet. Without a comprehensive long-range plan that establishes methodical and realistic testing of the Fleet Response Plan, the Navy may be unable to validate the Fleet Response Plan operational concept, evaluate its progress and success over time, and ensure that it can effectively meet Navy goals over the long term without any adverse, unintended consequences for maintenance, quality of life, and fleet readiness. The formal Navy repository for lessons learned, the Navy Lessons Learned System, has not been used to disseminate Fleet Response Plan-related lessons learned or to analyze test results to evaluate the progress of the plan and improve implementation. 
The Navy Lessons Learned System has been designated by the Chief of Naval Operations as the singular Navy program for the collection, validation, and distribution of unit feedback as well as the correction of problems identified and derived from fleet operations, exercises, and miscellaneous events. However, there are no mechanisms or requirements in place to force ships, commands, and numbered fleet staffs to submit all lessons learned to the Navy Lessons Learned System, although such mechanisms exist for the submission of port visit and other reports. For the events that the Navy cites as tests of the Fleet Response Plan, it did not analyze and evaluate the results and produce formal lessons learned to submit to the Navy Lessons Learned System for recording and analysis. Any evaluation done of the testing events has not been incorporated into the Lessons Learned System, preventing comprehensive analyses of lessons learned and identification of problems and patterns across the fleet that may require a high-level, Navy-wide response. Some ship and carrier strike group staff informed us that they prefer informal means of sharing lessons learned because they feel the process through which ships and commands have to submit lessons learned for validation and inclusion in the database can be complex and indirect. This practice not only keeps ship and command staffs across the fleet from learning from the experiences of others but also deprives the Navy Lessons Learned System of the data it needs to perform such comprehensive, fleet-wide analyses. In addition, lessons learned are recorded by mission or exercise (e.g., Operation Majestic Eagle) and not by operational concept (e.g., the Fleet Response Plan), making identification of Fleet Response Plan-specific lessons learned difficult and inconsistent. 
Over the last 10 years, we have issued several reports related to lessons learned developed by the military. We have found that service guidance does not always require standardized reporting of lessons learned and that lessons learned are not being used in training or analyzed to identify trends and performance weaknesses. We emphasized that effective guidance and sharing of lessons learned are key tools used to institutionalize change and facilitate efficient operations. We found that despite the existence of lessons learned programs in the military services and the Joint Staff, units repeat many of the same mistakes during major training exercises and operations. Our current review indicates that the Navy still does not include all significant information in its lessons learned database. Therefore, Navy analysts cannot use the database to perform comprehensive analyses of operational concepts like the Fleet Response Plan to evaluate progress and improve implementation. Officials from the Navy Warfare Development Command stated that the Navy is currently drafting a new Chief of Naval Operations Instruction governing the Navy Lessons Learned System that will address some of these issues. Navy Warfare Development Command officials hope that the new instruction will result in several improvements over the current system. First, they would like to see a dual reporting system, so that lessons learned are simultaneously sent to the Navy Lessons Learned System for preliminary evaluation when they are submitted to the numbered fleets for validation. This would allow Navy Lessons Learned analysts to look at unvarnished data for patterns or issues of interest to the Chief of Naval Operations, without taking away the numbered fleets’ validation processes. In addition, officials would like to establish deadlines for the submission of lessons learned to ensure timeliness. 
Not only will these changes add value to the data stored in the Navy Lessons Learned System, but they will keep the data flowing while ensuring that data are actually submitted and not lost as they move up the chain of command. According to Navy Lessons Learned officials, other branches of the military already allow operators in the field to submit lessons learned directly to their lessons learned systems, enabling value-added analysis and the timely posting of information. By addressing these issues, the Navy can help ensure that the lessons learned process will become more efficient, be a command priority, and produce actionable results. Two years after implementing a major change in how it expects to operate in the future, the Navy has not taken all of the steps needed to enable the Navy or Congress to assess the effectiveness of the Fleet Response Plan. As the Navy prepares to implement the Fleet Response Plan across the entire naval force, it becomes increasingly important that the Navy effectively manages this organizational transformation so that it can determine if the plan is achieving its goals. The absence of a more comprehensive overarching management plan to implement the Fleet Response Plan has left essential questions about definitions, goals, performance measures, guidance, timelines, milestones, benchmarks, and resources unanswered, even though sound management practices recognize the need for such elements to successfully guide activities and measure outcomes. The absence of these elements could impede effective implementation of the Fleet Response Plan. Furthermore, without a comprehensive plan that links costs with performance measures and outcomes, neither the Navy nor Congress may be able to determine if the Fleet Response Plan is budget neutral. More effective communications throughout the fleet using an overall communications strategy could increase employee awareness of the plan and help ensure successful implementation. 
The Navy also has not developed a comprehensive long-range plan for testing and evaluating the Fleet Response Plan. Without a well-developed plan and methodical testing, the Navy may not be aware of all of the constraints to successfully surging its forces to crises in a timely manner. Moreover, the absence of an overarching testing and evaluation plan that provides for data collection and analysis may impede the Navy’s ability to use its testing events to determine whether the Fleet Response Plan has been successful in increasing readiness and to identify and correct problem areas across the fleet. Failure to document and record the results of testing and evaluation efforts in the Navy Lessons Learned System could limit the Navy’s ability to validate the value of the concept, identify and correct performance weaknesses and trends across the fleet, perform comprehensive analyses of lessons learned, and disseminate these lessons and analyses throughout the fleet. To facilitate successful implementation of the Fleet Response Plan, enhance readiness, and ensure that the Navy can determine whether the plan has been successful in increasing readiness and can identify and correct performance weaknesses and trends across the fleet, we recommend that the Secretary of Defense take the following two actions: Direct the Secretary of the Navy to develop a comprehensive overarching management plan based on sound management practices that will clearly define goals, measures, guidance, and resources needed for implementation of the Fleet Response Plan, to include the following elements: establishing or revising Fleet Response Plan goals that identify what Fleet Response Plan results are to be expected and milestones for achieving these results; developing implementing guidance and performance measures based on these goals; identifying the costs and resources needed to achieve each performance goal; and communicating this information throughout the Navy. 
Direct the Secretary of the Navy to develop a comprehensive plan for methodical and realistic testing and evaluation of the Fleet Response Plan. Such a comprehensive plan should include a description of the following elements: how operational tests, exercises, war games, experiments, deployments, and other similar events will be used to show the performance of the new readiness plan under a variety of conditions, including no-notice surges; how data will be collected and analyzed for these events and synthesized to evaluate program success and improvements; and how the Navy Lessons Learned System will collect and synthesize lessons from these events to avoid repeating mistakes and improve future operations. In written comments on a draft of this report, DOD generally concurred with our recommendations and cited actions it will take to implement the recommendations. DOD concurred with our recommendation that the Navy should develop a comprehensive overarching management plan based on sound management practices that would clearly define the goals, measures, guidance, and resources needed for successful implementation of the Fleet Response Plan, including communicating this information throughout the Navy. DOD noted that the Navy has already taken action or has plans in place to act on this recommendation, and described several specific accomplishments and ongoing efforts in this regard. DOD also noted that the Navy intends to communicate through message traffic, white papers, instructions, lectures, and meetings with Navy leadership. We agree that these means of communication are an important part of an effective communication strategy; however, we do not believe that these methods of communication constitute a systemic strategy to ensure communication at all personnel levels. 
We believe the Navy would benefit from a comprehensive communication strategy that builds on its ongoing efforts, but encompasses additional actions to ensure awareness of the plan throughout the Navy. DOD partially concurred with our recommendation to test and evaluate the Fleet Response Plan. DOD noted that it plans to use a variety of events and war games to evaluate the Fleet Response Plan, but it does not see a need to conduct no-notice surges to test the Fleet Response Plan. DOD stated that it believes no-notice surges are expensive and unnecessary and could lead to penalties on overall readiness and the ability to respond to emergent requirements. DOD also noted that the Navy has surged single carrier strike groups, expeditionary strike groups, and individual ships or units under the Fleet Response Plan, and it cited several examples of such surges. We commend the Navy’s plans to use a variety of events to evaluate the Fleet Response Plan and its use of the Navy Lessons Learned System to report and evaluate the lessons learned in the Global War on Terrorism Surge 2005 exercise held earlier this year. However, we continue to believe that no-notice surges are critical components of realistic testing and evaluation plans and that the benefits of such exercises can outweigh any additional costs associated with conducting such tests on a no-notice basis. Both we and Congress have long recognized the importance of no-notice exercises. For example, in a 1989 report, we noted that DOD was instituting no-notice exercises to assess the state of training of combatant commands’ staffs and components. In addition, in 1990 the Department of Energy conducted no-notice tests of security personnel in response to our work and out of recognition that such tests are the best way to assess a security force’s ability at any given time. 
Furthermore, in recent years, the Department of Homeland Security, Department of Energy, and others have conducted no-notice exercises because they add realism and demonstrate how well organizations are actually prepared to respond to a given situation. Despite the importance of no-notice exercises, the Navy has not conducted no-notice exercises to test and evaluate the centerpiece surge goal of 6 + 2 for carrier strike groups. We believe that the smaller surges cited by DOD can provide insights into the surging process, but we do not believe that such surges can effectively test the Navy’s readiness for a full 6 + 2 carrier strike group surge. DOD also provided technical and editorial comments, which we have incorporated as appropriate. DOD’s comments are reprinted in appendix II of this report. We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Secretary of the Navy; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4402 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To assess the extent to which the Navy has employed a sound management approach in implementing the Fleet Response Plan, we interviewed Navy headquarters and fleet officials; received briefings from relevant officials; and reviewed key program documents. 
In the absence of a comprehensive planning document, we compared best practices for managing and implementing major efforts to key Navy messages, directives, instructions, and briefings, including, but not limited to, the Culture of Readiness message sent by the Chief of Naval Operations (March 2003); the Fleet Response Concept message sent by the Chief of Naval Operations (May 2003); the Fleet Response Plan Implementation message sent by the Commander, Fleet Forces Command (May 2003); the Fleet Response Plan Implementation Progress message sent by the Commander, Third Fleet (September 2003); and the U.S. Fleet Forces Command’s Fleet Training Strategy instruction (May 2002 and an undated draft). We also conducted meetings with several of the commanding officers, executive officers, and department heads of selected carrier strike groups, aircraft carriers, and air wings to obtain information on how the plan had been communicated, how the plan had changed their maintenance and training processes, the impact on their quality of life, the cost implications of the plan, and other factors. To assess the extent to which the Navy has tested the effectiveness of the Fleet Response Plan and shared results to improve its implementation, we obtained briefings; interviewed Navy headquarters and fleet officials; and reviewed test and evaluation guidance for both the Navy and other federal agencies. To evaluate the three Fleet Response Plan demonstrations identified by the Navy, we interviewed officials from the Fleet Forces Command and the Navy Warfare Development Command, reviewed existing documentation on the demonstrations, queried the Navy Lessons Learned System for lessons learned from the demonstrations, and compared our findings to accepted best practices for tests and evaluations. Further, we reviewed Navy Lessons Learned System instructions and queried the system to determine recorded lessons learned pertaining to the Fleet Response Plan. 
We validated the Navy Lessons Learned System data and determined the data were sufficiently reliable for our analysis. We conducted our review from January 2005 through August 2005 in accordance with generally accepted government auditing standards at the following locations: the Joint Staff, Washington, D.C.; U.S. Pacific Command, Camp H. M. Smith, Hawaii; and the Offices of the Chief of Naval Operations, Washington, D.C. In addition to the contact named above, Richard Payne, Assistant Director; Renee Brown; Jonathan Clark; Nicole Collier; Dawn Godfrey; David Marroni; Bethann Ritter; Roderick Rodgers; John Van Schaik; and Rebecca Shea made significant contributions to this report.
The Navy has been transforming itself to better meet 21st century needs. Since 2000, the Congress has appropriated about $50 billion annually for the Navy to operate and maintain its forces and support around 376,000 military personnel. Recognizing that the Navy faces affordability issues in sustaining readiness within its historical share of the defense budget, the Chief of Naval Operations announced a concept called the Fleet Response Plan to enhance the Navy’s deployment readiness. The Fleet Response Plan is designed to more rapidly prepare and sustain readiness in ships and squadrons. GAO evaluated the extent to which the Navy has (1) employed a sound management approach in implementing the Fleet Response Plan and (2) tested and evaluated the effectiveness of the plan and shared results to improve implementation. In establishing the Fleet Response Plan, the Navy has embraced a major change in the way it manages its forces. However, the Navy's management approach in implementing the Fleet Response Plan has not fully incorporated sound management practices needed to guide and assess implementation. These practices include (1) establishing a coherent mission and strategic goals, including resource commitments; (2) setting implementation goals and a timeline; and (3) establishing a communication strategy. While the Navy has taken a number of positive actions to implement the plan, it has not provided readiness goals for units other than carrier strike groups; resource and maintenance goals; performance measures and timelines; or a communications strategy. Sound management practices were not fully developed because senior leaders wanted to quickly implement the plan in response to changes in the security environment. 
However, without an overall management plan containing all of these elements, it may be difficult for the Navy to determine whether its efforts to improve the fleet's readiness are achieving the desired results, to adequately measure overall progress, or to identify what resources are needed to implement the Fleet Response Plan. The Navy has not fully tested and evaluated the Fleet Response Plan or developed lessons learned to identify the effectiveness of its implementation and success over time. Systematic testing and evaluation of new concepts is an established practice to gain insight into how systems and capabilities will perform in actual operations. However, instead of methodically conducting realistic tests to evaluate the Fleet Response Plan, the Navy has tried to demonstrate the viability of the plan by relying on loosely linked events that were not part of an overall test and evaluation strategy. This approach could impair the Navy's ability to validate the plan and evaluate its success over time. In addition, the Navy has not used its lessons learned system to share the results of its Fleet Response Plan events or as an analytical tool to evaluate the progress of the plan and improve implementation, which limits the Navy's ability to identify and correct weaknesses across the fleet.
The narrow margin of victory in the 2000 presidential election raised concerns about the extent to which members of the military, their dependents, and U.S. citizens living abroad were able to vote via absentee ballot. The elections process within the United States is primarily the responsibility of the individual states and their election jurisdictions. States have considerable discretion in how they organize the elections process, and this discretion is reflected in the diverse processes and deadlines that states have for voter registration and absentee voting, including those that apply to military and overseas voters. Even when imposing requirements on the states in the Help America Vote Act of 2002, such as statewide voter registration systems and provisional voting, Congress left states discretion in how to implement those requirements and did not require uniformity. Executive Order 12642, dated June 8, 1988, designated the Secretary of Defense or his designee as responsible for carrying out the federal functions under UOCAVA. UOCAVA requires the presidential designee to (1) compile and distribute information on state absentee voting procedures, (2) design absentee registration and voting materials, (3) work with state and local election officials in carrying out the act, and (4) report to Congress and the President after each presidential election on the effectiveness of the program’s activities, including a statistical analysis on UOCAVA voter participation. DOD Directive 1000.4, dated April 14, 2004, is DOD’s implementing guidance for the federal voting assistance program, and it assigned the Under Secretary of Defense for Personnel and Readiness (USD P&R) the responsibility for administering the program. The FVAP office, under the direction of the USD P&R, manages the program. For 2004, FVAP had a full-time staff of 13 and a fiscal year budget of approximately $6 million. FVAP’s mission is to (1) inform and educate U.S. 
citizens worldwide of their right to vote, (2) foster voting participation, and (3) protect the integrity of, and enhance, the electoral process at the federal, state, and local levels. DOD Directive 1000.4 also sets forth DOD and service roles and responsibilities in providing voting education and assistance. In accordance with the directive, FVAP relies heavily upon the military services and DOS for distribution of absentee voting materials to individual UOCAVA citizens. According to the DOD directive, each military service is to appoint a senior service voting representative, assisted by a service voting action officer, to oversee the implementation of the service’s voting assistance program. Also, the military services are to designate trained VAOs at every level of command to carry out voting education and assistance responsibilities to servicemembers and their eligible dependents. One VAO on each military installation should be assigned to coordinate voting efforts conducted by VAOs in subordinate units and tenant commands. Where possible, installation VAOs should be civilians of grade GS-12 or higher or officers of pay grade O-4 or higher. In accordance with the DOD directive, commanders designate persons to serve as VAOs. Serving as a VAO is a collateral duty, to be performed along with the servicemember’s other duties. Similarly, DOS, through its Bureau of Consular Affairs and its embassies and consulates, carries out its voter assistance responsibilities by designating VAOs to provide assistance. The Foreign Affairs Manual contains absentee voting guidance for embassy and consulate VAOs, who also provide voting assistance as a collateral duty. FVAP updates the Voting Action Plan—its primary voting guidance to DOD components and other agencies—every 2 years. The Voting Action Plan provides detailed guidance on implementing the federal functions of UOCAVA and DOD Directive 1000.4. 
It also tasks FVAP, DOD components, and all other participating federal agencies with specific responsibilities and provides a timeline for carrying out their roles. FVAP updated the plan for 2004–05; however, it was never approved by the Secretary of Defense, and it remained in draft form for the 2004 presidential election. FVAP and the services referred to the draft Voting Action Plan in implementing their voting assistance efforts for the 2004 election. To assist voters in the absentee voting process, FVAP also updates its Voting Assistance Guide every 2 years. The guide includes state-by-state instructions and timelines for completing the various voting forms and it also lists addresses for local election offices within each state. For the 2004 presidential election, FVAP expanded its efforts beyond those taken in the 2000 election by providing military personnel and overseas citizens with more tools and information needed to vote by absentee ballot. First, FVAP distributed more voting materials, and improved its Web site to enable greater access for participants. Second, FVAP increased absentee voting training opportunities by providing more workshops and an online training course for the 2004 election. Third, FVAP developed an electronic version of the Federal Write-in Absentee Ballot, which is accepted by all states and U.S. territories. In its 2005 report to the Congress and the President on the effectiveness of its federal voting assistance program, on the basis of its postelection surveys, FVAP attributed higher 2004 voter participation rates to the effective implementation of its voter outreach program. However, because of low survey response rates, GAO has concerns about FVAP’s ability to project changes in voter participation rates between the 2000 and 2004 presidential elections. 
For the 2000 election, we reported that voting materials, such as the Federal Post Card Application (FPCA)—the registration and absentee ballot request form for UOCAVA citizens—were not always available when needed. Representatives from DOD and DOS told us that they had enough 2004 election materials for their potential absentee voters. Each service reported meeting the DOD requirement of 100 percent in-hand delivery of FPCAs to each servicemember by January 15. DOS also targeted 100 percent in-hand delivery of FPCAs to citizens employed with the embassies and consulates. According to DOS, FVAP initially provided DOS with the quantity of Voting Assistance Guides requested; however, because of high voter interest, additional copies were needed and obtained from the military services. After the 2000 presidential election, FVAP took steps to make its Web site more accessible to UOCAVA citizens worldwide by changing security parameters surrounding the site. According to FVAP, prior to the 2004 election, its Web site was within the existing DOD “.mil” domain, which includes built-in security firewalls. Some overseas Internet service providers were consequently blocked from accessing this site because hackers were attempting to get into the DOD system. As a result, FVAP moved the site out of the DOD “.mil” domain to a less secure domain. In September 2004, FVAP issued a news release announcing this change and provided a list of Web site addresses that would allow access to the site. Nonetheless, representatives of overseas citizens’ organizations continued to report that some citizens were not able to access the site. FVAP acknowledged that the site was not accessible at times prior to the 2004 election, but said that this problem was limited to relatively small geographic areas and occurred because some networks employed independent protection mechanisms that prevented communication with FVAP’s system. 
Representatives from overseas citizens’ groups acknowledged that obtaining access to FVAP’s Web site was sometimes difficult but said that this was caused by the Internet service provider, not by FVAP. They stated that they were able to get to FVAP’s Web site through other Web sites, such as those of Democrats Abroad and Republicans Abroad. FVAP also added more election-related links to its Web site to assist UOCAVA citizens in the voting process. The Web site (which FVAP considers one of its primary vehicles for disseminating voting information and materials) provides downloadable voting forms and links to all of FVAP’s informational materials, such as the Voting Assistance Guide, Web sites of federal elected officials, state election sites, and U.S. overseas citizens’ organizations. It also contains contact information for FVAP and the military departments’ voting assistance programs. The representatives from overseas citizens’ organizations said that FVAP’s Web site provided useful information concerning absentee voting. Although FVAP provided more resources to UOCAVA citizens concerning absentee voting, it is ultimately the responsibility of the voter to be aware of and understand these resources, and to take the actions needed to participate in the absentee voting process. For the 2004 election, FVAP increased the number of VAO training workshops it conducted to 164. The workshops were conducted at U.S. embassies and military installations around the world, including installations where units were preparing to deploy. In contrast, only 62 training workshops were conducted for the 2000 election. FVAP conducts workshops during federal election years to train military and civilian VAOs in providing voting assistance. In March 2004, FVAP added an online training course to its Web site as an alternative to its in-person voting workshops. 
Military VAOs can take the military version and DOS civilian VAOs can take the civilian version of the online course, and both are available on CD-ROM. According to FVAP, completion of the workshop or the online course meets a DOD requirement that VAOs receive training every 2 years. Installation VAOs are responsible for monitoring completion of training. The training gives VAOs instructions for completing voting forms, discusses their responsibilities, and informs them about the resources available to conduct a successful voting assistance program. On October 21, 2004, just a few weeks prior to the election, FVAP issued a news release announcing an online version of the Federal Write-in Absentee Ballot, an emergency ballot accepted by all states and territories. UOCAVA citizens who do not receive their requested state absentee ballots in time to meet state deadlines for receipt of voted ballots can use the Federal Write-in Absentee Ballot. The National Defense Authorization Act for Fiscal Year 2005 amended the eligibility criteria for using the Federal Write-in Absentee Ballot. Prior to the change, a UOCAVA citizen had to be outside of the United States, have applied for a regular absentee ballot early enough to meet state election deadlines, and not have received the requested absentee ballot from the state. Under the new criteria, the Federal Write-in Absentee Ballot can also be used by military servicemembers stationed in the United States, as well as overseas. However, overseas civilian citizens cannot mail the Federal Write-in Absentee Ballot from within the United States. On the basis of its 2004 postelection surveys, FVAP reported higher voter participation rates among UOCAVA citizens in its quadrennial report to the Congress and the President on the effectiveness of its 2004 voting assistance efforts. 
The report included a statistical analysis of voter participation and discussed experiences of uniformed servicemembers, federal civilians overseas, nonfederally employed overseas citizens, unit and DOS VAOs, and local election officials during the election, as well as a description of state-federal cooperation in carrying out the requirements of UOCAVA. However, the low survey response rates raise concerns about FVAP’s ability to project increased voter participation rates among all categories of UOCAVA citizens. We reported in 2001 that some absentee ballots were disqualified for various reasons, including improperly completed ballot return envelopes, failure to provide a signature, or lack of a valid residential address in the local jurisdiction. We recommended that FVAP develop a methodology, in conjunction with state and local election jurisdictions, to gather nationally projectable data on disqualified military and overseas absentee ballots and reasons for their disqualification. In anticipation of gathering nationally projectable data, prior to the election, FVAP randomly selected approximately 1,000 local election officials to receive an advance copy of the postelection survey so they would know what information to collect during the election to complete the survey. The survey solicited a variety of information concerning the election process and absentee voting, such as the number of ballots issued, received, and counted, as well as reasons for ballot disqualification. In its 2005 report, FVAP cited the top two reasons for disqualification as ballots received too late and ballots returned as undeliverable. FVAP also developed a survey for federal civilians overseas, nonfederally employed overseas citizens, military servicemembers, and VAOs for military units and DOS, which it sent after the election to elicit voting experiences with the absentee voting process. Table 1 displays FVAP’s sample size and response rates for the various survey groups. 
FVAP reported higher participation rates for all groups in the 2004 presidential election as compared with those reported for the 2000 election. FVAP attributed the higher voting participation rates to an effective voter information and education program that included command support and agency emphasis. State progress in simplifying absentee voting procedures and increased interest in the election were also cited as reasons for increased voting participation. However, low survey response rates raise concerns about FVAP’s ability to project participation rate changes among UOCAVA citizens. While, according to FVAP, the 2004 postelection surveys were designed to provide national estimates, most of the surveys experienced low response rates. Although FVAP did not include the sample sizes and response rates in its report, five of the six groups surveyed had response rates that ranged from 16 to 52 percent; the remaining and smallest group surveyed achieved an 87 percent response rate. FVAP did not perform any analysis comparing those who responded to the surveys with those who did not respond. Such an analysis would allow researchers to determine if those who responded to the surveys are different in some way from those who did not respond. If it is determined that there is a difference between those who responded and those who did not, then the results cannot be generalized across the entire population of potential survey participants. In addition, FVAP did no analysis to account for sampling error. Sampling error occurs when a survey is sent to a sample of a population rather than to the entire population. While techniques exist to measure sampling error, FVAP did not use these techniques in its report. The practical difficulties in conducting surveys of this type may introduce other types of errors as well, commonly known as nonsampling errors. 
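One standard technique for measuring the sampling error discussed above is the margin of error for a survey proportion. The sketch below illustrates the calculation; the respondent count and proportion are hypothetical figures, not FVAP's survey data:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95 percent margin of error for a survey proportion.

    p_hat: observed proportion (e.g., share of respondents who voted)
    n: number of completed surveys
    z: critical value (1.96 for a 95 percent confidence level)
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical figures, not FVAP's actual data:
# 400 respondents, 73 percent reporting that they voted.
moe = margin_of_error(0.73, 400)
print(f"margin of error: +/-{moe:.1%}")
```

Nonsampling errors, by contrast, cannot be quantified by a formula of this kind.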
For example, errors can be introduced if (1) respondents have difficulty interpreting a particular question, (2) respondents have access to different information when answering a question, or (3) those entering raw survey data make keypunching errors. FVAP also faced specific challenges in administering surveys to overseas citizens who voted absentee. In surveying overseas citizens, DOS chose only a select number of embassies to administer the survey. Because of confidentiality restrictions, FVAP was unable to obtain a list of federal civilians and nonfederally employed civilians living overseas, and had to rely on the embassies to select the people who received the surveys. Only citizens who had previously registered with the embassy had a chance to participate in the survey. U.S. citizens who lived overseas and were not registered with the embassy had no chance of being selected. The absence of a listing of all civilians overseas certainly contributes to the possibility of error associated with using a sample of the population. The response rate for nonfederal civilians was the lowest among all groups surveyed. As such, the views and voting experiences of the survey participants may not reflect, and cannot be generalized to, those of all overseas citizens. Because of these known weaknesses in FVAP’s reporting methodology, its estimates and conclusions should be interpreted with caution. In 2001, we reported that implementation of the federal voting assistance program by DOD and DOS was uneven due to incomplete service guidance, lack of oversight, and insufficient command support. Prior to the 2004 presidential election, DOD and DOS implemented corrective actions that addressed our recommendations. However, the level of assistance continued to vary at the installations we visited and throughout the overseas civilian community. 
Because the VAO role is a collateral duty and VAOs’ understanding and interest in the voting process differ, some variance in voting assistance may always exist. DOD and DOS plan to continue their efforts to improve absentee voting assistance. In 2001, we reported that the services had not incorporated all of the key requirements of DOD Directive 1000.4 into their own voting policies, and that DOD exercised very little oversight of the military’s voting assistance programs. The report also stated that the oversight of DOS’s voting assistance program could be improved. These factors contributed to some installations not providing effective voting assistance. We recommended that the Secretary of Defense direct the services to revise their voting guidance to be in compliance with DOD’s voting requirements, and provide for more voting program oversight through inspector general reviews and a lessons-learned program. Subsequent to DOD’s revision of Directive 1000.4, the services revised their guidance to reflect DOD’s voting requirements. In the 2002–03 Voting Action Plan, FVAP implemented a best practices program to support the development and sharing of best practices among VAOs in operating voting assistance programs. FVAP included guidance on its Web site and in its Voting Assistance Guide on how VAOs could identify and submit a best practice. Identified best practices for all the services are published on the FVAP Web site and in the Voting Information News—FVAP’s monthly newsletter to VAOs. 
We also recommended that the Secretary of State direct the Assistant Secretary of State for Consular Affairs to take a more active role in overseeing the voting assistance program by (1) establishing processes for improving oversight and consistency across embassies and consulates, including reminding posts more frequently to use the Foreign Affairs Manual and related guidance for ordering supplies and to use the military postal system and the diplomatic pouch, and (2) undertaking initiatives to improve outreach, including identifying best practices in a forum accessible to embassies and consulates, such as the Consular Affairs Web site. In responding to these recommendations, DOS began maintaining a global listing of all of its VAOs and voting assistants and provided instructions to posts on administering their voting assistance programs. DOS revised chapter 7, the voting assistance chapter, of its Foreign Affairs Manual and posted the manual, its 2004–05 Voting Action Plan, and other guidance on its intranet Web site for access by all its embassies and consulates. Although the revised version of this chapter was in draft form during the 2004 election and awaiting approval by the various DOS directorates, it was put on the DOS Web site in early 2004 for use by the embassies and consulates. The draft was approved in January 2006. Representatives at the embassies and consulates also conducted numerous outreach efforts through warden messages, embassy Web sites, and town hall meetings. The department’s Chief Voting Officer maintained contact with the various embassy VAOs and voting assistants throughout the year, providing information on absentee voting procedures, voter education and outreach campaigns, and various registration and voting deadlines. The DOS Chief Voting Officer also received periodic updates on the status of the embassies’ voting assistance efforts. 
While DOS did not develop a formal lessons-learned program, the Chief Voting Officer said that he solicited ideas and best practices from each of the embassies and consulates. These practices were incorporated into instructions for the 2004 election that were distributed throughout the organization via its Web site and e-mail traffic. For the 2004 election, emphasis on voting education and awareness increased throughout the top levels of command within DOD and DOS. In 2001, we reported that lack of DOD command support contributed to the mixed success of the services’ voting programs and recommended that the Senior Service Voting Representatives monitor and periodically report to FVAP on the level of installation command support. To ensure command awareness and involvement in implementing the voting assistance program, in late 2003 the USD P&R began holding monthly meetings with FVAP and the Senior Service Voting Representatives to discuss the status of service voting assistance programs. In 2001, we also reported that some installations and units did not appoint VAOs as required by DOD Directive 1000.4. In March 2004, the Secretary of Defense and Deputy Secretary of Defense issued memorandums to the Secretaries of the military departments, the Chairman of the Joint Chiefs of Staff, and Commanders of the Combatant Commands, directing them to support voting at all levels of command. These memorandums were issued to ensure that voting materials were made available to all units and that VAOs were assigned and available to assist voters. Also, the Chairman of the Joint Chiefs of Staff recorded a DOD-wide message regarding the opportunity to vote and ways in which VAOs could provide assistance. This message was used by FVAP in its training presentations and was distributed to military installations worldwide. During our review, we found that each service reported to DOD that it had assigned VAOs at all levels of command. 
Voting representatives from each service used a variety of servicewide communications to disseminate voting information and stress the importance of voting. For example, the Marine Corps produced a videotaped interview stressing the importance of voting that was distributed throughout the Marine Corps. The Army included absentee voting information in a pop-up message that appeared on every soldier’s e-mail account. In each service, the Voting Action Officer sent periodic messages to unit VAOs, reminding them of key voting dates and areas to focus on as the election drew closer. Throughout the organizational structure, these VAOs contacted servicemembers through servicewide e-mail messages, which contained information on how to get voting assistance and reminders of voting deadlines. According to service voting representatives, some components put together media campaigns that included reminders in base newspapers, billboards, and radio and closed circuit television programs. They also displayed posters in areas frequented by servicemembers (such as exchanges, fitness centers, commissaries, and food court areas). DOS’s top-level leadership also increased its emphasis on absentee voting for the 2004 election. The department’s Senior Voting Representative provided an article in the September 2003 issue of FVAP’s Voting Information News, which was available on FVAP’s Web site. This article reminded overseas voters of the upcoming presidential primary election and the time frame for registering and requesting absentee ballots. It also reminded all involved that starting early in the process was key to a successful program. Identifying and training volunteers from the civilian American community were also emphasized as ways to multiply the effectiveness of the VAO. The article also discussed the availability of the embassy community and its resources, meetings with local communities, and the use of local media to get the word out on absentee voting. 
Throughout the year, the Chief Voting Officer sent messages to the posts concerning the absentee voting process and various deadlines. DOS also used its embassies and consulates, various private organizations, and the local media to disseminate FVAP voting materials and information. These organizations conducted various outreach efforts, including holding town hall meetings, sending messages from the VAO to overseas citizens concerning absentee voting, and holding voter registration drives. As the election deadline approached, the department intensified its efforts to assist overseas citizens in voting absentee. For example, in early October 2004, a consul general placed hundreds of Federal Write-in Absentee Ballots on a supply plane headed to Antarctica and sent an e-mail message to overseas citizens there, urging them to drop off completed ballots or fill out emergency ballots while the plane was on the ground. In late October 2004, one consulate sent an e-mail containing last-minute voting information to all Americans in the district and attempted to telephone those who could not be reached by e-mail. DOS encouraged all of its VAOs and voting assistants to set a goal of 100 percent in-hand delivery of FPCAs to the official American community by approximately June 30, 2004. It defined this community as the U.S. citizens employed at the embassies, consulates, or other U.S. missions in the various countries for whom it had appropriate contact information. In addition to this goal, the Chief Voting Officer also suggested that officers transferring to a post should receive FPCAs as part of their post welcome kit or shortly after their arrival at a post. DOS also worked with courier services to obtain discounted or free delivery of requests for ballots and voted ballots. 
While the arrangements varied by country, generally the courier would allow overseas citizens, with proper identification, to ship ballot materials to their local election offices at reduced or no cost. The voter was required to go to a shipping office of the courier and complete the shipping paperwork, and the package would be mailed. The services and DOS revised their voting guidance, increased top-level support, and improved program oversight. However, voting assistance to servicemembers and overseas citizens continued to vary. Based on our analysis of information from our focus groups, we determined that the voting assistance that servicemembers received varied from unit to unit for several reasons, including (1) the fact that the VAO role is a collateral duty, (2) varying individual VAO understanding and interest in the voting process, (3) differing levels of VAO training, and (4) the command’s mobilization status. Also, in discussions with DOS’s Chief Voting Officer, we were told that the level of DOS voting assistance varied according to the level of development in the country, the security climate, and the quality of the host country’s infrastructure. The variation in voting assistance provided by DOD and DOS may have caused some potential voters to be unaware of relevant voting tools. Given these factors, some variance in absentee voting assistance may always exist; however, DOD and DOS plan to continue efforts to improve the process. VAOs play a crucial role in informing citizens of the availability and usefulness of FVAP’s resources. Providing voting assistance is a collateral duty; those appointed are faced with time constraints in providing voting assistance to military servicemembers and overseas citizens, and are expected to fulfill these duties in addition to their primary duties as warfighters and mission support staff. Furthermore, military personnel rotate to new assignments periodically, creating turnover in the voting assistance program. 
VAOs at each installation we visited commented that it was difficult to be effective because of the normal but competing mission requirements they had to fulfill while simultaneously performing their VAO responsibilities. For example, VAOs at two installations said their workload increased because of additional tasks that included responding to voting-related requirements from the head of the service, answering surveys on whether servicemembers were being educated on voting, and completing numerous reports on contacts with servicemembers. The level of understanding and interest shown by some VAOs in their duties may have also affected the voting assistance they provided. At one installation we visited, VAOs said they were directed by their commanding officer to serve as VAOs, while at two other installations, some VAOs said they had volunteered for the role. VAOs who volunteered appeared to be more interested and took the initiative to learn more about voting than some of the VAOs who were appointed. At one installation we visited, a lack of interest in being a VAO was evident among VAOs who thought it was the responsibility of the voter to get the necessary information to vote via absentee ballot. While the VAOs we spoke with were generally knowledgeable about DOD’s voting requirements, we found that the extent to which they were trained to provide voting assistance varied, as we reported in September 2001. At four of the installations we visited, none of the VAOs we met with had attended an FVAP workshop, and VAOs at one of these installations said they had not received any training. A Voting Action Officer from one service stated that travel to a workshop location was a problem because there was no specific funding for VAO training. At one installation, VAOs cited time constraints and high turnover as reasons for not being trained to provide voting assistance. 
VAOs from another installation suggested that voting training should be shortened to include only the key items VAOs need to know to provide assistance, such as instructions for completing the FPCA. At one other installation, many VAOs had attended an FVAP workshop and others had taken the online training. DOD Directive 1000.4 allows a VAO who is unable to attend a workshop to take the online training course to meet the VAO training requirement. Our review of FVAP’s online course showed that it provided an overview of VAO roles and responsibilities, included a section on using the Federal Write-in Absentee Ballot, and cited several other resources available for absentee voting assistance, such as the Voting Assistance Guide, FVAP’s Web site, and the Voting Information News—resources that we found to be helpful in providing voting assistance. For example, the Voting Assistance Guide has a chapter titled Instructions for Voting Assistance Officers, which provides instructions on 23 areas related to absentee voting. The extent of training had an effect on the level of voting assistance provided to potential voters in some locations. For example, we found one installation VAO who was not aware of the online Federal Write-in Absentee Ballot or the revised criteria for its use, and therefore was unable to assist other VAOs and servicemembers in using the online form. However, a VAO at another installation said he was aware of the ability to use this ballot, and his unit used as many as 125 of these ballots during the 2004 presidential election. At one installation, some VAOs said the online training was more useful than the workshop, but at another installation some VAOs did not find the online training very helpful, commenting that it was difficult to find on FVAP’s Web site, was not user-friendly, or took too much time to complete. 
At another installation, VAOs commented that training workshops tailored to specific installations would be beneficial and would encourage more VAOs to attend. For example, this training could include specific tasks related to new recruits at a training installation. Additionally, VAOs commented that training is good only for a limited time. By the time a presidential election occurs, much of the training they received earlier in the year is forgotten. The command’s mobilization status also affected the level of voting assistance provided by VAOs. Specifically, one location we visited had many ground units deployed or preparing to deploy during the 2004 election, and absentee voting was not a priority. Officials stated that voting was mentioned but was not a top priority when compared with other deployment issues, such as preparing powers-of-attorney and wills and concentrating on troop movements while in theater. Conversely, ship-based servicemembers told us that, given the enclosed confines of their ship, they had no reason to be unaware of absentee voting, even while deployed. During our review, a few servicemembers who were deployed during the election told us that voting was mentioned at their deployed location but there were other things going on that took priority. According to the DOS Chief Voting Officer, the level of voter assistance for overseas citizens also varied according to the level of development in the country, the security climate, and the quality of the host country’s infrastructure. For example, the reliability of the mail system, working telephones, passable road networks, and even the existence of electric power grids play important roles, and require VAOs to use different means in different places to help citizens register and vote. 
Also, in industrial locations within a country, e-mail and warden messages could be an effective primary means of communication, whereas in rural locations within the same country, the means of communication might be a person on foot taking information to an American citizen. According to the department’s Senior Voting Representative, most embassies, consulates, and U.S. news organizations reported extraordinary increases in the number of Americans abroad who registered and planned to vote in the 2004 general election. Contributing factors to this increase appear to be greatly expanded voter education and outreach, the closeness of the vote in the 2000 election, and reaction to world events over the past 4 years. Despite DOS’s outreach efforts for the 2004 election, representatives of some overseas citizens’ groups we spoke with believed there was still a lack of adequate DOS outreach to overseas citizens, especially in comparison with the outreach they believed was provided to military servicemembers. DOS reported that it received relatively few complaints from Americans abroad and that most complaints were from infrequent or first-time voters confused by the absentee voting process. Some voters complained that they failed to receive a ballot from their local election officials, and a few claimed they experienced difficulties when attempting to contact embassies or consulates by phone. DOS reported that it acted quickly to address each of these concerns. Despite the efforts of FVAP, DOD, and DOS, we identified three challenges that remain in providing voting assistance to military personnel and overseas citizens: simplifying and standardizing the time-consuming, multistep absentee voting process, which includes different requirements and time frames for each state; developing and implementing a secure electronic registration and voting system; and proactively reaching all overseas citizens. 
The simplest and most truthful answer is that it all depends. Does the voter want to participate in presidential primary elections, state primary elections, run-off elections, special elections, and the November general election? To answer that question, one needs to ask several others: (1) What is the voter’s state of voting residence? (2) Is the voter already or still registered to vote? (3) Does the voter’s state send out absentee ballots early or late? and (4) Is remoteness or poor mail service a consideration for the voter? Answering these questions is also a challenge for voters, given that each state has its own deadlines for receipt of FPCAs, and the deadline is different depending on whether or not the voter is already registered. For example, according to the Voting Assistance Guide, Montana requires a voter who has not previously registered to submit an FPCA at least 30 days prior to the election. A voter who is already registered must ensure that the FPCA is received by the County Election Administrator by noon on the day before the election. For Idaho voters, the FPCA must be postmarked by the 25th day before the election if they are not currently registered. If they are registered, the County Clerk must receive the FPCA by 5:00 p.m. on the 6th day before the election. For Virginia uniformed services voters, the FPCA must arrive not later than 5 days before the election, whether already registered or not. However, overseas citizens who are not already registered must submit an FPCA to the General Registrar not later than 29 days before the election. Those overseas voters who are already registered must ensure that the FPCA reaches the General Registrar not later than 5 days before the election. Using different FPCA deadlines for newly registered and previously registered voters may have some administrative logic and basis. 
For example, verifying the eligibility of a newly registered voter may take longer than that of a previously registered voter, and if there is some question about the registration information provided, the early deadlines provide some time to contact the voter and get it corrected. DOD encourages potential voters to complete and mail the FPCA early, in order to receive absentee ballots for all upcoming federal elections during the year. Military and international mail and the U.S. Postal Service are the primary means for transmitting voting materials, according to servicemembers with whom we spoke. A challenge for military servicemembers in completing the FPCA is to know where they will be located when the ballots are mailed by the local election official. If the voter changes locations after submitting the FPCA and does not notify the local election official, the ballot will be sent to the address on the FPCA and not the voter’s new location. This can be further complicated by a 2002 amendment to UOCAVA, which allowed military personnel and overseas citizens to apply for absentee ballots for two federal elections. If servicemembers request ballots for the next two federal elections, they must project, up to 4 years in advance, where they will be located when the ballots are mailed. DOD recommended that military servicemembers and overseas citizens complete an FPCA annually in order to maintain registration and to receive ballots for upcoming elections. After a valid FPCA has been received by the local election official, the next step for the voter is to receive the absentee ballot. When a state mails its ballots sometimes depends on when the state holds its primary elections. FVAP has an initiative encouraging a 40–45-day transit time for mailing and returning absentee ballots; however, 14 states have yet to adopt this initiative. 
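The state-by-state FPCA deadline rules described above reduce to simple date arithmetic when each rule is expressed as a days-before-election offset. The sketch below illustrates the idea; the rule table is an illustrative paraphrase of the Voting Assistance Guide, not an authoritative statement of state law:

```python
from datetime import date, timedelta

# Illustrative rule table: days before the election by which the FPCA
# must be received (or postmarked), keyed by (state, already_registered).
# Authoritative deadlines come from the Voting Assistance Guide.
FPCA_DAYS_BEFORE = {
    ("Montana", False): 30,  # not previously registered
    ("Montana", True): 1,    # by noon the day before the election
    ("Idaho", False): 25,    # postmark deadline
    ("Idaho", True): 6,      # receipt by 5:00 p.m.
}

def fpca_deadline(election_day, state, already_registered):
    """Latest date an FPCA may be received (or postmarked) for a state."""
    days = FPCA_DAYS_BEFORE[(state, already_registered)]
    return election_day - timedelta(days=days)

election = date(2004, 11, 2)  # the 2004 general election
print(fpca_deadline(election, "Montana", False))  # 2004-10-03
```

A real lookup would also need to distinguish received-by from postmarked-by deadlines and handle time-of-day cutoffs such as Montana's noon rule, which are not pure day offsets.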
During our focus group discussions, some servicemembers commented that they either did not receive their absentee ballot or received it so late that they did not believe they had sufficient time to complete and return it in time to be counted. After the voter completes the ballot, the voted ballot must be returned to the local election official within time frames established by each state. As we reported in 2004, deployed military servicemembers face numerous problems with mail delivery, such as military postal personnel who were inadequately trained and initially scarce because of late deployments, as well as inadequate postal facilities, material-handling equipment, and transportation assets to handle mail surges. In December 2004, DOD reported that it had taken actions to arrange for transmission of absentee ballot materials by Express Mail through the Military Postal Service Agency and the U.S. Postal Service. However, during our focus group discussions, servicemembers cited problems with the mail, such as its low priority when a unit is moving from one location to another, the susceptibility of mail shipments to attack while in theater, and the absence of daily mail service on some military ships. For example, some servicemembers said that mail sat on the ships for as long as a week, awaiting pickup. Others stated that in the desert, mail trucks are sometimes destroyed during enemy attacks. The DOS Chief Voting Officer characterized some overseas mail systems as not functioning. To compensate for some of the mail delivery challenges, DOS negotiated with international courier companies to establish reduced rates and expedited service for voting materials from overseas citizens. In attempting to simplify and standardize the absentee voting process, FVAP continued working with the states, through its Legislative Initiatives program, to facilitate the absentee voting process for military servicemembers and overseas citizens. 
However, the majority of states have not agreed to any new initiatives since FVAP’s 2001 report to Congress and the President on the effectiveness of its efforts during the 2000 election. The Legislative Initiatives program is designed to make it easier for military servicemembers and overseas citizens to vote by absentee ballot. FVAP is limited in its ability to affect state voting procedures because it lacks the authority to require states to take action on absentee voting initiatives. In the 1980s, FVAP began its Legislative Initiatives program with 11 initiatives, and as of December 2005 it had not added any others. Two of the 11 initiatives—(1) accept one FPCA as an absentee ballot request for all elections during the calendar year and (2) removal of the not-earlier-than restrictions for registration and absentee ballot requests—were made mandatory for all states by the National Defense Authorization Act for Fiscal Year 2002 and the Help America Vote Act of 2002, respectively. According to FVAP, this action was the result of state election officials working with congressional lawmakers to improve the absentee voting process. Between FVAP’s 2001 and 2005 reports to Congress and the President, the majority of the states had not agreed to any of the remaining nine initiatives. Since FVAP’s 2001 report, 21 states agreed to one or more of the nine legislative initiatives, totaling 28 agreements. Table 2 shows the number of agreements with the initiatives since the 2001 report. According to FVAP records, one state withdrew its support for the 40–45-day ballot transit time initiative, and another state withdrew support for enfranchising citizens who had never resided in the United States. Initiatives with the most state support were (1) the removal of the notary requirement on election materials and (2) allowing the use of electronic transmission of election materials. We also found a disparity in the number of initiatives that states have adopted. 
For example, Iowa is the only state to have adopted all nine initiatives, while Vermont, American Samoa, and Guam have adopted only one initiative each. Despite some progress by FVAP in streamlining the absentee voting process, absentee voting requirements and deadlines continue to vary from state to state. While it is ultimately the responsibility of the voter to understand and comply with these deadlines, varying state requirements can cause confusion among voters and VAOs about deadlines and procedures for registering and voting by absentee ballot. However, the election process within the United States is primarily the responsibility of the individual states and their election jurisdictions. Developing and implementing an electronic registration and voting system, which would likely improve the timely delivery of ballots and increase voter participation, has proven to be a challenging task for FVAP. Eighty-seven percent of servicemembers who responded to our focus group survey said they were likely to vote over the Internet if security was guaranteed. However, FVAP has not been able to develop a system that would protect the security and privacy of absentee ballots cast over the Internet. For example, during the 2000 presidential election, FVAP conducted a small proof-of-concept Internet voting project that enabled only 84 voters to vote over the Internet. While the project demonstrated that it was possible for a limited number of voters to cast ballots online, FVAP’s project assessment concluded that security concerns needed to be addressed before expanding remote (i.e., Internet) voting to a larger population. In 2001, we also reported that remote Internet-based registration and voting are unlikely to be implemented on a large scale in the near future because of security risks with such a system. 
The real barrier to success is not a lack of vision, skill, resources, or dedication; it is the fact that, given current Internet and PC security technology and the goal of a secure, all-electronic remote voting system, FVAP has taken on an essentially impossible task. According to FVAP, the full peer review group did not issue a final report. Also, because DOD did not want to call into question the integrity of votes that would have been cast via SERVE, it shut the system down prior to its use by any absentee voters. FVAP could not provide details on what it received for the approximately $26 million that it invested in SERVE. FVAP officials stated that they received some services from the contractor but no hardware or other equipment. In September 2004, DOD implemented the Interim Voting Assistance System (IVAS), an electronic ballot delivery system, as an alternative to the traditional mail process. Although IVAS was meant to streamline the voting process, its strict eligibility requirements prevented many military and civilian voters from using it. IVAS was open to active duty military members, their dependents, and DOD overseas personnel who were registered to vote. These citizens also had to be enrolled in the Defense Enrollment Eligibility Reporting System (DEERS) and had to come from a state and county participating in the project. FVAP officials said the system was limited to DOD members because their identities could be verified more easily than those of nonmilitary overseas citizens. Voters would obtain their ballots through IVAS by logging onto www.MyBallot.mil and requesting a ballot from their participating local election jurisdiction. One hundred eight counties in eight states and one territory agreed to participate in IVAS; however, only 17 citizens downloaded their ballots from the site during the 2004 election. 
Despite low usage of the electronic initiatives and existing security concerns, we found that servicemembers and VAOs at the installations we visited strongly supported some form of electronic transmission of voting materials. During our focus group discussions, servicemembers stated that election materials for the 2004 presidential election were most often sent and received through the U.S. postal system. Servicemembers also commented that the implementation of a secure electronic registration and voting system could increase voter participation and possibly improve confidence among voters that their votes were received and counted. Additionally, servicemembers said that an electronic registration and voting system would improve the absentee voting process by providing an alternative to the mail process, particularly for those servicemembers deployed on a ship or in remote locations. However, at one location, some servicemembers were more comfortable with the paper ballot system and said that an electronic voting system would not work because its security could never be guaranteed. Although DOS set a goal of 100 percent in-hand delivery of an FPCA to overseas citizens employed with an embassy or consulate, it does not have the ability to reach every overseas citizen. While DOS’s Web site is available for overseas citizens to access, DOS does not have the ability to proactively reach the estimated 2 million overseas United States citizens of voting age. According to DOS, about 67 percent of overseas citizens live in about 10 countries, and the remaining 1.2 million overseas citizens are spread throughout the world. If these citizens do not contact the embassy or consulate and provide DOS with appropriate contact information, DOS cannot proactively reach them. DOS has assigned a VAO and voting assistant at each of its approximately 240 embassies and consulates. 
According to the DOS Chief Voting Officer, it is impossible to know where all eligible overseas voters are located or to directly provide them information on absentee voting. He also stated that some overseas citizens could be located hundreds of miles from the embassy. Even for those citizens within proximity of the embassy, the heightened security environment could preclude easy embassy access to obtain voting information. DOS emphasized that it cannot and should not force people to vote, but it should get the forms and information to them as early as possible. In written comments on a draft of this report, DOD generally agreed with our description of its voting assistance efforts. DOD expressed concern that information from our focus group discussions might be presented in a way that could be misinterpreted. In our report, we acknowledged that our focus group responses could not be projected across the military community because participants were not selected using a statistically valid sampling methodology. DOD also stated that Congress instructed the department to pursue an electronic absentee voting project upon the release of guidelines for electronic voting from the Election Assistance Commission and the National Institute of Standards and Technology. As required by the National Defense Authorization Act for Fiscal Year 2005, DOD may delay the implementation of another electronic voting project until the new electronic absentee voting guidelines are issued by the Election Assistance Commission. At the time of our review, the Executive Director of the Commission informed us that the Commission was waiting for the report from FVAP on its Internet voting project prior to establishing the guidelines. DOD’s written comments are reprinted in their entirety in appendix III. In written comments on a draft of this report, DOS also generally agreed with our report and provided a few clarifying comments, which we incorporated into our final report as appropriate. 
First, DOS wanted us to quantify the approximate voting-age population of overseas citizens at about 2 million. Next, DOS stated that the challenge in reaching overseas citizens stems from citizens having no obligation to contact the embassies or consulates, rather than from the geographic dispersion of overseas citizens. If citizens do not contact the embassy or consulate and provide DOS with appropriate contact information, DOS cannot proactively reach them. DOS’s description of the challenge further supports our statement that it cannot reach all overseas citizens. Finally, DOS said that variance in voting assistance was not a result of the size and location of the embassy but was related to other issues, such as (1) the level of development of the country, (2) the security climate, and (3) the quality of the host country’s infrastructure. DOS stated that the reliability of the mail system, working telephones, passable road networks, and even the existence of electric power grids play far more important roles and require the VAOs to use different means in different places to help citizens register and vote. DOS’s written comments are printed in their entirety in appendix IV. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Secretary of State; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-5559 or [email protected] or George F. Poindexter at (202) 512-7213 or [email protected]. GAO staff who made major contributions to this report are listed in appendix V. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
To address our overall objectives, we reviewed relevant reports prepared by GAO, FVAP, DOD, the Inspectors General of each service and DOD, the Election Assistance Commission, and private nonprofit organizations that represent military and overseas citizens who participate in the election process via absentee voting. Specifically, to determine differences in FVAP’s efforts between the 2000 and 2004 presidential elections, we reviewed our 2001 report to obtain an assessment of FVAP’s efforts for the 2000 election and compared that assessment with actions taken by FVAP for the 2004 election. We reviewed Section 1973ff. et seq. of Title 42, United States Code, Uniformed and Overseas Citizens Absentee Voting Act to identify specific federal responsibilities for absentee voting and compared these responsibilities with actions taken by the responsible parties. We also reviewed relevant FVAP, DOD, and DOS regulations, operating procedures, and reports to determine how UOCAVA requirements had been incorporated. This included reviewing DOD Directive 1000.4, Federal Voting Assistance Program; Air Force Instruction 36-3107, Voting Assistance Program; Army Regulation 608-20, Army Voting Assistance Program; Operations Navy Instruction 1742.1A, Navy Voting Assistance Program; Marine Corps Order 1742.1A, Voter Registration Program; and DOS’s Foreign Affairs Manual, 7 FAM 1500, Overseas Voting Program; which list the specific responsibilities of each of the respective organizations for implementing the provisions of UOCAVA. We discussed these requirements with representatives from each organization to determine actions they took in implementing them. We met with a commissioner of the Election Assistance Commission and Voting Action Officers for each of the military services and the DOS’s Chief Voting Officer to obtain their opinions on efforts taken for the 2004 election. 
We also examined projects and special initiatives undertaken by these organizations to address the absentee voting process. We also reviewed FVAP’s Voting Assistance Guide and its Web site to document the type of information provided to UOCAVA citizens for participating in the absentee voting process. Also in determining FVAP’s efforts for the 2004 election, we met with the Deputy Director of FVAP and discussed actions they took to facilitate absentee voting for UOCAVA citizens. We also reviewed FVAP’s 2005 report to Congress and the President and assessed its methodology for conducting its survey of voter participation among military and overseas citizens for the 2004 presidential election. To identify actions taken in response to prior GAO recommendations to reduce variance in program implementation, we reviewed prior GAO reports on absentee voting. We held discussions with officials from DOD and DOS to identify actions they took in responding to these recommendations. We reviewed updated DOD and military service voting assistance policies and guidance and determined whether requirements included in DOD’s overarching guidance had been included in the services’ guidance. We reviewed DOS’s guidance to see whether it included requirements for increased program oversight and outreach to overseas citizens. In addition, we reviewed voting messages sent to embassies/consulates from DOS’s Chief Voting Officer to identify actions taken to assist absentee voters. We also held discussions with VAOs from the military services to discuss their voting assistance efforts and to identify variance in program implementation. We also visited the Marine Corps Recruit Depot, Parris Island, South Carolina, to discuss actions taken at the service level to provide absentee voting training to new recruits. 
We held discussions with VAOs concerning whether and how they provided absentee voting training during recruit training, and we reviewed the training syllabus to identify training related to absentee voting. To identify challenges that remain in providing voting assistance to military personnel and overseas citizens, we met with leaders of organizations representing members of the military and American citizens living overseas to obtain their opinions on assistance efforts provided by FVAP, DOD, and DOS for the 2004 presidential election. These organizations included the National Defense Committee, the Federation of American Women’s Clubs Overseas, the Association of Americans Resident Overseas, and the Overseas Vote Foundation. We also reviewed reports produced by these organizations to gain insights on absentee voting assistance for the 2004 election and to identify remaining challenges. To obtain servicemembers’ opinions on assistance received for the 2004 election and to identify challenges to absentee voting, we conducted 19 focus group discussions with 173 participants, consisting of enlisted servicemembers and officers from each service. To provide an open discussion environment for participants, the groups were separated by grade: enlisted grades 1–4, enlisted grades 5–9, and officers. In selecting the installations at which to conduct the focus group discussions, we identified the nine states with the largest numbers of military servicemembers. From this list, we judgmentally selected one installation for each service, except for the Air Force, for which we selected two installations. One Air Force location served as our test site, and we included its results in our totals. Locations selected were Ft. Stewart, Georgia; Patrick Air Force Base, Florida; Langley Air Force Base, Virginia; Marine Corps Base Camp Pendleton, California; and Pearl Harbor, Hawaii. 
To select focus group participants, at each site we asked the installation VAO to send out notices requesting volunteers to participate in our focus group discussions. The basic criterion used in soliciting volunteers was that they were eligible to participate in the 2004 election. Topics of discussion for the focus groups included the command’s view on absentee voting, each participant’s awareness and their opinion on the usefulness of FVAP’s absentee voting resources, and challenges faced by servicemembers in voting by absentee ballot. Following each focus group discussion, we administered a short survey to each participant which solicited information related to their absentee voting experiences and challenges. Comments provided by the focus group members cannot be projected across the entire military community because the participants were not selected using a statistically valid sampling methodology. We determined that the data we used were sufficiently reliable for the purpose of our report. We conducted our review from March 2005 through April 2006 in accordance with generally accepted government auditing standards. Election Reform: Nine States’ Experiences Implementing Federal Requirements for Computerized Statewide Voter Registration Lists. GAO-06-247. Washington, D.C.: February 7, 2006. Elections: Views of Selected Local Election Officials on Managing Voter Registration and Ensuring Eligible Citizens Can Vote. GAO-05-997. Washington, D.C.: September 27, 2005. Elections: Federal Efforts to Improve Security and Reliability of Electronic Voting Systems Are Underway, but Key Activities Need to be Completed. GAO-05-956. Washington, D.C.: September 21, 2005. Elections: Additional Data Could Help State and Local Elections Officials Maintain Accurate Voter Registration Lists. GAO-05-478. Washington, D.C.: June 10, 2005. Department of Justice’s Activities to Address Past Election-Related Voting Irregularities. GAO-04-1041R. Washington, D.C.: September 14, 2004. 
Elections: Electronic Voting Offers Opportunities and Presents Challenges. GAO-04-975T. Washington, D.C.: July 20, 2004. In addition to the individual named above, George F. Poindexter; Connie W. Sawyer, Jr.; Margaret Holihan; Jennifer Thomas; Terry Richardson; Amanda Miller; Cheryl Weissman; and Julia Matta made key contributions to this report.
The narrow margin of victory in the 2000 presidential election raised concerns about the extent to which members of the military, their dependents, and U.S. citizens living abroad were able to vote via absentee ballot. In September 2001, GAO made recommendations to address variances in the Department of Defense's (DOD) Federal Voting Assistance Program (FVAP). Along with the military services and the Department of State (DOS), FVAP is responsible for educating and assisting military personnel and overseas citizens in the absentee voting process. Leading up to the 2004 presidential election, Members of Congress raised concerns about efforts under FVAP to facilitate absentee voting. Because of broad Congressional interest, GAO initiated a review under the Comptroller General's authority to address three questions: (1) How did FVAP's assistance efforts differ between the 2000 and 2004 presidential elections? (2) What actions did DOD and DOS take in response to prior GAO recommendations on absentee voting? and (3) What challenges remain in providing voting assistance to military personnel and overseas citizens? This review is one of several GAO reviews related to various aspects of the 2004 election. GAO provided DOD and DOS with a draft of this report for comment, and they both generally concurred with the report's contents. For the 2004 presidential election, FVAP expanded its efforts beyond those taken for the 2000 election to provide military personnel and overseas citizens tools needed to vote by absentee ballot. With 13 full-time staff members and a fiscal year 2004 budget of about $6 million, FVAP distributed more voting materials and modified its Web site, which includes absentee voting information, and made it accessible to more military and overseas citizens worldwide. It also added an online voting assistance training program and an online version of the Federal Write-in Absentee Ballot. 
FVAP also conducted 164 voting training workshops for military servicemembers and overseas citizens, as compared to 62 workshops for the 2000 election. In its 2005 report on the effectiveness of its federal voting assistance program, on the basis of its postelection surveys, FVAP attributed higher 2004 voter participation rates to the effective implementation of its voter outreach program. However, because of low survey response rates, GAO has concerns about FVAP's ability to project changes in voter participation rates between the 2000 and 2004 elections. In 2001, GAO recommended that DOD and DOS revise their voting guidance, improve program oversight, and increase command emphasis to reduce the variance in voting assistance to military servicemembers and overseas citizens. DOD and DOS took actions to implement these recommendations; however, absentee voting assistance continued to vary. Voting Assistance Officers (VAOs) provide voting assistance as a collateral duty. Because of competing demands on VAOs and differences in their understanding and interest in the voting process, some variance in absentee voting assistance may always exist. DOD and DOS plan to continue their efforts to improve absentee voting assistance. Despite the efforts of FVAP, DOD, and DOS, GAO identified three challenges that remain in providing absentee voting assistance to military personnel and overseas citizens. One challenge involves simplifying and standardizing the time-consuming, multistep absentee voting process, which has different requirements and time frames established by each state. In attempting to simplify and standardize the absentee voting process, FVAP continued working with the states through its Legislative Initiatives program to facilitate absentee voting for military servicemembers and overseas citizens. Another challenge involves efforts to implement an electronic registration and voting system given persistent issues regarding security and privacy. 
For the 2004 election, FVAP developed an electronic voting experiment that it planned to make available to the entire military, their dependents, and overseas citizens; however, the experiment was never implemented because of security concerns publicly raised by four of the ten members of a peer review group. A challenge for DOS is having the ability to reach all overseas citizens. Overseas citizens are not required to provide contact information to an embassy or consulate. If these citizens do not provide appropriate contact information, DOS cannot proactively reach these overseas voters.
The NFIP seeks to minimize human suffering and flood-related property losses by making flood insurance available on reasonable terms and encouraging its purchase by people who need flood insurance protection—particularly those living in flood-prone areas known as special flood hazard areas (SFHAs). Prior to the flood insurance program’s inception, private insurance companies generally did not offer coverage for flood disasters because of the high risks involved and the problem of adverse selection: homeowners at the greatest risk of flooding are the most likely to purchase flood insurance. The National Flood Insurance Act of 1968 (P.L. 90-448) established the program to identify SFHAs, make flood insurance available to property owners living in communities that joined the program, and encourage floodplain management efforts to mitigate flood hazards and thereby reduce federal expenditures on disaster assistance. In order for a community to join the program, any structures built within an SFHA after it has been identified as such must be built to the program’s building standards, which are aimed at minimizing flood losses. FEMA estimates that its implementation of the program’s standards for new construction is now saving about $1 billion annually in flood damage avoided. The 1973 Flood Disaster Protection Act (P.L. 93-234) required flood insurance for borrowers whose mortgages are on structures located in SFHAs in participating communities and are originated, guaranteed, or serviced by federal agencies or federally regulated institutions. Subsequently, the National Flood Insurance Reform Act of 1994 (P.L. 103-325) directed federal regulators of lending institutions to assess penalties on any regulated lending institution found to have a pattern or practice of violating the act. Violations include failing to require flood insurance coverage for properties in SFHAs used to secure mortgage loans. 
In addition, the act mandated that regulated lenders (1) purchase flood insurance for borrowers who are required to have it but fail to purchase it and (2) escrow funds for flood insurance premiums if other funds are also escrowed. The owners of properties in SFHAs with no mortgages or properties with mortgages held by unregulated lenders are not legally required to buy flood insurance. Because risk levels are the same for homeowners in SFHAs regardless of whether flood insurance is required, FEMA encourages all homeowners residing in SFHAs to buy flood insurance. FEMA’s Mitigation Directorate maintains and updates flood insurance rate maps (FIRM), which identify the geographic boundaries of SFHAs. FIRMs are derived from base maps, which show the basic geographic and political boundaries of a community. Various mapping technologies are used to establish flood elevations on FIRMs and to delineate the boundaries of SFHAs. Base maps are generally obtained from local communities or the U.S. Geological Survey (USGS). While flood maps should be updated as necessary to remain accurate, approximately 63 percent of the nation’s 100,000 flood maps are at least 10 years old. Consequently, the Mitigation Directorate has developed a Flood Map Modernization Plan to update the maps and convert them to a digital format. Digital mapping processes, along with other technologies, will improve the collection of data on structures in SFHAs and allow for the electronic distribution of these data through the Internet and on CD-ROM. In accordance with the Government Performance and Results Act (GPRA), FEMA has established various goals and strategies to determine the success of the NFIP in fulfilling its mission to minimize property losses after flood disasters and to reduce losses from future disasters. According to FEMA officials, these goals allow the agency to monitor its progress in meeting its performance goals and address key outcomes. 
While the results achieved under these goals—increasing the number of insurance policies in force and reducing flood-related losses—provide valuable insights into how well the NFIP’s mission is being accomplished, they do not gauge participation in the program by the most vulnerable residents—those living in SFHAs. Participation rates—the percentage of structures in SFHAs that are insured—are an effective way to measure the results of the NFIP because they are objective, measurable, and quantifiable. By using participation rates to measure performance, FEMA could assess other program results, such as the extent to which the most vulnerable residents are participating in the program; determine whether the financial risk to the government from floods is increasing or decreasing; and focus marketing and compliance activities to maximize program participation in SFHAs. Like other federal agencies, FEMA is mandated under GPRA to develop annual performance plans that link the agency’s long-term strategic planning to its daily activities. FEMA established three performance goals that pertain to the flood insurance program. These goals include reducing flood losses, increasing the number of flood insurance policies sold, and improving the program’s financial status. These endeavors are part of FEMA’s mission to protect lives and reduce losses from future disasters through insurance and mitigation efforts. Table 1 describes FEMA’s fiscal year 2002 Performance Plan goals for the NFIP and the strategies by which the agency intends to accomplish these goals. In developing annual performance goals, agencies should focus on the results they expect their programs to achieve—the differences the programs will make in people’s lives. The three NFIP performance goals address the program’s objectives of minimizing human suffering and property losses caused by floods. 
However, opportunities are developing for FEMA to obtain valuable information about the program’s success through analysis of the rate of participation for those communities involved in the program. The participation rate is obtained by dividing the number of properties located in SFHAs with flood insurance by the total number of properties in these SFHAs. This information would allow FEMA to assess whether the program is penetrating those areas most at risk of flooding, determine whether the financial risks to the government in these areas are increasing or decreasing, and better target marketing efforts to increase participation. In other words, through analysis of participation rates, FEMA would be better able to maximize the effectiveness and efficiency of the program in protecting lives and reducing financial losses. FEMA currently collects data on the number of active flood insurance policies. Its goal is to increase the number of NFIP policies in force by 5 percent annually. While FEMA tracks the growth in the number of active policies, its estimates of the number of households located in SFHAs without flood insurance coverage vary. A DeKalb County, Georgia, study illustrates why participation-rate data can be a more useful measure of the program’s success than a tally of policies in force. According to the study, the number of policies in force in DeKalb County grew from the previous year by 13 percent in 1998 and by 17 percent in 1999 but fell to 3 percent in 2000. In fiscal year 1999, DeKalb County officials conducted a study of NFIP participation. This study was initiated to provide information about flood hazards, prevention, and mitigation. Local officials made flood-zone determinations on every structure in the county using FIRMs, tax maps, and limited geographic information system technology. This effort resulted in the creation of an electronic database of the addresses of all structures in the SFHAs. 
According to the data collected, there were 17,078 buildings in the SFHAs, of which 3,145, or 18 percent, had flood insurance. Thus, while an analysis of the number of policies in force showed significant growth in 1998 and 1999, these data did not capture the fact that fewer than 20 percent of the homeowners in DeKalb County’s SFHAs had flood insurance. FEMA’s policy growth target also does not take into account whether the policy growth is greater or less than the population change in DeKalb County’s SFHAs. For example, a 5-percent increase in the number of policies at a time when the SFHA’s population is increasing by 20 percent may not represent program success for DeKalb County or any other community participating in the NFIP. Nor does the policy growth target take into account changes that occur when flood maps are updated, which could result in the addition of some structures to an SFHA. Such information is important for communities like DeKalb County, where new maps took effect this month. Knowledge of DeKalb County’s participation rate would also help FEMA better market its flood insurance program there. As noted in table 1, marketing and educational outreach efforts are two of FEMA’s strategies to increase the number of policies in force. A 5-percent increase in the number of policies might lead to the erroneous conclusion that DeKalb County did not need additional marketing or outreach campaigns to increase public awareness of flood insurance. A participation rate of 18 percent, however, might indicate that, among other things, additional marketing and educational outreach was necessary for DeKalb County residents. 
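The participation-rate calculation described above, and the way a policies-in-force growth target can mask low participation, can be illustrated with a short sketch. The DeKalb County figures are taken from the study cited in the text; the function names are hypothetical and not part of any FEMA system.

```python
# Illustrative sketch of the participation-rate measure discussed above.
# Figures come from the DeKalb County, Georgia, fiscal year 1999 study.

def participation_rate(insured_structures, total_structures):
    """Share of structures in an SFHA that carry flood insurance."""
    return insured_structures / total_structures

# DeKalb County: 3,145 insured of 17,078 buildings in SFHAs.
rate = participation_rate(3_145, 17_078)
print(f"{rate:.0%}")  # about 18 percent

# A 5-percent policy growth target can mask declining participation
# when the SFHA population grows faster than the policy count.
policies_growth = 1.05    # policies in force grow 5 percent
population_growth = 1.20  # SFHA population grows 20 percent
print(policies_growth / population_growth < 1)  # True: the rate falls
```

The second check makes the report's point concrete: meeting the policy-growth goal says nothing about whether coverage is keeping pace with growth inside the flood-prone areas themselves.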
Increasing the share of structures in SFHAs with flood insurance would provide added income to the NFIP’s insurance fund and decrease the financial burden that flooding places on the federal government and the citizens who are victims of floods when uninsured structures suffer flood damage and may qualify for other forms of federal disaster relief. Moreover, increased participation would provide a broader base of policyholders so that the primary objective of insurance—the pooling of risk—would be more fully realized. FIA officials agree that program participation rates are a useful measure that can provide insights into the program’s success, including the effectiveness of marketing. The data currently available to determine flood insurance participation rates within SFHAs are not always accurate or complete. While FIA maintains data on the number of flood insurance policies, the information it has on the total number of structures within SFHAs is poor, according to FIA’s Acting Administrator. FIA acknowledges weaknesses in its estimates of the total number of structures within SFHAs nationwide and is taking steps to obtain more accurate data. New technologies are also becoming available that may be used to estimate the number of structures within floodplains, thereby increasing the reliability of the data needed to determine participation rates. Similarly, local communities are increasingly using these technologies to obtain a more reliable count of the number of structures within SFHAs. While the cost of obtaining more reliable data is not fully known, FEMA is engaging in partnerships to test new technologies that will allow it to share the costs with local communities and other federal agencies. Two numbers are needed to determine participation rates in the NFIP—the number of insured structures and the total number of structures located within SFHAs.
When flood insurance policies are sold, private insurance companies that have agreements with FIA to sell NFIP policies collect information on the insured structure, such as whether it is located within an SFHA, its address, and the name of the mortgage lender. They report this information to FIA, which maintains a database on the number of flood insurance policies in force, including the number in SFHAs. FEMA also maintains a database containing estimates of the number of structures within SFHAs. However, FIA’s Acting Administrator acknowledges that the data at both the national and local community levels are of varying quality. FEMA has been unable to identify one definitive source of information on the number of structures within SFHAs but is taking steps to obtain more reliable information. FEMA collects data for its Biennial Report on the number of structures within SFHAs from local communities participating in the NFIP. Every 2 years, participating communities report on, among other things, the number of structures within SFHAs as well as within the entire community. However, communities do not always report or provide accurate information. According to a Mitigation Directorate official, about 10 percent of the communities do not report any information. Consequently, older data on the number of structures in these communities are used. Moreover, the communities that do report such information do not always update or report accurate data, since they use different ways to determine the number of structures within SFHAs. For example, some communities have submitted reports showing no increase in the number of structures, but significant increases in population. In other cases, communities reported more structures within the SFHA than within the entire community. According to this official, smaller rural communities may rely on local officials to use their personal knowledge or conduct drive-bys to estimate the number of structures within the SFHA.
In contrast, large urban areas typically use technologies such as geographic information systems (GIS) to estimate the number of structures within the SFHA. FIA officials also told us they have information on the number of structures in SFHAs from other databases, but the accuracy of these data is also low. For example, FEMA has a database that estimates the number of structures in SFHAs nationally at six to eight million. However, FIA officials told us that these data are based on the assumption that there is a uniform distribution of structures in SFHAs. Other agencies, such as the U.S. Bureau of the Census, maintain data on street names, addresses, and locations, but their data are not in a format that is useful for determining the number of structures in SFHAs. Similarly, data on the total number of structures cannot be captured from FIRMs, which FEMA currently uses to identify SFHAs, because FEMA’s Mitigation Directorate does not include data on structures on these maps. Existing FIRMs identify only the boundaries of SFHAs, streams, and selected roads. Furthermore, FEMA’s Mitigation Directorate does not use FIRMs to identify structures because (1) FEMA’s regulations on floodplain mapping do not require the depiction of structures on FIRMs; (2) the map scales used for FIRMs are too small to legibly show structures, and enlarging the scales would be cost prohibitive; and (3) the information available on the location of structures is inconsistent. Four studies conducted between 1997 and 2000 that were designed to examine compliance with the mandatory purchase of flood insurance provide some information on participation rates within SFHAs. One study was conducted by FEMA’s Inspector General (IG), one was sponsored by FEMA, and private companies conducted the remaining two. Each of the studies was limited to a few communities; none produced nationally representative results or included all of the structures in the appropriate SFHAs in their analysis. 
See table 2 for a synopsis of each of these four studies. While these studies provide some useful information, they are of limited value in understanding the percentage of structures in SFHAs covered by flood insurance. However, several current mapping technologies can be used to facilitate the collection of data on the number of structures in SFHAs. These technologies can be used not only to show buildings and houses on maps but also to pinpoint the exact location of such structures. Combining these technologies with the digital flood maps that FEMA is already producing would allow for increased accuracy in the identification of structures within SFHAs and the calculation of participation rates. For example, USGS has produced computer-generated images of aerial photographs—that is, pictures taken from airplanes of the land below—for about 74 percent of the United States. These images are called digital orthophoto quadrangles (DOQ), and essentially combine the characteristics of a photograph with the geometric qualities of a map. FIA currently uses these images to produce some of its flood maps. While DOQs show pictures of structures, each structure must be digitized in order to be identified by a geographic information system. Local communities are also beginning to use these emerging technologies, although to widely differing degrees. In DeKalb County, Georgia, local officials have purchased DOQs of the county’s 270 square miles from a contractor and digitized the structures in the photos. The county plans to geographically reference each of the structures to create a base map that shows the accurate location of structures. The county can then lay digital flood maps over its base maps to determine the number of structures in the local SFHAs. According to county officials, once this technology is in place, it will be easy to determine the number of structures in local SFHAs. NFIP participation rates will also be easy to calculate.
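The overlay step that DeKalb County describes, laying a digital flood map over a base map of digitized structures, reduces in its simplest form to a point-in-polygon test for each structure. The sketch below is a minimal illustration of that idea; the SFHA boundary and structure coordinates are invented, and a real geographic information system would also handle map projections, multi-part floodplain polygons, and boundary edge cases.

```python
# Minimal sketch of counting structures inside an SFHA boundary.
# The coordinates below are invented for illustration only.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon
    (given as an ordered list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical SFHA boundary (a simple quadrilateral) and
# digitized structure locations from a base map.
sfha = [(0, 0), (10, 0), (10, 5), (0, 5)]
structures = [(2, 2), (8, 4), (12, 1), (5, 6)]

in_sfha = [p for p in structures if point_in_polygon(p[0], p[1], sfha)]
print(len(in_sfha), "of", len(structures), "structures fall inside the SFHA")
```

Dividing the count inside the boundary by the count of insured structures from FIA's policy database would then yield the participation rate for that community.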
A DeKalb County official told us that this digitized mapping technology has many practical applications for the county, including engineering, planning and zoning, crime analysis, and disaster recovery, and it will allow maps to be generated for presentations at public hearings and other meetings. FEMA officials told us that similar efforts are occurring in Charlotte, North Carolina, and Louisville, Kentucky. A 1998 survey by the National States Geographic Information Council and the Federal Geographic Data Committee found that 69 percent of the GIS data users from state, regional, and local governments responding to its survey create, update, integrate, and distribute digital geographic data. This indicates that a number of localities have some technology available to create digital base maps and that the potential exists for localities to use such technology to identify structures within SFHAs. However, FIA officials told us that the number of communities that currently have detailed data available is small. They also told us that as more FIRMs are produced digitally and more communities improve the ability of their mapping technologies to collect data on properties and buildings, measuring the number of structures located within SFHAs will become easier and more efficient. The costs of using technology to accurately identify the number of structures in SFHAs are not fully known. In March 2000, FEMA estimated the total costs to modernize flood maps from fiscal year 2001 through fiscal year 2007 to be $773 million above expected annual funding levels, with digitization and map maintenance costs alone totaling $156 million. The modernization of maps includes converting paper flood maps to a digital format, which is the first step in using available technology to identify the number of structures within SFHAs.
FEMA continues to refine the cost estimate as it updates its projection of needs and improves its cost data, including the impact on costs of new technologies and of partnerships with communities and other local, regional, state, and federal agencies. The partnerships that FEMA has developed with state, local, and other federal agencies should reduce some of its costs to modernize its flood maps. Along with enabling the agency to share some of the costs to modernize flood maps, the partnerships will facilitate the development of technology that can be used to estimate the number of structures within SFHAs. For example, through FEMA’s Cooperating Technical Partners initiative, 62 partnerships had been developed with local communities as of September 2000. Through this effort, communities, states, and regional agencies perform all or portions of data collection and mapping tasks to create their own FIRMs. An FIA official told us that the cost benefits to FEMA from this effort have not yet been determined. FEMA has also entered into partnerships with other federal agencies to fund cooperatively the production of DOQs and high-accuracy elevation data. As discussed previously, DOQs provide detailed images of land, including the location of houses. Elevation data are useful because they help make flood maps more accurate. Both of these technologies can be manipulated with geographic information systems to more accurately identify the number of structures within SFHAs. While FIA has factored in the costs of cooperatively producing DOQs with other agencies in its mapping modernization cost estimate, funding arrangements to produce elevation data with other federal agencies have not yet been determined. Program participation rates are an effective way to gain insights into and improve the performance of the NFIP. Incorporating participation rates into FEMA’s goals can provide results that are in line with GPRA—objective, measurable, and quantifiable.
While it will be many years before the data needed to determine national participation rates become available, some communities are already collecting such data. These communities are using technologies that allow them to count the number of structures in SFHAs, and some are using these technologies to determine participation rates. As our preceding discussion of DeKalb County, Georgia, demonstrates, such community-level data can provide FIA with useful information on the degree of participation by residents living in SFHAs. In addition to our work on the NFIP, we have two other studies under way involving FEMA. The first responds to your request, in the September 16, 1999, Senate Report (106-161) accompanying the fiscal year 2000 appropriations bill, that we evaluate FEMA’s processes for ensuring that disaster assistance funds are used effectively and efficiently. This report, which we expect to issue this summer, will provide information on (1) the adequacy of the criteria FEMA employs to determine if a presidential disaster declaration is warranted and the consistency with which FEMA applies these criteria and (2) the policies and procedures FEMA has developed to ensure that individual Public Assistance Program projects in disaster areas meet eligibility requirements. We also plan to issue a report in late summer that looks at all federal agencies involved in combating terrorism—including FEMA—with a specific emphasis on (1) the overall framework for managing federal agencies’ efforts; (2) the status of efforts to develop a national strategy, plans, and guidance; (3) the federal government’s capabilities to respond to a terrorist incident; (4) federal assistance to state and local governments to prepare for an incident; and (5) the federal structure for developing and implementing a strategy to combat cyber-based terrorism. For further information on this testimony, please contact JayEtta Hecker at (202) 512-2834.
Individuals making key contributions to this testimony included Martha Chow, Lawrence Cluff, Kerry Hawranek, Signora May, John McGrail, Lisa Moore, Robert Procaccini, and John Strauss.

Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001).

Disaster Relief Fund: FEMA’s Estimates of Funding Requirements Can Be Improved (GAO/RCED-00-182, Aug. 29, 2000).

Observations on the Federal Emergency Management Agency’s Fiscal Year 1999 Performance Report and Fiscal Year 2001 Performance Plan (GAO/RCED-00-210R, June 30, 2000).

Disaster Assistance: Issues Related to the Development of FEMA’s Insurance Requirements (GAO/GGD/OGC-00-62, Feb. 25, 2000).

Flood Insurance: Information on Financial Aspects of the National Flood Insurance Program (GAO/T-RCED-00-23, Oct. 27, 1999).

Flood Insurance: Information on Financial Aspects of the National Flood Insurance Program (GAO/T-RCED-99-280, Aug. 25, 1999).

Disaster Assistance: Opportunities to Improve Cost-Effectiveness Determinations for Mitigation Grants (GAO/RCED-99-236, Aug. 4, 1999).

Disaster Assistance: Improvements Needed in Determining Eligibility for Public Assistance (GAO/RCED-96-113, May 23, 1996).

Flood Insurance: Financial Resources May Not Be Sufficient to Meet Future Expected Losses (GAO/RCED-94-80, Mar. 21, 1994).
This testimony discusses the preliminary results of GAO's ongoing review of the National Flood Insurance Program (NFIP), which is run by the Federal Emergency Management Agency's (FEMA) Federal Insurance Administration (FIA) and Mitigation Directorate, a major component of the federal government's efforts to provide flood assistance. This program creates standards to minimize flood losses. GAO found that FEMA has several performance goals to improve program results, including increasing the number of insurance policies in force. Although these goals provide valuable insight into the degree to which the program has reduced flood losses, they do not assess the degree to which the most vulnerable residents--those living in flood-prone areas--participate in the program. Capturing data on the number of uninsured and insured structures in flood-prone areas can provide FEMA with another indication of how well the program is penetrating those areas with the highest flood risks, whether the financial consequences of floods in these areas are increasing or decreasing, and where marketing efforts can better be targeted. However, before participation rates can be used to measure the program's success, better data are needed on the total number of structures in flood-prone areas. FIA tracks the number of insurance policies in these areas, but data on the overall number of structures are incomplete and inaccurate. Some communities are developing better data on the number of structures in flood-prone areas. FEMA is also trying to improve the quality of its data on the number of structures in flood-prone areas and is working to develop new mapping technologies that could facilitate the collection of such data. The cost of this new technology is not fully known, but the expense will be shared among federal, state, and local agencies.
The U.S. government classifies information that it determines could damage the national security of the United States if disclosed publicly. Currently, all classified information falls under two authorities, one for national defense and foreign relations, the other for nuclear weapons and technology. Beginning in 1940, classified national defense and foreign relations information has been created, handled, and safeguarded in accordance with a series of executive orders. Executive Order 12958, Classified National Security Information, as amended, is the most recent. It establishes the basis for designating National Security Information (NSI). It demarcates different security classification levels, the unauthorized disclosure of which could reasonably be expected to cause exceptionally grave damage (Top Secret), serious damage (Secret), or damage (Confidential). It also lists the types of information that can be classified and describes how to identify and mark classified information. In 2005, about one quarter of DOE classification decisions concerned NSI. The advent of nuclear weapons during World War II led to a new category of classified information. In 1946, the Congress enacted the Atomic Energy Act, which established a system for governing how U.S. nuclear information is created, handled, and safeguarded. Nuclear information categorized as Restricted Data (RD) or Formerly Restricted Data (FRD) is not governed by Executive Order 12958. RD is defined as data concerning the design, manufacture, or utilization of atomic weapons; production of special nuclear material; and use of special nuclear material in the production of energy. This includes information about nuclear reactors that produce plutonium and tritium, radioactive isotope separation techniques, and the quantities of nuclear materials involved in these processes. FRD relates primarily to data regarding the military use of nuclear weapons.
Examples of FRD include weapons stockpile data, weapon yields, the locations of nuclear weapons, and data about weapons safety and storage. Like NSI, classified nuclear information also has three classification levels: Top Secret, Secret, or Confidential. Naval Nuclear Propulsion Information (NNPI) is an exceptional category, which may fall under either of the two classification authorities. NNPI is deemed by both DOE and the Department of Defense (DOD) to be sufficiently sensitive to merit special protections and may be classified under the Atomic Energy Act or Executive Order 12958, depending on its subject and details. Two categories of nuclear information can be withheld from the public without being classified: Unclassified NNPI and Unclassified Controlled Nuclear Information (UCNI). Unclassified NNPI and UCNI are information the government considers sufficiently sensitive to withhold from public release, but not so sensitive as to warrant designation as RD, FRD, or NSI. UCNI is a category created under the authority of the Atomic Energy Act, which enables DOE officials to share information with state and local law enforcement and emergency services personnel who, while lacking security clearances, may have a legitimate need to know operational details about, for example, planned shipments of special nuclear materials. According to the current executive order, documents containing only NSI must be “portion marked,” that is, classified paragraph by paragraph. For example, a document containing NSI may have paragraphs classified as Top Secret, Secret, or Confidential, along with others that are unclassified. However, documents containing any RD or FRD are classified in their entirety at the level of the most sensitive information in the document. Portion marking of documents containing RD and FRD is not required by the Atomic Energy Act and is discouraged by DOE policy.
Executive Order 12958, as amended, states that NSI shall be declassified as soon as it no longer meets the standards for classification. The point at which information is to be declassified is set when the decision is made to classify it, and it is linked to an event, such as a completed mission, or to a period of time. Classified records that are older than 25 years and have permanent historical value are automatically declassified unless an exemption is granted because their contents still remain sensitive and their release could harm national security. Agencies have adopted processes to facilitate declassification in compliance with the executive order. Unlike documents containing NSI, documents containing RD or FRD are not reviewed automatically for possible declassification. The reason for this is that these two categories are mostly scientific and technical and may not become less sensitive with the passage of time. In fact, such data may be useful to nations and terrorist groups that are trying to build nuclear weapons. At a time of increased concern about nuclear proliferation, some of the oldest and simplest nuclear technology can be useful for making weapons of mass destruction. For this reason, documents about nuclear weapons and technologies from the 1940s and 1950s remain especially sensitive and worthy of protection. DOE implements the executive order and classification statutes through departmental regulations, directives, and extensive use of classification guides. DOE’s directive, Identifying Classified Information, is the department’s comprehensive guide to classifying, declassifying, marking, and protecting information, documents, and material. The directive also establishes policies and procedures, such as departmentwide training and certification requirements for staff authorized to classify or declassify information, and for periodic self-assessments.
Classification guides are manuals specifying precisely which DOE information must be classified, how it should be categorized (NSI, RD, or FRD), and at what level (Top Secret, Secret, or Confidential) it should be protected. DOE has a detailed and comprehensive set of classification guides that are integral to efficient functioning of the department’s classification activities. The department limits the use of “source documents” for the purpose of making classification decisions. Source documents may be used to classify documents containing NSI, but only when there is no guidance available. For example, if a DOE classifier is evaluating a new document with the same information found in another document already classified as Secret, then this new document may also be classified as Secret. RD and FRD documents can never be used as source documents. DOE’s Office of Classification’s systematic training, comprehensive guidance, and rigorous oversight programs had a largely successful history of ensuring that information was classified and declassified according to established criteria. DOE’s training requirements and classification guidance are essential internal controls that provide a strong framework for minimizing the risk of misclassification. However, since responsibility for classification oversight was shifted from the Office of Classification to the Office of Security Evaluations in October 2005, the pace of oversight was interrupted—creating uncertainty about how oversight will be performed and whether it will continue to be effective. Systematic training requirements are an important element of DOE’s framework for maximizing the proper classification of documents. Only staff who have successfully completed training are authorized to classify or declassify documents. Staff must be recertified as classifiers and/or declassifiers every 3 years to retain their authority.
Staff are typically trained as “derivative classifiers” and, in some cases, as “derivative declassifiers” as well. They are limited in their authority to those areas in which they have special knowledge and expertise and are only authorized to classify (or declassify) documents “derivatively”—that is, only if the document in question contains information a DOE or other U.S. government agency classification guide specifically requires be classified or declassified. There are currently about 4,600 derivative classifiers in DOE, nearly all of whom do classification work only as a collateral duty. For example, most derivative classifiers in DOE are scientists, engineers, or other technically trained people who work in programs or areas involving classified information that need staff who can properly classify the documents these programs produce. Relatively few DOE staff (just 215 as of May 2006) are authorized to declassify documents. Because a declassified document may become publicly available, derivative declassifiers are among the most experienced derivative classifiers. Only original classifiers, of which there are currently 25 throughout the DOE complex, are authorized to classify previously unclassified information. All DOE original classifiers are either very senior, full-time classification professionals, such as the director and deputy director of the Office of Classification, or one of the department’s top-level political appointees, such as the Administrator, National Nuclear Security Administration. DOE has developed an extensive collection of more than 300 classification guides, or manuals, specifying precisely which DOE information must be classified, how it should be categorized, and at what level (Top Secret, Secret, or Confidential) it should be protected. The Office of Classification oversees the regular updating of all classification guides used in DOE and must ultimately approve the use of every guide. 
DOE prohibits classification decisions based on source documents for documents containing RD and FRD and permits their use only when no guidance is available for documents containing NSI from other federal agencies. The Information Security Oversight Office considers the use of classification guides to be a best practice because they provide a singular, authoritative voice that is less open to individual interpretation or confusion than source documents, so using these guides is less likely to result in errors. According to the Information Security Oversight Office, DOE’s use of classification guides is among the most extensive in the federal government. Classification guides are integral to the efficient functioning of the department’s classification program. Some classification guides are more general in nature, such as those dealing with physical security, and are used widely throughout DOE. Others, known as “local guides,” are used at a few or even a single site because they provide guidance specific to a single DOE program or project. For example, a classification guide used by contractors working on a decontamination and clean-up project at a site in Oak Ridge, Tennessee, provides specific guidance on nuclear waste and storage unique to this site. DOE has also implemented an extensive and rigorous oversight program. From 2000 through 2005, the Office of Classification and its predecessor offices conducted on-site inspections of classification activities at 34 DOE field offices, national laboratories, and weapons manufacturing facilities. In calendar years 2004 and 2005, the Office of Classification conducted an average of 10 oversight inspections a year. Classification activities were evaluated in depth in eight different functional areas, including site-provided classification training, self-assessment efforts, and overall senior management support for (and awareness of) classification activities.
To this end, before a team of 3 to 10 Office of Classification inspectors arrived, it would send the site’s classification officer a “data call” requesting detailed and specific answers to dozens of questions about the procedures and practices of the site’s classification program. For example, to ascertain how effectively classification guidance was being used, requests were made for information about what guidance was in use at the site; the names of authorized classifiers who had guides; whether there were any local (site-specific) guides in use; and, if so, when they were last validated by Office of Classification officials. Similarly detailed information was requested about each of the other classification program elements. Having such detailed information in hand prior to arrival at the site allowed inspection teams to undertake a comprehensive evaluation in just 2 to 5 days because they could focus more on validating the information provided in the data call than on undertaking the time-consuming task of gathering data themselves. The Office of Classification staff’s expertise in classification matters is augmented with subject area experts. For example, to ensure the inspection team had adequate expertise to make valid assessments of classification decisions about nuclear weapons design at Los Alamos National Laboratory, a staff member with nuclear weapons design experience was assigned to the team. Moreover, in many cases, members of the inspection team had more than 20 years of classification experience. As a result of the extensive information provided by the data call and the level of experience of the inspection team, the team generally submitted a draft inspection report to the site’s classification officer before leaving. Under DOE policy, any findings requiring immediate correction resulted in a corrective action plan, which had to be completed within 60 days of the inspection.
DOE officials told us progress on implementing corrective action plans was reported to the Office of Classification quarterly. In September 2005, the Information Security Oversight Office reviewed DOE’s classification program just prior to the shift in responsibility for classification oversight. Officials at the Information Security Oversight Office found DOE’s program to be much better than that of the average federal agency. They singled out DOE’s training program and extensive use of classification guidance as especially impressive. One official called DOE’s program for ensuring that all staff authorized to classify and declassify documents were recertified every 3 years “outstanding.” Another official called DOE’s extensive use of classification guides a “best practice.” Overall, Information Security Oversight Office officials were impressed with DOE’s classification program, noting that robust oversight is a very important part of an effective program for managing classified information. Since responsibility for classification oversight was shifted from the Office of Classification to the Office of Security Evaluations, the pace of oversight was interrupted—creating uncertainty about how oversight will be performed and whether it will continue to be effective. The Office of Security Evaluations is the DOE office responsible primarily for the oversight of physical security at DOE sites, with a special emphasis on Category 1 sites (sites containing special nuclear materials). Since October 2005, the Office of Security Evaluations has completed one inspection of two offices at the Pantex Site in Texas, and another inspection of four offices at the Savannah River Site is under way. In April 2006, Office of Security Evaluations officials provided us plans for performing additional oversight inspections for the remainder of 2006. These plans included inspections evaluating classification activity at eight DOE offices at three additional sites.
Classification oversight has been incorporated into larger oversight efforts on physical security at DOE sites. Classification oversight ceased from October 2005 until February 2006, when the Office of Security Evaluations began its inspection of two offices at the Pantex Plant, a nuclear weapons manufacturing facility in Texas. Before the shift in responsibility, DOE officials did not conduct any risk assessment of the likely effects of the shift on the classification oversight program, for three reasons: (1) they did not consider the shift to be a significant organizational or management challenge because the upper-level management remained the same; (2) the Office of Security Evaluations would continue to draw on many of the same experienced Office of Classification staff who have been performing classification oversight for many years; and (3) responsibility for other key internal controls for managing classification activities, namely training and guidance, would remain with the Office of Classification. The director of the Office of Security Evaluations and the acting deputy director of the Office of Classification told us that the goal of shifting responsibility for classification oversight from one office to the other was to consolidate all oversight functions in one area. The idea arose in the course of a periodic reassessment of the organization of the Office of Security and Safety Performance Assurance—the larger organization of which these and several other offices are part—and a judgment by senior DOE management that one group should do all the oversight. The Office of Security Evaluations seemed the most logical place to locate classification oversight, according to senior DOE management. DOE officials also told us that the Office of Security and Safety Performance Assurance was not the only part of DOE affected by this drive to consolidate functions in single offices, and there was no intent to downgrade oversight. 
According to the Director of the Office of Security Evaluations, the procedures for conducting future oversight are still evolving—including the numbers of sites to be inspected and the depth of analysis to be performed. The office currently plans to evaluate classification activities at 14 offices within five DOE sites in calendar year 2006, integrating classification oversight into its regularly scheduled inspections of Category 1 sites along with inspections at a few non-Category 1 sites. The director of the Office of Security Evaluations said the goal is to visit each of DOE’s 10 Category 1 sites every 2 years. However, this schedule has recently been delayed because the office has been tasked by senior DOE management to perform security reviews in other areas of DOE operations. Because classification oversight is now a component within the much larger oversight agenda of the Office of Security Evaluations—one focused on the physical security of DOE’s most sensitive sites—it is uncertain whether classification oversight will have a lower priority than when it was solely an Office of Classification responsibility. However, if all of the visits planned for 2006 are completed, then the Office of Security Evaluations will be conducting oversight at a pace similar to what was done prior to October 2005. As classification oversight is now the responsibility of the Office of Security Evaluations—and will be reported as one component in a much larger report on the overall security of DOE sites—it is unclear whether the new format will have the same depth of analysis or be as comprehensive, detailed, and useful as the format used by the Office of Classification. The Office of Security Evaluations reports are longer and have a much higher profile with senior DOE management than reports by the Office of Classification. As such, they are written to convey information to a broader and less technically oriented audience. 
Each element of security is rated as “effective performance” (green), “needs improvement” (yellow), or “significant weakness” (red). To accommodate this shift, the format for reporting the results of inspections of classification activities has changed to fit into this larger, well-established Office of Security Evaluations reporting format. These reports have relatively brief executive summaries but are supplemented by several appendixes, one for each component of site security. The executive summary includes the highlights of the inspection, an overall evaluation of security at the site, the formal findings (that is, deficiencies uncovered), and a brief scope and methodology section (which includes a listing of the personnel participating in the inspection). It is uncertain whether the results of the inspection of classification activities will be included in the executive summary; inclusion may depend on whether the results are particularly noteworthy. Not all aspects of an inspection will be mentioned in the summary section, and most of what is reported on classification and other topics will be in their respective appendixes. The Office of Security Evaluations’ full report will be classified because it will contain information on the vulnerabilities in site security. However, according to the office’s director, the appendix on classification will likely be unclassified. Since the shift in responsibility, the Office of Security Evaluations has completed one classification inspection of two offices at the Pantex Site, and the new procedures for oversight are still evolving. It is uncertain whether the reporting on classification oversight will be as detailed, specific, and, ultimately, as useful as it was prior to the October 2005 shift in responsibility. 
While the overall reporting format for the Office of Security Evaluations reports is firmly in place, the director of the office told us that the details of how to assess the effectiveness of the classification program are still evolving. Initially, the Office of Security Evaluations plans to gather similarly detailed and comprehensive information from the sites it inspects using the same “data call” as the Office of Classification; the data call requests detailed and specific answers to dozens of questions about the procedures and practices of the site’s classification program. The director of the Office of Security Evaluations stressed—and the deputy director of the Office of Classification agreed—that they plan to have the information reported in the classification appendix written in language similar to that in Office of Classification reports, and that findings and recommendations for improvement will be conveyed in language no less specific and “actionable” than in the previous reports. Nonetheless, until the Office of Security Evaluations performs several classification inspections and establishes its own record of accomplishment in overseeing DOE classification activities, it is not clear whether oversight will be as effective as it was before the shift in responsibility. Without continued effectiveness, DOE classification activities could become less reliable and more prone to misclassification. On the basis of reviews of over 12,000 classified documents totaling nearly a quarter million pages at 34 sites between 2000 and 2005, DOE officials have found that very few documents are misclassified. Office of Classification inspectors found that 20 documents had been misclassified, an error rate of about one-sixth of 1 percent. At more than two-thirds of the sites (25 of 34), inspectors found no classification errors. The largest number of misclassified documents that inspectors found at any one site was five, at the Los Alamos National Laboratory in May 2005. 
Four of these documents were classified, but not at the proper level or category. A fifth document containing nuclear weapons information should have been classified but was unclassified and found in the laboratory’s technical library. (See table 1.) Most misclassified documents remained classified, just not at the appropriate level or category. Of greater concern would be documents that should be classified but mistakenly are not. Such documents may end up in libraries or on DOE Web sites where they could reveal sensitive RD and FRD to the public. Errors of this kind can be uncovered only through some form of oversight, such as the document reviews that occurred in preparation for, and during, Office of Classification inspections. For example, during an inspection at the Sandia National Laboratories in March 2005, Office of Classification inspectors reviewed more than 170 unclassified documents in the laboratory’s holdings and found 2 documents that contained classified information. Without systematic oversight, these kinds of errors are unlikely to be discovered and corrected. While DOE’s extensive document reviews provided depth and rigor to its oversight inspections, two notable shortcomings in this process were (1) the inconsistent way that inspectors gained access to the many documents they would review and (2) the failure to adequately disclose these procedures in their reports. At the six DOE sites we visited, the procedures that the Office of Classification inspection teams used to obtain documents varied widely. For example, at the Los Alamos National Laboratory, inspectors were granted unfettered access to any storage vault and library, and they themselves chose the documents for review. Once in the vault or library, inspectors used the document indexes or interviewed the librarians to decide which documents and topics were recently classified or declassified. 
The inspectors requested the documents of most interest, or they browsed in the collection and pulled files randomly from the shelves. By contrast, at the NNSA Service Center in Albuquerque, site officials selected documents from several different locations, and then inspectors chose from among them. Because Office of Classification inspectors could not always select their own samples, their independence was limited, which could undermine the credibility of their findings. Because DOE does not have a complete inventory of its classified documents, it cannot select a strictly random sample. Nonetheless, DOE officials acknowledged they could improve their selection procedures to make them more consistent and random. Furthermore, in the 34 inspection reports we analyzed, Office of Classification inspectors did not disclose to the reader key facts about how information was gathered, what limitations they agreed to, and how this affected their findings. According to Standards for Internal Control in the Federal Government, independent inspections should properly document and report on the processes they use in their evaluations. The Office of Classification’s reports provided no detail about how documents were chosen. Such detail would increase public confidence that DOE’s classification oversight is transparent and robust. Since the 1950s, DOE’s Office of Classification and its predecessor organizations have developed strong systems of internal controls for managing classified information. At the core of these systems are (1) DOE’s requirement that staff authorized to classify documents must complete training and be periodically recertified, (2) its comprehensive guidance, and (3) its program of regular and rigorous oversight to ensure that DOE sites are following agency classification policies. These training, guidance, and oversight programs have provided a proven framework that has contributed to DOE’s success in managing classified information. 
However, the recent reduction in oversight activity following a shift in responsibilities raises questions about whether this framework will continue to be as strong. If the oversight inspections planned for the remainder of 2006 are effectively completed, it will demonstrate a resumption of the pace of oversight conducted prior to October 2005. However, if these inspections are not completed, or are not as comprehensive, then the extent and depth of oversight will be diminished, and DOE classification activities may become less reliable and more prone to misclassification. In addition, by implementing more random selection procedures for identifying classified documents to review—and by disclosing these procedures clearly in its reports—DOE has the opportunity to assure both itself and the public that its oversight is, indeed, effective. DOE is the agency most responsible for safeguarding the nation’s nuclear secrets, and its classification and declassification procedures are especially vital to national security. At a time when risks of nuclear proliferation are increasing, it is imperative that DOE build on its past successes in order to continue to be effective. To help ensure that DOE classification activities remain effective and result in documents that are classified and declassified according to established criteria, we recommend that the Secretary of Energy take the following three actions: ensure that the classified information oversight program provides oversight to a similar number of DOE sites as it did before October 2005 and provides a similar depth of analysis; strengthen the review of classified documents by applying selection procedures that more randomly identify documents for review; and disclose the procedures used to select documents for review in future classification inspection reports. In commenting on a draft of this report, DOE agreed with the findings and recommendations of the report. 
DOE was pleased that its classification program is being recognized as particularly effective in protecting information vital to national security. However, while DOE agreed with our recommendation that steps be taken to ensure that the classification oversight program provide oversight to a similar number of sites at a similar depth of analysis, it asserted that it is in fact already taking the needed actions and has, overall, “retained the effective framework previously established by the Office of Classification.” Although we are encouraged by DOE’s efforts, until the agency establishes a record of accomplishment under the new organizational structure, it will not be clear whether oversight will be as effective as it has been in the past. DOE also concurred with our recommendations to strengthen the review of classified documents by applying selection procedures that more randomly identify documents for review and to disclose these procedures in future reports, and it outlined steps it will take to implement these two recommendations. Comments from DOE’s Director, Office of Security and Safety Performance Assurance, are reprinted in appendix II. DOE also provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to the Secretary of Energy; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
The Department of Energy (DOE) classifies and declassifies information under authorities granted by the Atomic Energy Act, first passed in 1946, and under presidential executive orders governing national security information. These authorities and corresponding implementing directives provide for three classification levels: Top Secret, Secret, and Confidential. DOE uses three categories to identify the different types of classified information: Restricted Data, Formerly Restricted Data, and National Security Information. In addition to classified information, certain types of unclassified information are sensitive and require control to prevent public release. The markings used and the controls in place depend on the statutory basis of the unclassified control system and vary in DOE, from Official Use Only information to Unclassified Controlled Nuclear Information. At a practical level, unclassified information is controlled or not controlled depending on its sensitivity, any overriding public interest requiring release, or operational considerations weighing the benefit of control against its cost (for example, when information must be shared with uncleared state or local government officials). The information presented below is a summary of the various levels and categories used by DOE to classify and control information. All classified information and documents are classified at one of three levels, listed in descending order of sensitivity: Top Secret (TS), Secret (S), or Confidential (C). Restricted Data (RD) is classified under authority of the Atomic Energy Act (AEA) of 1954, as amended. RD is defined in the AEA as all data concerning the design, manufacture, or utilization of atomic weapons and the production of special nuclear material. Examples include (1) production reactors and (2) isotope separation (gaseous diffusion, gas centrifuge, and laser isotope separation). RD also includes the use of special nuclear material in the production of energy. 
Examples include (1) naval reactors and (2) space power reactors. RD does not include information declassified or removed from the RD category. RD documents are not portion marked; an entire document is classified at the level of the most sensitive information contained in the document. Formerly Restricted Data (FRD) is classified under authority of the AEA of 1954, as amended. FRD is information that has been removed from the RD category because DOE and the Department of Defense have jointly determined that the information (1) now relates primarily to the military utilization of atomic weapons and (2) can be adequately safeguarded as defense information. Examples include weapon stockpile quantities, weapons safety and storage, weapon yields, and weapon locations. FRD documents are not portion marked. National Security Information (NSI) is classified under the authority of Executive Order 12958, as amended. NSI is information that pertains to the national defense or foreign relations of the United States and is classified in accordance with the current executive order as Top Secret, Secret, or Confidential. NSI documents may be classified up to a 25-year limit unless they contain information that has been approved for exemption from declassification under Executive Order 12958, as amended, and based on an approved declassification guide. For example, DOE treats certain nuclear-related information that is not RD or FRD, such as security measures for nuclear facilities, as exempt from declassification until such facilities are no longer in use. Many of these facilities have been in use for over 50 years. NSI documents are portion marked by paragraph. Confidential Foreign Government Information – Modified Handling Authorized (C/FGI-MOD): An agency must safeguard foreign government information under standards providing a degree of protection at least equivalent to that required by the government or international organization that furnished the information. 
If the FGI requires a level of protection lower than that for Confidential, the United States can, under Executive Order 12958 section 4.1(h), classify and protect it as C/FGI-MOD, which provides protection and handling instructions similar to those provided for United States Official Use Only information. Before C/FGI-MOD was created, the only legal way for such information to be controlled was at the Confidential level, which resulted in over-protection, increased security cost, and operational complexity. Each classified document must be marked to show its classification level (and classification category if RD or FRD), who classified it, the basis for the classification, and the duration of classification (if NSI). Lack of a category marking indicates the classified document is NSI. A document containing only NSI must be portion marked. An RD document, for example, will be marked TSRD (Top Secret Restricted Data), showing the classification level and category. RD documents are similarly marked SRD (Secret Restricted Data) or CRD (Confidential Restricted Data). A document should never simply be marked “RD.” The same rules apply to FRD information (TSFRD, SFRD, and CFRD). A classified document that is not RD or FRD is an NSI document. NSI documents are marked as TSNSI (Top Secret National Security Information), SNSI (Secret National Security Information), or CNSI (Confidential National Security Information); or simply Top Secret, Secret, or Confidential. Unclassified Controlled Nuclear Information (UCNI) is controlled under authority of the AEA of 1954, as amended. UCNI includes information concerning: the design of nuclear material production facilities or utilization facilities; security measures for protecting such facilities, nuclear material contained in such facilities, or nuclear material in transit; and the design, manufacture, or utilization of any atomic weapon or component if it has been declassified or removed from the RD category. 
UCNI markings – A document containing UCNI must be marked at the top and bottom of each page with “Unclassified Controlled Nuclear Information” or “UCNI” and include, on the front of the document, a marking that identifies the Reviewing Official making the determination, the date of the determination, and the guidance used. Official Use Only (OUO) information is unclassified information that may be exempt from public disclosure under provisions of the Freedom of Information Act (FOIA) and that is not otherwise subject to a formally implemented control system. A decision to control information as OUO does not mean that such information is automatically exempt from disclosure if requested under the FOIA. That determination is made by a FOIA Authorizing Official only when the document is requested. The OUO marking merely serves as a warning that the document reviewer considers the information to be sensitive and indicates why by including on the document the FOIA exemption that the document reviewer thinks applies. OUO markings – Documents determined to contain OUO information are marked accordingly and state which FOIA exemption applies. (When a classification guide is the basis for the determination, that guide is cited on the OUO stamp.) Naval Nuclear Propulsion Information (NNPI) concerns all classified and controlled unclassified information related to the naval nuclear propulsion program. This marking supplements existing classification and control systems and is not a separate category outside of the authorities provided under the AEA or Executive Order 12958 for, as an example, classified NNPI. The use of “NNPI” is an additional marking applied to some of the previously defined categories of information to indicate additional controls for protection or access. Classified NNPI (C-NNPI) is classified under the authority of the AEA of 1954, as amended, or Executive Order 12958, as amended. 
C-NNPI comprises all classified information concerning the design, arrangement, development, manufacture, testing, operation, administration, training, maintenance, and repair of propulsion plants of naval nuclear powered ships and prototypes, including associated shipboard and shore-based nuclear support facilities. Markings can be RD or NSI. C-NNPI documents containing RD information are marked TSRD, SRD, or CRD. C-NNPI NSI documents are typically marked Secret NOFORN (“not releasable to foreign nationals”). Documents containing information classified under the authority of the AEA are not portion marked. Unclassified NNPI (U-NNPI) is controlled in accordance with Naval Sea Systems Command Instruction C5511.32B and protected pursuant to export control requirements and statutes. U-NNPI comprises all unclassified but controlled information concerning the design, arrangement, development, manufacture, testing, operation, administration, training, maintenance, and repair of propulsion plants of naval nuclear powered ships and prototypes, including associated shipboard and shore-based nuclear support facilities. U-NNPI documents will be marked and controlled as NOFORN (not releasable to foreign nationals). In addition, Nancy Crothers, Robin Eddington, Doreen Feldman, William Lanouette, Greg Marchand, Terry Richardson, Kevin Tarmann, and Ned Woodward made significant contributions to this report.
In recent years, the Congress has become increasingly concerned that federal agencies are misclassifying information. Classified information is material containing national defense or foreign policy information determined by the U.S. government to require protection for reasons of national security. GAO was asked to assess the extent to which (1) DOE's training, guidance, and oversight ensure that information is classified and declassified according to established criteria and (2) DOE has found documents to be misclassified. DOE's Office of Classification's systematic training, comprehensive guidance, and rigorous oversight programs had a largely successful history of ensuring that information was classified and declassified according to established criteria. However, an October 2005 shift in responsibility for classification oversight to the Office of Security Evaluations has created uncertainty about whether a high level of performance in oversight will be sustained. Specifically, prior to this shift, the Office of Classification had performed 34 inspections of classification programs at DOE sites since 2000. These inspections reviewed whether DOE sites complied with agency classification policies and procedures. After the October 2005 shift, however, the pace of this oversight was interrupted as classification oversight activities ceased until February 2006. So far in 2006, one classification oversight report has been completed for two offices at DOE's Pantex Site in Texas, and work on a second report is under way at four offices at the Savannah River Site in South Carolina. More oversight inspections evaluating classification activity at eight DOE offices are planned for the remainder of 2006. In addition, according to the Director of the Office of Security Evaluations, the procedures for conducting future oversight are still evolving--including the numbers of sites to be inspected and the depth of analysis to be performed. 
If the oversight inspections planned for the remainder of 2006 are completed, it will demonstrate a resumption of the pace of oversight conducted prior to October 2005. However, if these inspections are not completed, or are not as comprehensive as in the past, the extent and depth of oversight will be diminished and may result in DOE classification activities becoming less reliable and more prone to misclassification. On the basis of reviews of classified documents performed during its 34 oversight inspections, the Office of Classification believes that very few of DOE's documents had been misclassified. The department's review of more than 12,000 documents between 2000 and 2005 uncovered 20 documents that had been misclassified--less than one-sixth of 1 percent. DOE officials believe that this misclassification rate is reasonable given the large volume of documents processed. Most misclassified documents remained classified, just not at the appropriate level or category. Of greater concern are the several documents that should have been classified but mistakenly were not. When mistakenly not classified, such documents may end up in libraries or on DOE Web sites where they could reveal classified information to the public. The only notable shortcomings we identified in these inspections were the inconsistent way the Office of Classification teams selected the classified documents for review and a failure to adequately disclose these procedures in their reports. Inspection teams had unfettered access when selecting documents to review at some sites, but at others they only reviewed documents from collections preselected by site officials. Office of Classification reports do not disclose how documents were selected for review.
Over the past decade, DOD has increasingly relied on contractors to provide a range of mission-critical services, from operating information technology systems to providing logistical support on the battlefield. The growth in spending on services clearly illustrates this point. DOD’s obligations on service contracts, expressed in constant fiscal year 2006 dollars, rose from $85.1 billion in fiscal year 1996 to more than $151 billion in fiscal year 2006, a 78 percent increase. More than $32 billion—or 21 percent—of DOD’s obligations on services in fiscal year 2006 were for professional, administrative, and management support contracts. Overall, according to DOD, the amount obligated on service contracts exceeded the amount the department spent on supplies and equipment, including major weapon systems. Several factors have contributed to the growth in service contracts. For example, after the September 2001 terrorist attacks, increased security requirements and the deployment of active duty and reserve personnel resulted in DOD having fewer military personnel to protect domestic installations. As a result, the U.S. Army awarded contracts worth nearly $733 million to acquire contract guards at 57 installations. Growth was also caused by changes in the way DOD acquired certain capabilities. For example, DOD historically bought space launch vehicles, such as the Delta and Titan rockets, as products. Now, under the Evolved Expendable Launch Vehicle program, the Air Force purchases launch services using contractor-owned launch vehicles. Similarly, the Air Force and Army turned to service contracts for simulator training primarily because efforts to modernize existing simulator hardware and software had lost out in the competition for procurement funds. Buying training as a service meant that operation and maintenance funds could be used instead of procurement funds. 
Overall, however, our work found that to a large degree, this growth simply happened and was not a managed outcome. As the amount and complexity of contracting for services have increased, the size of the civilian workforce has decreased. More significantly, DOD carried out this downsizing without ensuring that it had the requisite skills and competencies needed to manage and oversee service acquisitions. Consequently, DOD is challenged in its ability to maintain a workforce with the requisite knowledge of market conditions, industry trends, and the technical details about the services it procures; the ability to prepare clear statements of work; and the capacity to manage and oversee contractors. Participants in an October 2005 GAO forum on Managing the Supplier Base for the 21st Century commented that the current federal acquisition workforce significantly lacks the new business skills needed to act as contract managers. In June 2006, DOD issued a human capital strategy that acknowledged that DOD’s civilian workforce is not balanced by age or experience. DOD’s strategy identified a number of steps planned over the next 2 years to more fully develop a long-term approach to managing its acquisition workforce. For example, DOD’s Director of Defense Procurement and Acquisition Policy testified in January 2007 that DOD has been developing a model that will address the skills and competencies necessary for DOD’s contracting workforce. That model will be deployed this year. The Director stated that this effort would allow DOD to assess the workforce in terms of size, capability, and skill mix, and to develop a comprehensive recruiting, training, and deployment plan to meet the identified capability gaps. A report we issued in November 2006 on DOD space acquisition provides an example of downsizing in a critical area—cost estimating. 
In this case, there was a belief within the government that cost savings could be achieved under acquisition reform initiatives by reducing technical staff, including cost estimators, since the government would be relying more on commercial-based solutions to achieve desired capabilities. According to one Air Force cost-estimating official we spoke with, this led to a decline in the number of Air Force cost estimators from 680 to 280. According to this official, many military and civilian cost-estimating personnel left the cost-estimating field, and the Air Force lost some of its best and brightest cost estimators. In turn, because of the decline in in-house resources, space program offices and Air Force cost-estimating organizations are now more dependent on support from contractors. For example, at 11 space program offices, contractors accounted for 64 percent of cost-estimating personnel. The contractor personnel now generally prepare cost estimates while government personnel provide oversight, guidance, and review of the cost-estimating work. Reliance on support contractors raises questions from the cost-estimating community about whether the numbers and qualifications of government personnel are sufficient to provide oversight of and insight into contractor cost estimates. Turning to Iraq, DOD has relied extensively on contractors to undertake major reconstruction projects and provide support to troops in Iraq. DOD is responsible for a significant portion of the more than $30 billion in appropriated reconstruction funds and has awarded and managed many of the large reconstruction contracts, such as the contracts to rebuild Iraq’s oil, water, and electrical infrastructure, as well as to train and equip Iraqi security forces. Further, U.S. 
military operations in Iraq have used contractors to a far greater extent than in prior operations to provide interpreters and intelligence analysts, as well as more traditional services such as weapons systems maintenance and base operations support. These services are often provided under cost-reimbursement-type contracts, which allow the contractor to be reimbursed for reasonable, allowable, and allocable costs to the extent prescribed in the contracts. Further, these contracts often contain award fee provisions, which are intended to incentivize more efficient and effective contractor performance. If contracts are not effectively managed and overseen, the government’s risk is likely to increase. For example, we have reported that DOD needs to conduct periodic reviews of services provided under cost-reimbursement contracts to ensure that services are being provided at an appropriate level and quality. Without such a review, the government is at risk of paying for services it no longer needs. Our work, along with that of the Inspectors General, has repeatedly found problems with the practices DOD uses to acquire services. Too often, the department obtains services based on poorly defined requirements and inadequate competition. Further, DOD’s management and use of contractors supporting deployed forces suffers from the lack of clear and comprehensive guidance, among other shortfalls. Similarly, once a contract is in place, DOD does not always oversee and manage contractor performance, in part due to capacity issues. Many of these problems show up in the department’s use of other agencies’ contracts. Collectively, these problems expose DOD to unnecessary risk, complicate efforts to hold DOD and contractors accountable for poor acquisition outcomes, and increase the potential for fraud, waste, or abuse of taxpayer dollars. Poorly defined or broadly described requirements have contributed to undesired service acquisition outcomes. 
To produce desired outcomes within available funding and required time frames, DOD and its contractors need to clearly understand acquisition objectives and how they translate into the contract’s terms and conditions. The absence of well-defined requirements and clearly understood objectives complicates efforts to hold DOD and contractors accountable for poor acquisition outcomes. Contracts, especially service contracts, often lack the definitive or realistic requirements needed at the outset to control costs and facilitate accountability. This situation is illustrated in the following examples: In June 2004, we found that during Iraqi reconstruction efforts, when requirements were not clear, DOD often entered into contract arrangements that introduced risks. We reported that DOD often authorized contractors to begin work before key terms and conditions, such as the work to be performed and its projected costs, were fully defined. In September 2006, we reported that, under this approach, DOD contracting officials were less likely to remove costs questioned by auditors if the contractor had incurred these costs before reaching agreement on the work’s scope and price. In one case, the Defense Contract Audit Agency questioned $84 million in an audit of a task order for an oil mission. In that case, the contractor did not submit a proposal until a year after the work was authorized, and DOD and the contractor did not negotiate the final terms of the contract until more than a year after the contractor had completed the work. We will issue a report later this year on DOD’s use of undefinitized contract actions. In July 2004, we noted that personnel using the Army’s Logistics Civil Augmentation Program (LOGCAP) contract in Iraq, including those who may be called upon to write statements of work and prepare independent government cost estimates, had not always received the training needed to accomplish their missions. 
We noted, for example, that the statement of work required the contractor to provide water for units within 100 kilometers of designated points but did not indicate how much water needed to be delivered to each unit or how many units needed water. Without such information, the contractor may not be able to determine how to meet the needs of the Army and may take unnecessary steps to do so. Further, we have reported that contract customers need to conduct periodic reviews of services provided under cost-reimbursable contracts to ensure that services are being provided at an appropriate level. Without such a review, the government is at risk of paying for services it no longer needs. For example, the command in Iraq lowered the cost of the LOGCAP contract by $108 million by reducing services and eliminating unneeded dining facilities and laundries. Competition is a fundamental principle underlying the federal acquisition process. Nevertheless, we have reported on the lack of competition in DOD’s acquisition of services since 1998. We have reported that DOD has, at times, sacrificed the benefits of competition for expediency. For example, we noted in April 2006 that DOD awarded contracts for security guard services supporting 57 domestic bases, 46 of which were awarded on an authorized, sole-source basis. DOD awarded these sole-source contracts despite recognizing that it was paying about 25 percent more than it had previously paid under competitively awarded contracts. In this case, we recommended that the Army reassess its acquisition strategy for contract security guards, using competitive procedures for future contracts and task orders. DOD agreed and is in the process of revising its acquisition strategy. In Iraq, the need to award contracts and begin reconstruction efforts quickly contributed to DOD’s using other than full and open competition during the initial stages of reconstruction. 
While full and open competition can be a tool to mitigate acquisition risks, DOD procurement officials had only a relatively short time—often only weeks—to award the first major reconstruction contracts. As a result, these contracts were generally awarded using other than full and open competition. We recently reported that DOD competed the vast majority of its contract obligations from October 1, 2003, through March 31, 2006. We were able to obtain data on $7 billion, or 82 percent, of DOD’s total contract obligations during this period. However, our ability to obtain complete information on DOD reconstruction contract actions was limited because not all DOD components consistently tracked or fully reported this information. Since the mid-1990s, our reports have highlighted the need for clear and comprehensive guidance for managing and overseeing the use of contractors that support deployed forces. As we reported in December 2006, DOD has not yet fully addressed this long-standing problem. Such problems are not new. In assessing LOGCAP implementation during the Bosnian peacekeeping mission in 1997, we identified weaknesses in the available doctrine on how to manage contractor resources, including how to integrate contractors with military units and what type of management and oversight structure to establish. We identified similar weaknesses when we began reviewing DOD’s use of contractors in Iraq. For example, in 2003 we reported that guidance and other oversight mechanisms varied widely at the DOD, combatant command, and service levels, making it difficult to manage contractors effectively. Similarly, in our 2005 report on private security contractors in Iraq, we noted that DOD had not issued any guidance to units deploying to Iraq on how to work with or coordinate efforts with private security contractors. 
Further, we noted that the military may not have a clear understanding of the role of contractors, including private security providers, in Iraq and of the implications of having private security providers in the battle space. In our view, establishing baseline policies for managing and overseeing contractors would help ensure the efficient use of contractors in places such as Iraq. DOD addressed some of these issues when it issued new guidance in October 2005 on the use of contractors who support deployed forces. However, as our December 2006 report made clear, DOD’s guidance does not address a number of problems we have repeatedly raised—such as the need to provide adequate contract oversight personnel, to collect and share lessons learned on the use of contractors supporting deployed forces, and to provide DOD commanders and contract oversight personnel with training on the use of contractors overseas before deployment. Since our December 2006 report was issued, DOD officials indicated that DOD was developing a joint publication entitled Contracting and Contractor Management in Joint Operations, which is expected to be distributed in May 2007. Our work has also highlighted the need for DOD components to comply with departmental guidance on the use of contractors. For example, in our June 2003 report we noted that DOD components were not complying with a long-standing requirement to identify essential services provided by contractors and develop backup plans to ensure the continuation of those services during contingency operations should contractors become unavailable to provide those services. Other reports highlighted our concerns over DOD’s planning for the use of contractor support in Iraq, including the need to comply with guidance to identify operational requirements early in the planning process. 
When contractors are involved in planning efforts early and given adequate time to plan and prepare to accomplish their assigned tasks, the quality of the contractor’s services improves and contract costs may be lowered. DOD’s October 2005 guidance on the use of contractor support to deployed forces went a long way to consolidate existing policy and provide guidance on a wide range of contractor issues. However, as of December 2006, we found little evidence that DOD components were implementing that guidance, in part because no individual within DOD was responsible for reviewing DOD’s and the services’ efforts to ensure the guidance was being consistently implemented. In our 2005 report on LOGCAP we recommended DOD designate a LOGCAP coordinator with the authority to participate in deliberations and advocate the most effective and efficient use of the LOGCAP contract. Similarly, in 2006 we recommended that DOD appoint a focal point within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics—at a sufficiently senior level and with the appropriate resources—dedicated to leading DOD’s efforts to improve its contract management and oversight. DOD agreed with these recommendations. In October 2006, DOD established the office of the Assistant Deputy Under Secretary of Defense for Program Support to serve as the office of primary responsibility for contractor support issues, but the office’s specific roles and responsibilities have not yet been clearly defined. GAO has reported on numerous occasions that DOD did not adequately manage and assess contractor performance to ensure that the business arrangement was properly executed. Managing and assessing post-award performance entails various activities to ensure that the delivery of services meets the terms of the contract and requires adequate surveillance resources, proper incentives, and a capable workforce for overseeing contracting activities. 
If surveillance is not conducted, is insufficient, or is not well documented, DOD is at risk of being unable to identify and correct poor contractor performance in a timely manner and of potentially paying too much for the services it receives. Our work has found, however, that DOD is often at such risk. In March 2005, for example, we reported instances of inadequate surveillance on 26 of 90 DOD service contracts we reviewed. In each instance, at least one of the key factors to ensure adequate surveillance did not take place. These factors are (1) training personnel in how to conduct surveillance, (2) assigning personnel at or prior to contract award, (3) holding personnel accountable for their surveillance duties, and (4) performing and documenting surveillance throughout the period of the contract. Officials we met with during our review expressed concerns about support for surveillance. These included Navy officials who told us that surveillance remained a part-time duty they did not have enough time to undertake and, consequently, a low-priority task. More recently, in December 2006 we reported that DOD does not have sufficient numbers of contractor oversight personnel at deployed locations, which limits its ability to obtain reasonable assurance that contractors are meeting contract requirements efficiently and effectively. For example, an Army official acknowledged that the Army is struggling to find the capacity and expertise to provide the contracting support needed in Iraq. A LOGCAP program official noted that if adequate staffing had been in place, the Army could have realized substantial savings on the LOGCAP contract through more effective reviews of new requirements. 
A Defense Contract Management Agency official responsible for overseeing the LOGCAP contractor’s performance at 27 locations noted that he was unable to visit all of those locations during his 6-month tour to determine the extent to which the contractor was meeting contract requirements. The lack of visibility on the extent of services provided by contractors to deployed forces contributes to this condition. Without such visibility, senior leaders and military commanders cannot develop a complete picture of the extent to which they rely on contractors to support their operations. We first reported the need for better visibility in 2002 during a review of the costs associated with U.S. operations in the Balkans. At that time, we reported that DOD was unaware of (1) the number of contractors operating in the Balkans, (2) the tasks those contractors were contracted to do, and (3) the government’s obligations to those contractors under the contracts. We noted a similar situation in 2003 in our report on DOD’s use of contractors to support deployed forces in Southwest Asia and Kosovo. Our December 2006 review of DOD’s use of contractors in Iraq found continuing problems with visibility over contractors. For example, when senior military leaders began to develop a base consolidation plan, officials were unable to determine how many contractors were deployed and therefore ran the risk of over- or under-building the capacity of the consolidated bases. DOD’s October 2005 guidance on contractor support to deployed forces included a requirement that the department develop or designate a joint database to maintain by-name accountability of contractors deploying with the force and a summary of the services or capabilities contractors provide. The Army has taken the lead in this effort, and recently DOD designated a database intended to provide improved visibility over contractors deployed to support the military in Iraq, Afghanistan, and elsewhere. 
According to DOD, in January 2007, the department designated the Army’s Synchronized Predeployment & Operational Tracker (SPOT) as the departmentwide database to maintain by-name accountability of all contractors deploying with the force. According to DOD, the SPOT database includes approximately 50,000 contractor names. Additionally, in December 2006, the Defense Federal Acquisition Regulation Supplement was amended to require the use of the SPOT database by contractors supporting deployed forces. In January 2005, we identified management of interagency contracts as a high-risk area because of their rapid growth, limited expertise of users and administrators, and unclear lines of accountability. Since DOD is the largest user of interagency contracts in the government, it can ill-afford to expose itself to such risks. Relying on other agencies for contracting support requires sound practices. For example, under an interagency arrangement, the number of parties in the contracting process increases, and ensuring the proper use of these contracting arrangements must be viewed as a shared responsibility that requires agencies to define clearly who does what in the contracting process. However, the problems I discussed previously regarding defining requirements, ensuring competition, and monitoring contractor performance are frequently evident in interagency contracting. Additionally, DOD pays a fee to other agencies when using their contracts or contracting services, which could potentially increase DOD costs. Our work, as well as that of the Inspectors General, found competition- related issues on DOD’s use of interagency contracting vehicles. DOD is required to foster competition and provide all contractors a fair opportunity to be considered for each order placed on GSA’s multiple- award schedules, unless certain exceptions apply. 
DOD officials, however, have on numerous occasions avoided the time and effort necessary to award individual orders competitively and instead awarded all the work to be performed to a single contractor. We found that this practice resulted in the noncompetitive award of many orders that have not always been adequately justified. In April 2005, we reported that a lack of effective management controls—in particular, insufficient management oversight and a lack of adequate training—led to breakdowns in the issuance and administration of task orders for interrogation and other services in Iraq by the Department of the Interior on behalf of DOD. These breakdowns included issuing 10 out of 11 task orders that were beyond the scope of underlying contracts, in violation of competition rules; not complying with additional DOD competition requirements when issuing task orders for services on existing contracts; not properly justifying the decision to use interagency contracting; not complying with ordering procedures meant to ensure best value for the government; and not adequately monitoring contractor performance. Because officials at Interior and the Army responsible for the orders did not fully carry out their responsibilities, the contractor was allowed to play a role in the procurement process normally performed by government officials. Further, the Army officials responsible for overseeing the contractor, for the most part, lacked knowledge of contracting issues and were not aware of their basic duties and responsibilities. In July 2005, we reported on various issues associated with DOD’s use of franchise funds at the departments of the Interior and the Treasury—GovWorks and FedSource—that acquired a range of services for DOD. For example, GovWorks did not receive competing proposals for work and added substantial work to the orders without determining that prices were fair and reasonable. 
FedSource generally did not ensure competition for work, did not conduct price analyses, and sometimes paid contractors higher prices for services than were specified in the contracts, with no justification in the contract files. At both funds, we found that the files we reviewed lacked clear descriptions of the requirements the contractor was supposed to meet. For its part, DOD did not analyze contracting alternatives and lacked information about purchases made through these arrangements. We also found that DOD and franchise fund officials were not monitoring contracts and lacked criteria against which contractor performance could be measured to ensure that contractors provided quality services in a timely manner. We identified several causes for the lack of sound practices. In some cases, there was a lack of clear guidance, and contracting personnel were insufficiently trained on the use of interagency contracting arrangements. In many cases, DOD users chose the speed and convenience of an interagency contracting arrangement to respond to and meet needs quickly. Contracting service providers, under a fee-for-service arrangement, sometimes inappropriately emphasized customer satisfaction and revenue generation over compliance with sound contracting policies and procedures. These practices put DOD at risk of not getting required services at reasonable prices and of unnecessarily wasting resources. Further, DOD does not have useful information about purchases made through other agencies’ contracts, making it difficult to assess the costs and benefits and make informed choices among the alternative methods available. Similarly, the DOD Inspector General recently reported on issues with DOD’s use of contracts awarded by the departments of the Interior and the Treasury, GSA, and the National Aeronautics and Space Administration (NASA). 
For example, in November 2006, the Inspector General reported that DOD contracting and program personnel did not comply with acquisition rules and regulations when using contracts awarded by NASA, such as not always complying with fair opportunity requirements or not adequately justifying the use of a non-DOD contracting vehicle. As a result, the Inspector General concluded that funds were not used as intended by Congress, competition was limited, and DOD had no assurance that it received the best value. Additionally, the Inspector General found that DOD used Interior and GSA to “park” funds that were expiring. The agencies then subsequently placed contracts for DOD using the expired funds, thereby circumventing appropriations law. The Inspector General concluded that these problems were driven by a desire to hire a particular contractor, the desire to obligate expiring funds, and the inability of the DOD contracting workforce to respond to its customers in a timely manner. DOD and other agencies have taken steps to address some of these issues, including issuing an October 2006 memorandum intended to strengthen internal controls over the use of interagency contracts and signing a December 2006 memorandum of understanding with GSA to work together on 22 basic contracting management controls, including ensuring that sole-source justifications are adequate, that statements of work are complete, and that interagency agreements describe the work to be performed. Similarly, GSA has worked with DOD to identify unused and expired DOD funds maintained in GSA accounts. Further, according to the Inspector General, Interior has withdrawn numerous warrants in response to these findings. Congress and GAO have identified the need to improve DOD’s overall approach to acquiring services for several years. In 2002, we noted that DOD’s approach to buying services was largely fragmented and uncoordinated. 
Responsibility for acquiring services was spread among individual military commands, weapon system program offices, or functional units on military bases, with little visibility or control at the DOD or military department level. Although DOD has taken action to address these deficiencies and implement legislative requirements, its actions to date have not translated into progress. DOD’s current approach to acquiring services suffers from the absence of key elements at the strategic and transactional levels and does not position the department to make service acquisitions a managed outcome. Considerable congressional effort has been made to improve DOD’s approach to acquiring services. For example, in 2001, Congress passed legislation to ensure that DOD acquires services by means that are in the best interest of the government and managed in compliance with applicable statutory requirements. In this regard, sections 801 and 802 of the National Defense Authorization Act for Fiscal Year 2002 required DOD to establish a service acquisition management approach, including developing a structure for reviewing individual service transactions based on dollar thresholds and other criteria. Last year, Congress amended requirements pertaining to DOD’s service contracting management structure, workforce, and oversight processes, among others. We have issued several reports that identified shortcomings in DOD’s approaches and its implementation of legislative requirements. For example, we issued a report in January 2002 that identified how leading commercial companies took a strategic approach to buying services and recommended that DOD evaluate how a strategic reengineering approach, such as that employed by leading companies, could be used as a framework to guide DOD’s reengineering efforts. 
In September 2003, we reported that DOD’s actions to implement the service acquisition management structure required under Sections 801 and 802 did not provide a departmentwide assessment of how spending for services could be more effective and recommended that DOD give greater attention to promoting a strategic orientation by setting performance goals for improvements and ensuring accountability for achieving those results. Most recently, in November 2006, we issued a report that identified a number of actions that DOD could take to improve its acquisition of services. We noted that DOD’s overall approach to managing services acquisitions suffered from the absence of several key elements at both a strategic and transactional level. The strategic level is where the enterprise, DOD in this case, sets the direction or vision for what it needs, captures the knowledge to enable more informed management decisions, ensures departmentwide goals and objectives are achieved, determines how to go about meeting those needs, and assesses the resources it has to achieve desired outcomes. The strategic level also sets the context for the transactional level, where the focus is on making sound decisions on individual service acquisitions. Factors for good outcomes at the transactional level include valid and well-defined requirements, appropriate business arrangements, and adequate management of contractor performance. DOD’s current approach to managing the acquisition of services tended to be reactive and did not fully address the key factors for success at either the strategic or the transactional level. At the strategic level, DOD had not developed a normative position for gauging whether ongoing and planned efforts can best achieve intended results. Further, DOD lacked good information on the volume and composition of services, perpetuating the circumstance in which the acquisition of services tended to happen to DOD, rather than being proactively managed. 
For example, despite implementing a review structure aimed at increasing insight into service transactions, DOD was not able to determine which or how many transactions had been reviewed. The military departments had only slightly better visibility, having reviewed proposed acquisitions accounting for less than 3 percent of dollars obligated for services in fiscal year 2005. Additionally, most of the service acquisitions the military services review involved indefinite delivery/indefinite quantity contracts. DOD’s policy for managing service acquisitions had no requirement, however, to review individual task orders that were subsequently issued even if the value of the task order exceeded the review threshold. Further, the reviews tended to focus more on ensuring compliance with applicable statutes, regulations, and other requirements, rather than on imparting a vision or tailored method for strategically managing service acquisitions. Our discussions with officials at buying activities that had proposed service acquisitions reviewed under this process revealed that, for the most part, officials did not believe the review significantly improved those acquisitions. These officials indicated that the timing of the review process—which generally occurred well into the planning cycle—was too late to provide opportunities to influence the acquisition strategy. These officials told us that the reviews would be more beneficial if they were conducted earlier in the process, in conjunction with the program office or customer, and in the context of a more strategic approach to meeting the requirement, rather than simply from a secondary or tertiary review of the contract. At the transactional level, DOD tended to focus primarily on those elements associated with awarding contracts, with much less attention paid to formulation of service acquisition requirements and to assessment of the actual delivery of contracted services. 
Moreover, the results of individual acquisitions were generally not used to inform or adjust strategic direction. As a result, DOD was not in a position to determine whether investments in services are achieving their desired outcomes. Further, DOD and military department officials identified many of the same problems in defining requirements, establishing sound business arrangements, and providing effective oversight that I discussed previously, as the following examples show: DOD and military department officials consistently identified poor communication and the lack of timely interaction between acquisition and contracting personnel as key challenges to developing good requirements. An Army contracting officer issued a task order for a product that the contracting officer knew was outside the scope of the service contract. The contracting officer noted in an e-mail to the requestor that this deviation was allowed only because the customer needed the product quickly and cautioned that no such allowances would be granted in the future. Few of the commands or activities could provide us reliable or current information on the number of service acquisitions they managed, and others had not developed a means to consistently monitor or assess, at a command level, whether such acquisitions were meeting the performance objectives established in the contracts. To address these issues, we made several recommendations to the Secretary of Defense. DOD concurred with our recommendations and identified actions it has taken, or plans to take, to address them. In particular, DOD noted that it is reassessing its strategic approach to acquiring services, including examining the types and kinds of services it acquires and developing an integrated assessment of how best to acquire such services. 
DOD expects this assessment will result in a comprehensive, departmentwide architecture for acquiring services that will, among other improvements, help refine the process to develop requirements, ensure that individual transactions are consistent with DOD’s strategic goals and initiatives, and provide a capability to assess whether service acquisitions are meeting their cost, schedule, and performance objectives. In closing, I would like to emphasize that DOD has taken, or is in the process of taking, action to address the issues we identified. These actions, much like the assessment I just mentioned, however, will have little meaning unless DOD’s leadership can translate its vision into changes in frontline practices. In our July 2006 report on vulnerabilities to fraud, waste, and abuse, we noted that leadership positions are sometimes vacant, that the culture of streamlining acquisitions for speed may not have been balanced with good business practices, and that even in newly formed government-industry partnerships, the government needs to maintain its oversight responsibility. Understanding the myriad causes of the challenges confronting DOD in acquiring services is essential to developing effective solutions and translating policies into practices. While DOD has generally agreed with our recommendations intended to improve contract management, much remains to be done. At this point, DOD does not know how well its services acquisition processes are working, which parts of its mission can best be met through buying services, and whether it is obtaining the services it needs while protecting DOD’s and the taxpayer’s interests. Mr. Chairman and members of the subcommittee, this concludes my testimony. I would be happy to answer any questions you might have. In preparing this testimony, we relied principally on previously issued GAO and Inspectors General reports. 
We conducted our work in May 2007 in accordance with generally accepted government auditing standards. For further information regarding this testimony, please contact John P. Hutton at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this testimony. Key contributors to this testimony were Theresa Chen, Timothy DiNapoli, Kathryn Edelman, and John Krump. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) is relying more and more on contractors to provide billions of dollars in services. Congress has pushed DOD to employ sound business practices when using the private sector for services. This testimony discusses DOD's (1) increasing reliance on contractors; (2) efforts to follow sound business practices when acquiring services; and (3) actions to improve its management and oversight of services. This testimony is based on GAO's work spanning several years as well as recent reports issued by the Inspectors General. Over the past decade, DOD has increasingly relied on contractors to provide a range of mission-critical services from operating information technology systems to providing logistical support on the battlefield. The growth in spending on services clearly illustrates this point. DOD's obligations on service contracts, expressed in constant fiscal year 2006 dollars, rose from $85.1 billion in fiscal year 1996 to more than $151 billion in fiscal year 2006, a 78 percent increase. While obligations increased, the size of the civilian workforce decreased. Moreover, DOD carried out this downsizing without ensuring that it had the requisite skills and competencies needed to manage and oversee service acquisitions. Overall, our work found that to a large degree, this growth in spending on services simply happened and was not a managed outcome. The lack of sound business practices--poorly defined requirements, inadequate competition, the lack of comprehensive guidance and visibility on contractors supporting deployed forces, inadequate monitoring of contractor performance, and inappropriate use of other agencies' contracts and contracting services--expose DOD to unnecessary risk, waste resources, and complicate efforts to hold contractors accountable for poor service acquisition outcomes. 
For example, DOD awarded contracts for security guard services supporting 57 domestic bases, 46 of which were awarded on an authorized, sole-source basis. DOD awarded these sole-source contracts despite recognizing that it was paying about 25 percent more than it had previously paid for contracts awarded competitively. Further, the lack of sufficient surveillance on service contracts placed DOD at risk of being unable to identify and correct poor contractor performance in a timely manner and potentially paying too much for the services it receives. Overall, DOD's management structure and processes overseeing service acquisitions lacked key elements at the strategic and transactional levels. DOD has taken some steps to improve its management of services acquisition, including developing a competency model for its contracting workforce; issuing policies and guidance to improve its management of contractors supporting deployed forces and its use of interagency contracts; and developing an integrated assessment of how best to acquire services. DOD leadership will be critical for translating this assessment into policy and, most importantly, effective frontline practices. At this point, DOD does not know how well its services acquisition processes are working, which part of its mission can best be met through buying services, and whether it is obtaining the services it needs while protecting DOD's and the taxpayer's interests.
Within DOD’s overall acquisition framework, there are three key decision- support processes—the acquisition management system, requirements determination, and resource allocation—that must work closely together for acquisition programs to successfully deliver the right weapon systems at the right time and right price. Each process is managed and overseen by different organizations and leaders within DOD and the military departments. At the DOD level, the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD (AT&L)) is responsible for the acquisition function and is the Milestone Decision Authority (MDA) for major defense acquisition programs, whereas the Joint Chiefs of Staff are responsible for implementing the requirements process, and the Under Secretary of Defense (Comptroller) is responsible for the resource process. At the military department level, where programs are largely planned and executed, the civilian service acquisition executive is responsible for the acquisition process, while the service chiefs have responsibility for assisting the military departments in the development of requirements and the resourcing processes. We have previously found that these processes are fragmented, making it difficult for the department to achieve a balanced mix of weapon systems that are affordable and feasible and provide the best military value to the warfighter. In recent years, Congress and DOD have taken steps to better integrate the requirements and acquisition processes. For example, the department added new decision points and reviews for weapon programs as they progress through the acquisition process. Additionally, USD (AT&L) now serves as an advisor to the council that reviews requirements for major weapon programs. 
Furthermore, the Fiscal Year 2011 National Defense Authorization Act delineated that the service chiefs have a responsibility to assist the secretaries of the military departments concerned in carrying out the acquisition function. Generally, major defense acquisition programs go through a series of phases as they progress from the identification of the need for a new capability, through initial planning of a solution, to system development, and finally production and deployment of a fielded system. High-level, operational requirements of major weapon systems are first generated, vetted, and put forward for DOD-level review and approval, generally by the military services. These requirements are prioritized based on how critical the associated system characteristics are to delivering the military capability. Key performance parameters are considered most critical by the sponsor military organization, while key system attributes and other performance attributes are considered essential for an effective military capability. Through systems engineering efforts, these high-level requirements must then be translated into lower-level technical requirements and specifications to design and build the weapon system. Figure 1 illustrates the notional types and levels of requirements for weapon system development. Following military service-level reviews and approvals, the high-level operational requirements, which are specified in a capability development document, go through several key stages where DOD-level reviews and validations are required, a process accomplished for joint military requirements within the department’s Joint Capabilities Integration and Development System (JCIDS) process. 
Capability requirements documents for these programs are assessed and validated within JCIDS by the Chairman of the Joint Chiefs of Staff with the advice of the Joint Requirements Oversight Council (JROC), which is chaired by the Vice Chairman of the Joint Chiefs of Staff and is comprised of the Vice Chiefs of Staff of each military service and the Combatant Commanders, when directed by the chairman. These high-level requirements along with several other acquisition-related analyses and documents (e.g., acquisition strategy, cost estimates, and test and evaluation plan) are required for approval at Milestone B, when an acquisition program formally starts system development. As major defense acquisition programs go through the iterative phases of the acquisition process, they are reviewed by the Defense Acquisition Board, which is chaired by USD (AT&L) and includes the secretaries of the military departments and other senior leaders. However, prior to these DOD-level reviews, programs have reviews and approvals at the military service level where the service acquisition executives and service chiefs are involved. In our prior report on the acquisition chain of command, we found that service chiefs and their supporting offices have multiple opportunities to be involved in major defense acquisition programs throughout the acquisition process, including participation in integrated product teams, service-level reviews during system development, and requirements review and approval prior to a program’s production decision. Figure 2 illustrates DOD’s current acquisition process and where the military service chiefs and service acquisition executives have primary responsibilities. Generally, after Milestone B, when system development begins in earnest, the chief’s role diminishes whereas the service acquisition executive’s role becomes more prominent. For more than a decade, we have recommended numerous actions to improve the way DOD acquires its defense systems. 
Our work in commercial best practices and defense acquisitions has consistently found that, at the program level, a key cause of poor program outcomes is the approval of programs with business cases that contain inadequate knowledge about requirements and the resources—funding, time, technologies, and people—needed to execute them. Programs run into problems during system development because requirements are unrealistic, technologies are immature, cost and schedule are underestimated, and design and production risks are high. Some key recommendations that we have made in the past to improve DOD’s acquisition process include the following: Require that systems engineering that is needed to evaluate the sufficiency of available resources be conducted before weapon system requirements are formalized; Require, as a condition for starting a new weapon system program, that sufficient evidence exists to show there is a match between a weapon system’s requirements and the resources the program manager has to develop that weapon; Require program officials to demonstrate that they have captured appropriate knowledge at program start (Milestone B), which includes ensuring that requirements for the product are informed by the systems engineering process, and establishing cost and schedule estimates on the basis of knowledge from preliminary design using system engineering tools; Have contractors perform more detailed systems engineering analysis to develop sound requirements before DOD selects a prime contractor for the systems development contract; and Define a shipbuilding approach that calls for (1) demonstrating balance among program requirements, technology demands, and cost considerations by preliminary design review, and (2) retiring technical risk and closing any remaining gaps in design requirements before a contract for detail design is awarded. 
GAO, Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy, GAO-06-368 (Washington, D.C.: Apr. 13, 2006). Most current and former military service chiefs that we interviewed expressed dissatisfaction with the current acquisition process and the outcomes it produces. They were concerned that after validated requirements are handed over to the acquisition process, requirements are frequently added or changed to increase the scope and capabilities of a weapon system. Some current and former service chiefs said that because they lack visibility into programs, they are unable to influence trade-offs between requirements and resources. However, their views differed on how best to be more involved in the management of acquisitions and improve the integration between DOD’s requirements and acquisition functions. Most of the current and former service chiefs that we interviewed were dissatisfied with the current acquisition process and stated that programs often fail to deliver needed operational capabilities to the warfighter with expected resources—such as technologies and funding—and in expected time frames. They were concerned that requirements are developed within a military service, validated by the JROC, and handed over to the acquisition process, and then, years later, program cost, schedule, and performance problems materialize. According to a number of both current and former service chiefs, they are not always involved in the acquisition process and are frequently caught by surprise when these problems emerge. Several service chiefs saw unplanned requirements growth—sometimes referred to as “creep”—that occurs during program execution as a key factor contributing to this condition. 
Several current and former service chiefs expressed the view that, after a program is approved and system development is underway, requirements are frequently added or changed to increase the scope and capabilities of a weapon system beyond the requirements originally agreed upon when the program started. One current service chief cited an example where program officials unnecessarily created a lower-level requirement for an aircraft system that did not meet any validated operational need. The service chief attributed the problem, in part, to a lack of military officers with acquisition expertise and a corresponding absence of acquisition officials with operational expertise. A former DOD official pointed to the lengthy time frame usually involved in developing major weapon systems and noted that requirements increases occur because programs want to introduce the latest technology advances into a system, such as information technology and electronics equipment. Some current and former service chiefs stated that because they lack visibility into programs, they are unable to influence trade-offs between requirements and resources. One current service chief provided an example in which program officials, in an effort to meet a validated operational requirement for speed, were developing an engine that led to cost increases, even though he believed an existing engine was available that would have required only a minor reduction in capability in exchange for reduced cost. The service chiefs also had concerns that requirements growth is a function of too many stakeholders within DOD having the ability to influence acquisition programs, making it difficult to hold anyone accountable for program outcomes. Many of these service chiefs believed that cultural factors and incentives within the department make it difficult for program managers to manage requirements growth and execute programs effectively. 
These chiefs said that program managers and other acquisition officials often lack the experience and expertise to manage requirements and acquisitions, are incentivized to meet internal milestones rather than raise issues, and rely too much on contractors to figure out what is needed to develop a weapon system. Further, they noted that high turnover in program manager tenure—approximately every 2 to 3 years, according to several service chiefs—makes it difficult to hold managers accountable when problems emerge. To improve weapon system program outcomes, both current and former service chiefs agreed that they should be more involved in the acquisition process. However, their views differed on the measures needed to achieve more involvement and improve integration between DOD’s requirements and acquisition functions. Most current service chiefs said that better collaboration does not require restructuring the chain of command. These service chiefs cited examples of ongoing collaboration between requirements and acquisition offices, and programs where they worked closely with acquisition leadership to address problems. In one case, a service chief pointed to an initiative that he and the service acquisition executive instituted to provide technical training and assistance to uniformed requirements officers as an example of formalized collaboration before and after the start of system development. Another service chief indicated that, faced with rising program costs and the possibility of cancellation, he actively monitored program progress through regular meetings with the program manager and contractor. Current service chiefs and other acquisition leadership generally indicated that the service chiefs have the ability to be more involved in the current process, such as by attending service- and DOD-level program reviews. 
However, some chiefs indicated that involvement in acquisition programs, in general, varies by service chief based on their priorities and the other personalities involved. Several current and former service chiefs agreed that they have been involved in the oversight of some programs, but their level of involvement depends on the importance of the program and established working relationships with the service acquisition executive. One service chief stated that, at times, service chiefs have not been involved due to unfamiliarity with the acquisition process, their own perceived role in the process, or a lack of interest in an acquisition. Several former service chiefs thought that establishing co-chairmanship for key decision reviews and co-signature of key acquisition documents, particularly at the military department level, may improve collaboration, encourage requirements trade-offs during development, and force the service chiefs to share the burden of responsibility for acquisition programs. One suggestion from an outside expert for implementing this solution was to have the service chief and the service acquisition executive co-chair the service-level acquisition review board. Some military and acquisition leaders noted, however, that requiring co-chairmanship of acquisition meetings and co-signature of decision documents could slow an already complex process and further discourage program managers from raising issues and concerns. In general, the former service chiefs we interviewed emphasized the need for a stronger role in the acquisition chain of command, with more formal authority and mechanisms in place to ensure that the service chiefs are consistently involved and sufficiently able to influence program decisions. 
However, as we found in our prior review, studies that have advocated for a stronger role for the service chiefs in the acquisition process provide little evidence that this would improve program outcomes. Several of these former service chiefs advocated for changes to DOD policy and statute, including the Goldwater-Nichols Act. One former service chief believed that DOD acquisition policy should require service chief approval on all major defense acquisition programs prior to program start. Some acquisition experts have observed that, in giving sole responsibility for acquisitions to the military secretaries through the service acquisition executives, DOD’s implementation of the Goldwater-Nichols Act reforms created an unintended wall between the military-controlled requirements process and the civilian-driven acquisition process. These acquisition experts note, however, that while service chiefs had significant influence on certain acquisition programs in the past, their close involvement did not always result in successful cost, schedule, or performance outcomes. For example, service chiefs had significant involvement in the Navy’s Littoral Combat Ship and the Army’s Future Combat System and, in both cases, viewed the programs as providing vital operational capabilities and needing to be fielded quickly. Consequently, both programs pursued aggressive acquisition strategies that pushed them through development with ill-defined requirements and unstable designs, which contributed to significant cost and schedule increases and, in the case of the Future Combat System, program cancellation. Acting on the chiefs’ concerns, we analyzed all 78 major defense acquisition programs and found that growth in high-level requirements—and consequent cost growth—was rare. Rather, we found that cost growth and other problems are more directly related to deriving lower-level requirements after a program has started. The distinction between high-level and lower-level requirements is key. 
Growth in high-level requirements could be attributable to a lack of discipline, but growth in lower-level requirements is not the result of additions; rather, it reflects the definition and realization of the details necessary to meet the high-level requirements. The process of defining lower-level requirements is an essential function of systems engineering, much of which is done late—after a development contract has been signed and a program has started. In other words, requirements are insufficiently defined at program start; when their full consequences are realized, trade-offs are harder to make—cost increases and schedule delays become the preferred solutions. We presented our assessment of the requirements problem to current and former service chiefs, and they generally agreed with it. Several service chiefs noted that more integration, collaboration, and communication during the requirements and acquisition processes need to take place to ensure that trade-offs between desired capabilities and expected costs are made and that requirements are essential, technically feasible, and affordable before programs get underway. Some service chiefs believed that applying systems engineering to arrive at well-defined requirements before the start of system development at Milestone B could go a long way toward solving some of their dissatisfaction with the acquisition process and improving outcomes. We found few instances of requirements changes between 2009 and 2013 that involved increasing capabilities on major defense programs during system development. Seventeen programs in the current portfolio of 78 major defense acquisition programs experienced system development cost growth of more than 20 percent between 2009 and 2013, but 13 of them did not report associated key requirements increases (see table 1). A number of factors other than requirements increases contributed to the cost growth in these programs. 
We found that, within the current portfolio of major defense acquisition programs, 5 of 78 programs reported increases to key performance parameters between 2009 and 2013. In these 5 programs, the changes involved adding a new component, technology, or other subsystem to increase the capabilities of the weapon system. Table 2 describes the requirement changes reported by these 5 programs. In 4 of these programs, development cost increases were more than 20 percent during the same time period. A key factor consistently identified by GAO in prior reports is the mismatch between the requirements for a new weapon system and the resources—technologies, time, and funding—that are planned to develop the new system. Requirements, especially at the lower levels, are often not fully developed or well-defined when passed over to the acquisition process at Milestone B, at which time a system development contract is awarded and a program begins. During system development, the high-level operational requirements, such as key performance parameters and key system attributes, usually need to be further analyzed by the contractor using systems engineering techniques to fully understand, break down, and translate them into technical weapon system-level requirements and contract specifications. Systems engineering analysis translates operational requirements into detailed system requirements for which requisite technological, software, engineering, and production capabilities have been identified. It also provides knowledge to enable the developer to identify and resolve gaps before system development begins. It is often at this point—when the technical specifications are finally understood and design challenges are recognized—that cost and schedule increases materialize in a program. 
What may appear to be requirements growth is the recognition that the weapon system will require considerably more time and money than expected to build to these derived technical specifications to meet the validated operational requirements. The process of translating high-level operational requirements into low-level requirements and technical specifications in many programs does not usually occur until well after Milestone B approval (see figure 3 for a notional depiction). The number of requirements can expand greatly over time, as the designs of the subsystems and components become defined. In the case of the Army’s Future Combat System, a large program that was intended to equip combat brigades with an advanced set of integrated systems, requirements were still being defined when cancellation of the program began in 2009—after 6 years and $18 billion had been spent on initial system development. The program was approved to start system development with 7 key performance parameters. In order to meet these key performance parameters—which did not change—the program ultimately translated them into over 50,000 lower-level requirements before it was canceled. Requirements definition remains a challenge facing current major defense acquisition programs. For example, the F-35 program, which was conceptualized around three aircraft design variants to achieve cost efficiencies, has had difficulty reconciling the different requirements imposed by the military services. According to program officials, in order to meet the nine validated key performance parameters, the program developed approximately 3,600 specifications. While the operational requirements for the F-35 have not increased, factors such as poorly defined requirements, significant concurrency between development and production, and immature technologies have contributed to significant cost growth and delays in the program. 
We found that several of the major defense acquisition programs that experienced cost growth, but did not report changing key performance parameters, had a significant number of engineering change orders and other configuration changes. As operational requirements become better understood during system development, contract specifications change to reflect what is needed to build the weapon system. Changes show up in engineering change orders and other design configuration changes, which contribute to cost growth. For example, between 2009 and 2013, the Littoral Combat Ship program reported 487 changes to its system configuration or design. Similarly, the Joint Tactical Radio System Handheld, Manpack, and Small Form Fit Radios program reported making 29 engineering changes and 11,573 software changes between 2009 and 2013. In neither case were the high-level requirements increased. While some configuration changes are necessary to manage obsolescence and other issues, the pursuit of poorly defined requirements results in overly optimistic cost and schedule estimates that are sometimes unachievable—leading to cost and schedule growth as programs encounter increased technical challenges necessary to achieve operational requirements. GAO’s prior work as well as DOD’s own policy emphasizes that the translation of operational requirements into technical weapon system specifications, which are informed by systems engineering, should take place prior to approving a program at Milestone B and awarding a contract that locks in the requirements. This allows trade-offs between requirements and resources to take place, and the establishment of more realistic cost, schedule, and performance commitments before programs get underway. However, DOD often does not perform sufficient up-front requirements analysis via systems engineering on programs to determine whether the requirements are feasible and there is a sound business case to move forward. 
Programs are proposed with unachievable requirements and overly optimistic cost and schedule estimates and, usually, participants on both the requirements side and the acquisition side are loath to trade away performance. For example, a preliminary design review is a key systems engineering event that should be held before the start of system development to ensure requirements are defined and feasible and the proposed design can meet the requirements within cost, schedule, and other system constraints. In 2013, GAO reviewed the 38 major defense acquisition programs that held preliminary design reviews that year. Only 11 of these programs held design reviews prior to the start of system development. The remaining 27 programs completed or planned to complete their design reviews approximately 24 months, on average, after the start of development. Thus, the resource consequences of deriving lower-level requirements are similarly deferred. We shared a summary of our assessment of the requirements problem—namely, that high-level requirements are poorly defined when passed over to the acquisition process at the start of development—with the current and former service chiefs, and they generally agreed with our findings. Several current and former service chiefs indicated that requirement and resource trade-offs, informed by systems engineering, do not consistently take place before programs get underway. Some chiefs also noted that reassessments of requirements, acquisition, and funding are not conducted often enough during program execution. According to one service chief, under the current acquisition process, there are too few points of collaboration among requirements officers, acquisition professionals, systems engineers, and cost estimators to work out requirements early in the process or to address problems and limitations associated with meeting operational requirements after programs are underway. 
Another service chief noted that the acquisition workforce lacks experience in operational and tactical settings and that his requirements community lacks technical acquisition skills, so it is important that collaboration regularly occurs between the two communities. Further, one chief emphasized that requirements officers are too dependent upon the acquisition community and its contractors to work out requirements. Several current and former service chiefs voiced concern that cost and schedule problems that acquisition programs experience are due to the failure to make appropriate trade-offs during system development. They indicated that too often programs encounter cost and schedule problems because in striving to meet challenging requirements the programs end up making technical and design changes to the weapon system. For example, one former service chief highlighted a combat vehicle program in development which had fallen short of meeting its vehicle speed requirement by a small percentage. Instead of making trade-offs, and perhaps seeking requirements relief, the program manager requested additional funding so the contractor could make design changes to the engine. Another service chief stated that requirement changes made during weapon system development are often viewed as sacrificing capability rather than reconciling requirements with operational conditions. The chief was concerned that program managers too often take the view that requirements cannot be changed and avoid elevating problems to leadership before they become critical, forgoing the opportunity to make needed trade-offs. In addition, one service chief described this problem as “cost creep” to meet requirements, not “requirements creep”. We have previously found that incentives within the current acquisition process create pressure on defense system requirements and are geared toward delaying knowledge so as not to jeopardize program funding. 
Several current and former service chiefs agreed that there needs to be more integration, collaboration, and communication during the requirements and acquisition processes to ensure trade-offs are made and the requirements that get approved are essential, technically feasible, and affordable prior to the start of system development. Some service chiefs said that conducting systems engineering analyses during requirements setting and, again, early in an acquisition program’s planning phase to inform trade-offs between cost and capability could go a long way toward establishing better defined requirements and improving program outcomes. Almost all of the service chiefs stated that there is a need to further enhance expertise within the government, and several specified expertise in systems engineering. Several service chiefs indicated that systems engineering capabilities are generally lacking in the requirements development process and do not become available until after requirements are validated and an expensive and risky system development program is underway. Some service chiefs advocated that having systems engineering capabilities available to the military services during requirements development could help to ensure earlier assessment of requirements feasibility. The service chiefs’ views on the importance of systems engineering are consistent with our prior acquisition work, which calls for DOD to implement a knowledge-based approach to guide the match of defense program needs with available technology and resources. The service chiefs expressed a willingness to be more involved in the management and oversight of acquisition programs. Enhancing collaboration between the requirements and acquisition processes could be one of several steps needed to address the underlying culture and incentives that exist in DOD that lead to programs that are not feasible and affordable. 
We have found in prior work that characteristics of DOD’s processes and incentives create pressure to push for unrealistic defense system requirements and lead to poor decisions and mismatches between requirements and resources. This culture has become ingrained over several decades, and a number of studies and reforms have been directed at changing the incentives underlying the culture, without much success (GAO-01-288). The incentive is for programs to be overly optimistic and to minimize the difficulty and resources needed to deliver the capability. Many of our prior recommendations have been aimed at this problem and, while one could argue whether more formal authority should be granted to the service chiefs, the current acquisition process allows for the service chiefs to be more involved in the management and oversight of acquisition programs. Regardless, the solution must involve investing in systems engineering expertise sooner, while developing requirements, to enable technological knowledge to better shape and define operational requirements. Recommendations such as holding preliminary design reviews before the start of system development have been made as a means to improve program outcomes. After initial support, the enthusiasm for these practices wanes and the old pressures to continue with insufficient knowledge prevail, because the old practices allow programs to proceed and funding to flow. Importantly, the negative consequences of proceeding with limited knowledge are not sufficient to counteract these pressures, as accountability for the initial poor decisions is lost by the time problems emerge. Information and expertise will not result in good outcomes unless the need for a solid business case is reinforced. 
In order to improve program outcomes, DOD must focus its efforts on better integrating the requirements and acquisition processes. This can be achieved through better collaboration between these communities from the generation of requirements through system development, coupled with a greater emphasis on systems engineering and knowledge attainment early in a program’s life cycle. Without sufficient systems engineering input to better define requirements and examine trade-offs early on, there is no assurance that acquisition programs going forward have a sound basis to start system development. To help ensure that requirements are well defined and well understood before a program is approved to start system development, we recommend that the Secretary of Defense direct the military service chiefs and service acquisition executives to work together to take the following two actions: (1) assess whether sufficient systems engineering expertise is available during the requirements development process, and (2) develop a process to ensure that sufficient systems engineering is conducted and that opportunities exist to better define requirements and assess resource trade-offs before a program starts. DOD provided us with written comments on a draft of this report, which are reprinted in appendix II. The department concurred with both of our recommendations, stating that the early application of systems engineering expertise and ensuring the availability of appropriately skilled personnel are critical to successful program outcomes. DOD noted that recent changes to department-wide policies, such as DOD Instruction 5000.02, strengthen the department’s focus on conducting systems engineering and making trade-offs during requirements development and pre-program planning. DOD further agreed that continuing to improve engagement between the requirements and acquisition communities will result in better informed program initiation and resourcing decisions. 
We are encouraged that DOD agrees with our recommendations and has recently taken steps to strengthen its policies and identify the need for early systems engineering. However, for many years DOD policies have emphasized the importance of a knowledge-based approach to acquiring weapon systems, yet practice does not always follow policy. Instead, incentives exist that encourage deviation from sound policies and practices. We believe that DOD must focus on achieving better collaboration between the requirements and acquisition communities, such as by ensuring that more systems engineering and other expertise are applied when requirements are being defined. It is through informed collaboration that knowledge will be attained, trade-offs between requirements and resources can be made earlier, and acquisition programs will begin development with realistic cost and schedule estimates, ultimately leading to improved outcomes. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Chief of Staff of the Air Force; the Chief of Staff of the Army; the Chief of Naval Operations; the Commandant of the Marine Corps; and the Under Secretary of Defense for Acquisition, Technology, and Logistics. This report is also available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. GAO issued a report in 2014 on the military service chiefs’ role in the acquisition chain of command. This report reviews further related issues and concerns the military service chiefs have with the Department of Defense’s (DOD) acquisition process and the outcomes it produces. 
Specifically, we examined (1) the views of current and former military service chiefs on the current acquisition process, and (2) key problems or factors the service chiefs identified with the acquisition process and our assessment of these issues. To obtain the views of current and former military service chiefs on the current acquisition process, we conducted interviews with 12 current and former military service chiefs and vice chiefs between August and December 2014. We met with all current military service chiefs and vice chiefs as of September 2014, including the Chief and Vice Chief of Staff of the Air Force, the Chief and Vice Chief of Staff of the Army, the Chief and Vice Chief of Naval Operations, and the Commandant and Assistant Commandant of the Marine Corps. These individuals possessed joint- and service-level experience, including positions as Chairman of the Joint Chiefs of Staff and combatant commander. In December 2014, after completing our interviews, we analyzed our findings and sent a summary to the four current and three former military service chiefs, but not the vice chiefs, for their review and comment. We received responses from five of the seven service chiefs, all of whom concurred with our findings. We reviewed any comments and made changes to the summary document, as appropriate. We also interviewed or sought the perspectives of additional current and former DOD leadership, including the service acquisition executives of the Air Force, Army, and Navy; officials from the Office of the Secretary of Defense and the Joint Staff; and another former member of the Joint Chiefs of Staff. We analyzed evidence and examples collected from our interviews with current and former military service chiefs and DOD leadership. 
We also reviewed findings from existing reports and compendiums focused on the acquisition chain of command and interviewed acquisition subject matter experts to discuss the current acquisition process, the role of the military service chiefs in the acquisition chain of command, and potential solutions to improve program outcomes. We reviewed prior GAO work on weapon system acquisition and commercial best practices and analyzed the extent to which evidence exists that would demonstrate that these potential solutions may improve program outcomes. To assess the key problems or factors the service chiefs identified with the acquisition process, we drew upon our extensive body of work in defense acquisitions and best practices, and reviewed program execution information from ongoing major defense acquisition programs. We reviewed the annual Selected Acquisition Reports (SAR) from 2009 to 2013 for the 78 programs in DOD’s current portfolio of major defense acquisition programs. SAR data were collected from the Defense Acquisition Management Information Retrieval (DAMIR) Purview system. We assessed the reliability of the data by reviewing existing information about DAMIR and determined that the data were sufficiently reliable for the purposes of this report. We analyzed the performance metrics to determine the extent to which programs were reporting changes to key performance parameters. Our analysis was limited to unclassified requirements that are included as part of the SAR. In October 2014, we developed and submitted a questionnaire to 28 major defense acquisition programs that had reported key requirement changes in their respective SAR from 2009 to 2013, sought requirement relief from the Joint Requirements Oversight Council, or experienced a development cost increase or decrease of 10 percent or more between 2011 and 2013. 
We conducted two pretests of the questionnaire prior to distribution to ensure that our questions were clear, unbiased, and consistently interpreted. We obtained responses from all 28 programs, and in cases where questionnaire results differed from previously collected SAR data, we submitted follow-up questions to the program office to adjudicate any discrepancies. To determine the extent to which programs that experienced development cost growth also changed key requirements, we compared the research, development, test and evaluation cost estimates from 2009 and 2013 for DOD’s current portfolio of major defense acquisition programs, as reported in their annual SAR. In instances where a program began development after 2009, we compared the program’s initial research, development, test and evaluation cost estimate with its 2013 current estimate. We then reviewed any program that had a cost increase of more than 20 percent to determine if this program also reported key requirement changes in its annual reports for the same time period. We also leveraged prior and ongoing GAO work on weapon system acquisition. We conducted this audit from October 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, John E. Oppenheim, Assistant Director; Jacob Leon Beier; Brandon H. Greene; Laura M. Jezewski; Megan Porter; Abby C. Volk; Marie P. Ahearn; Peter W. Anderson; Jean L. McSween; and Kristy E. Williams made key contributions to this report.
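The cost-growth screen described in the methodology above (comparing a program's baseline and current research, development, test and evaluation estimates and flagging growth above a threshold) can be sketched as a simple computation. This is an illustrative sketch only: the function names, program names, and dollar figures below are invented and do not come from the SAR data GAO analyzed.

```python
# Illustrative sketch of the cost-growth screen described above.
# All program names and dollar figures are hypothetical.

def rdte_growth(baseline: float, current: float) -> float:
    """Percent change of the current RDT&E estimate from the baseline."""
    return (current - baseline) / baseline * 100.0

def flag_programs(estimates, threshold=20.0):
    """Return names of programs whose estimate grew more than `threshold` percent."""
    return [name for name, (baseline, current) in estimates.items()
            if rdte_growth(baseline, current) > threshold]

# Hypothetical (baseline, current) RDT&E estimates, in millions of dollars.
estimates = {
    "Program A": (1000.0, 1350.0),  # +35 percent: flagged
    "Program B": (2000.0, 2100.0),  # +5 percent: not flagged
}
print(flag_programs(estimates))  # prints ['Program A']
```

As the methodology notes, for programs that began development after 2009 the comparison would use the program's initial estimate as the baseline rather than the 2009 figure.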
GAO has reported extensively on problems in cost, schedule, and performance for major defense acquisition programs. According to some acquisition reform advocates, expanding the role of the military service chiefs in the process to acquire weapon systems may improve acquisition outcomes. Following a 2014 GAO report on the service chiefs' role in the acquisition chain of command, GAO was asked to review further related issues and concerns the service chiefs have with the acquisition process and its outcomes. This report examines: (1) the views of current and former military service chiefs on the acquisition process, and (2) key problems or factors the service chiefs identified with the acquisition process and GAO's assessment of these issues. GAO conducted interviews with 12 current and former military service chiefs and vice chiefs, and with other current and former DOD leadership to discuss the acquisition process. GAO also drew upon its extensive body of work on defense acquisitions and best practices. To assess key problems with the current process, GAO reviewed program execution information on all 78 current major defense programs. Most current and former military service chiefs and vice chiefs GAO interviewed from the Army, Air Force, Navy, and Marine Corps collectively expressed dissatisfaction with acquisition program outcomes and believed that the Department of Defense's (DOD) requirements development and acquisition processes need to be better integrated. The service chiefs are largely responsible for developing the services' requirements for weapon systems, while the service acquisition executives are responsible for overseeing programs to plan and develop systems. Most service chiefs told GAO they were concerned that after weapon system requirements are handed to the acquisition process, requirements are changed or added by the acquisition community (sometimes referred to as “creep”), increasing the capabilities and cost of the system. 
Some service chiefs stated that they are not always involved in the acquisition process and are frequently caught by surprise when cost, schedule, and performance problems emerge in programs. Current and former chiefs agreed that the chiefs should be more involved in programs, but their views varied on how best to achieve this. GAO analyzed requirements for all 78 major defense acquisition programs and found that creep—or growth—in the high-level requirements is rare. Instead, it is after a program has formally started development that the myriad lower-level, technical requirements needed to complete a weapon system's design are defined (see figure). It is the definition of these requirements—most of which occurs after the service chiefs' primary involvement—that leads to the realization that much more time and resources are needed to build the weapon system. The process of systems engineering translates high-level requirements, such as range, into specifics, like fuel tank size. GAO has previously reported on the importance of conducting systems engineering early so that the consequences of high-level requirements can be confronted before a program starts. When GAO presented its analysis of the problem to the service chiefs, they generally agreed with it. Several noted that trade-offs informed by systems engineering must take place before programs start so that requirements are better defined and more realistic cost, schedule, and performance commitments can be made. GAO recommends that DOD ensure sufficient systems engineering is conducted to better define requirements and assess resource trade-offs before a program starts. DOD concurred with the recommendations, citing recent policy changes. GAO believes more focus is needed on implementing actions.
The creation of DHS in November 2002 represents the most significant transformation of the U.S. government since 1947, when the various branches of the U.S. Armed Forces were combined into the Department of Defense (DOD) to better coordinate the nation’s defense against military threats. In January 2003, we cited numerous management and leadership challenges facing DHS as it attempted to merge 22 separate federal agencies, and we designated the department’s transformation as high risk. Shortly thereafter, the department stated that it faced significant transformational challenges, such as (1) developing new business processes, (2) unifying multiple organizational structures, (3) integrating multiple border-security and interior-enforcement functions, (4) integrating information technology (application systems and infrastructures), and (5) improving information sharing. The magnitude of these challenges is enormous. For example, DHS reports that it has redundancies in such business processes as human resources management, financial management, and procurement—including about 300 application systems that support inconsistent and duplicative processes. DHS also reports that it plans to invest about $4.1 billion during fiscal year 2004 in IT for both new and existing systems, to more effectively and efficiently support its mission operations and business processes. An enterprise architecture is a key tool for effectively and efficiently overcoming the kinds of transformational challenges that face DHS. In short, it is a business and technology blueprint that links an organization’s strategic plan to the program and supporting system implementations that are needed to systematically move the organization from how it operates today to how it intends to operate tomorrow. 
As we have repeatedly reported, without an enterprise architecture to guide and constrain IT investments, it is unlikely that an organization will be able to transform its business processes and modernize its supporting systems in a way that minimizes overlap and duplication, and thus costs, and maximizes interoperability and mission performance. According to DHS’s strategic plan, its mission is to lead a unified national effort to secure America by preventing and deterring terrorist attacks and protecting against and responding to threats and hazards to the nation. DHS also is to ensure safe and secure borders, welcome lawful immigrants and visitors, and promote the free flow of commerce. As part of its responsibilities, the department must also coordinate and facilitate the sharing of information both among its component agencies and with other federal agencies, state and local governments, the private sector, and other entities. As illustrated in DHS’s organizational structure (see fig. 1), to accomplish its mission it has five under secretaries with responsibility over the directorates or offices for management, science and technology, information analysis and infrastructure protection, border and transportation security, and emergency preparedness and response. Each DHS directorate is responsible for leading its specific homeland security mission area and coordinating relevant efforts with other federal agencies and state and local governments. The department is also composed of other component organizations, such as the U.S. Coast Guard and the U.S. Secret Service. Table 1 describes the primary roles of these five directorates and several of these component organizations. Within the Management directorate is the DHS Office of the Chief Information Officer (CIO), which has primary responsibility for addressing departmentwide information technology integration issues. 
According to the CIO, this office’s responsibilities include developing and facilitating the implementation of such integration enablers as the department’s IT strategic plan and its enterprise architecture. The CIO released an initial version of the enterprise architecture in September 2003 and plans to issue the next version in September 2004. According to the CIO, updated releases of the architecture will be issued on an annual basis. To provide the necessary leadership, direction, and management to create the architecture, the CIO established various entities and assigned specific responsibilities to each. Table 2 describes the key architecture entities and individuals involved in developing and maintaining the architecture, along with their respective responsibilities. Effective use of enterprise architectures is a trademark of successful public and private organizations. For a decade, we have promoted the use of architectures to guide and constrain systems modernization, recognizing them as a crucial means to a challenging goal: establishing agency operational structures that are optimally defined in both business and technological environments. Congress, OMB, and the federal CIO Council have also recognized the importance of an architecture-centric approach to modernization. The Clinger-Cohen Act of 1996 mandates that an agency’s CIO develop, maintain, and facilitate the implementation of an IT architecture. This should provide the means for managing the integration of business processes and supporting systems. Further, the E-Government Act of 2002 requires OMB to oversee the development of enterprise architectures within and across agencies. Generally speaking, an enterprise architecture connects an organization’s strategic plan with program and system solution implementations by providing the fundamental information details needed to guide and constrain implementable investments in a consistent, coordinated, and integrated fashion. 
An enterprise architecture provides a clear and comprehensive picture of an entity, whether it is an organization (e.g., federal department) or a functional or mission area that cuts across more than one organization (e.g., homeland security). This picture consists of snapshots of both the enterprise’s current or “As Is” operational and technological environment and its target or “To Be” environment, as well as a capital investment road map for transitioning from the current to the target environment. These snapshots further consist of “views,” which are basically one or more architecture products that provide conceptual or logical representations of the enterprise. The suite of products and their content that form a given entity’s enterprise architecture are largely governed by the framework used to develop the architecture. Since the 1980s, various frameworks have emerged and been applied. For example, John Zachman developed a structure or framework for defining and capturing an architecture. This framework provides for six windows from which to view the enterprise, which Zachman calls “perspectives” on how a given entity operates: the perspectives of (1) the strategic planner, (2) the system user, (3) the system designer, (4) the system developer, (5) the subcontractor, and (6) the system itself. Zachman also proposed six abstractions or models that are associated with each of these perspectives: these models cover (1) how the entity operates, (2) what the entity uses to operate, (3) where the entity operates, (4) who operates the entity, (5) when entity operations occur, and (6) why the entity operates. Other frameworks also exist, and each uses its own nomenclature. 
However, they all generally provide for defining an enterprise’s operations in both (1) logical terms, such as interrelated business processes and business rules, information needs and flows, and work locations and users and (2) technical terms, such as hardware, software, data, communications, and security attributes and performance standards. The frameworks also provide for defining these perspectives for both the enterprise’s current or “As Is” environment and its target or “To Be” environment, as well as a transition plan for moving from the “As Is” to the “To Be” environment. Our research and experience show that for major program investments, such as the development of an enterprise architecture, successful organizations approach product development in an incremental fashion, meaning that they initially develop a foundational product that is expanded and extended through a series of follow-on products that add more capability and value. In doing so, these organizations can effectively mitigate the enormous risk associated with trying to deliver a large and complex product that requires the execution of many activities over an extended period of time as a single monolithic product. In effect, this incremental approach permits a large undertaking to be broken into a series of smaller projects, or incremental versions, that can be better controlled to provide reasonable assurance that expectations are met. The importance of developing, implementing, and maintaining an enterprise architecture is a basic tenet of both organizational transformation and IT management. Managed properly, an enterprise architecture can clarify and help to optimize the interdependencies and relationships among an organization’s business operations and the underlying IT infrastructure and applications that support these operations. 
Employed in concert with other important management controls—such as portfolio-based capital planning and investment control practices—architectures can greatly increase the chances that an organization’s operational and IT environments will be configured to optimize mission performance. Our experience with federal agencies has shown that investing in IT without defining these investments in the context of an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. For the last 2 years, we have promoted the development and use of a homeland security enterprise architecture. In June 2002, we testified on the need to define the homeland security mission and the information, technologies, and approaches necessary to perform this mission in a way that is divorced from organizational parochialism and cultural differences. At that time, we stressed that a particularly critical function of a homeland security architecture would be to establish processes and information/data protocols and standards that could facilitate information collection and permit sharing. In January 2003, when we designated DHS’s transformation as high risk, we again emphasized the need to develop and implement an enterprise architecture. In May 2003 testimony, we reiterated this need, stating that, for DHS to be successful in addressing threats of domestic terrorism, it would need to establish effective systems and processes to facilitate information sharing among and between government entities and the private sector. We stated that to accomplish this the department would need to develop and implement an enterprise architecture. In August 2003, we reported that DHS had begun to develop an enterprise architecture and that it planned to use this architecture to assist its efforts to integrate and share information between federal agencies and among federal agencies, state and city governments, and the private sector. 
In November 2003, we reported on DHS’s progress in establishing key enterprise architecture management capabilities, as described in our architecture management maturity framework. This framework associates specific architecture management capabilities with five hierarchical stages of management maturity: (1) creating architecture awareness, (2) building the architecture management foundation, (3) developing the architecture, and (4) completing the architecture, culminating in (5) leveraging the architecture to manage change (see table 3 for a more detailed description of the stages). Based on information provided by DHS, we reported that the department had established an architecture management foundation and was developing architecture products, and we rated the department at stage 3 of our maturity framework. In particular, we reported that it had (1) established a program office responsible for developing and maintaining the architecture; (2) assigned a chief architect to oversee the program; (3) established plans for developing metrics for measuring progress, quality, compliance, and return on investment; and (4) placed the architecture products under configuration management. According to our framework, effective architecture management is generally not achieved until an enterprise has a completed and approved architecture that is being effectively maintained and is being used to leverage organizational change and support investment decision making. An enterprise with these characteristics would need to satisfy all of the requirements associated with stage 3 of our framework, and many of the requirements of stages 4 and 5. In addition, we reported in May 2004 that DHS was in the process of defining its strategic IT management framework for, among other things, integrating its current and future systems and aligning them with the department’s strategic goals and mission. 
We also reported that a key component of this initiative was the development of the department’s enterprise architecture. Accordingly, we recommended that, until the framework was completed, the department limit its spending on IT investments to cost-effective efforts that take advantage of near-term, relatively small, low-risk opportunities to leverage technology in satisfying a compelling homeland security need; support operations and maintenance of existing systems that are critical to DHS’s mission; involve deploying an already developed and fully tested system; or support establishment of a DHS strategic IT management framework, including IT strategic planning, enterprise architecture, and investment management. In May 2003, the department’s CIO testified that development of the homeland security enterprise architecture had begun in July 2002 and that the department expected to complete the “As Is” and “To Be” architectures by June and August 2003, respectively. The CIO also stated that the department would develop a migration or transition strategy and plan by fall 2003 in order to achieve its target environment. Moreover, the CIO testified that DHS had coordinated its architecture development efforts with other key federal agencies (e.g., the Departments of Justice, Energy, and Defense), the intelligence community, and the National Association of State and Local CIOs. In October 2003, DHS’s CIO testified that the department had completed the first version of its target architecture in September 2003 and was beginning to implement the objectives of its transition strategy. The CIO stated that the department had designed and delivered a comprehensive and immediately useful business-driven target architecture in under 4 months and that the architecture was enabling DHS to make IT investment decisions. On February 6, 2002, OMB established the FEA Program Management Office and charged it with responsibility for developing the FEA. 
According to OMB, the FEA is intended to provide a governmentwide framework to guide and constrain federal agencies’ enterprise architectures and IT investments and is now being used by agencies to help develop their budgets and to set strategic goals. The FEA is composed of five reference models: Performance, Business, Service, Data, and Technical. To date, versions of all but the data reference model have been released for use by the agencies. More information on each reference model follows. Performance reference model. The performance reference model is intended to describe a set of performance measures for major IT initiatives and their contribution to program performance. According to OMB, this model will help agencies produce enhanced performance information; improve the alignment and better articulate the contribution of inputs, such as technology, to outputs and outcomes; and identify improvement opportunities that span traditional organizational boundaries. Version 1.0 of the model was released in September 2003. Business reference model. The business reference model serves as the foundation for the FEA. It is intended to describe the federal government’s businesses, independent of the agencies that perform them. The model consists of four business areas: (1) services for citizens, (2) mode of delivery, (3) support delivery of services, and (4) management of government resources. Thirty-nine lines of business, which together are composed of 153 subfunctions, make up the four business areas. Examples of lines of business under the “services for citizens” business area are homeland security, law enforcement, and economic development. Each of these lines of business includes a number of subfunctions. For example, for the homeland security line of business, a subfunction is border and transportation security; for law enforcement, a subfunction is citizen protection; and for economic development, a subfunction is financial sector oversight. 
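The business reference model’s three-level decomposition (business areas, lines of business, subfunctions) can be sketched as a nested classification. The sketch below is illustrative only, populated with the examples cited above rather than the full model of 4 business areas, 39 lines of business, and 153 subfunctions:

```python
# Illustrative sketch of the FEA business reference model hierarchy:
# business areas contain lines of business, which contain subfunctions.
# Entries are limited to the examples cited in the text.
FEA_BRM = {
    "services for citizens": {
        "homeland security": ["border and transportation security"],
        "law enforcement": ["citizen protection"],
        "economic development": ["financial sector oversight"],
    },
    "mode of delivery": {},
    "support delivery of services": {},
    "management of government resources": {},
}

def subfunctions_of(area: str, line_of_business: str) -> list:
    """Return the subfunctions classified under a given line of business."""
    return FEA_BRM.get(area, {}).get(line_of_business, [])

print(subfunctions_of("services for citizens", "homeland security"))
```

In this structuring, an agency activity is classified by walking down the hierarchy, which is the mechanism that later lets OMB and agencies compare proposed IT investments that fall under the same line of business.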
Version 1.0 of the business reference model was released to agencies in July 2002, and OMB reports that it was used in the fiscal year 2004 budget process. According to OMB, Version 1.0 helped to reveal that many federal agencies were involved in each line of business and that agencies’ proposed IT investments for fiscal year 2004 offered multibillion-dollar consolidation opportunities. In June 2003, OMB released Version 2.0 which, according to OMB, addresses comments from agencies and reflects changes to align the model as closely as possible with other governmentwide management frameworks (e.g., budget function codes) and improvement initiatives (e.g., the President’s Budget Performance Integration Initiative) without compromising its intended purpose. OMB expects agencies to use the model, as part of their capital planning and investment control processes, to help identify opportunities to consolidate IT investments across the federal government. Service component reference model. The service component reference model is intended to identify and classify IT service (i.e., application) components that support federal agencies and promote the reuse of components across agencies. According to OMB, this model is intended to provide the foundation for the reuse of applications, application capabilities, components (defined as “a self-contained business process or service with predetermined functionality that may be exposed through a business or technology interface”), and business services, and is organized as a hierarchy beginning with seven service domains, as shown in table 4. These service domains are decomposed into 29 service types, which together are further broken down into 168 components. For example, the customer services domain is made up of 3 service types: customer relationship management, customer preferences, and customer-initiated assistance. 
Components of the customer relationship management service type include call center management and customer analytics; components of the customer preferences service type include personalization and subscriptions; and components of the customer-initiated assistance service type include online help and online tutorials. Version 1.0 of the service component reference model was released in June 2003. According to OMB, the model is a business-driven, functional framework that classifies service components with respect to how they support business and/or performance objectives. Further, the model is structured across horizontal service areas that, independent of the business functions, are intended to provide a leverageable foundation for the reuse of applications, application capabilities, components, and business services. Data and information reference model. The data and information reference model is intended to describe the types of data and information that support program and business-line operations and the relationships among these types. The model is intended to help describe the types of interactions and information exchanges that occur between the government and its customers. OMB officials told us that the release of Version 1.0 is to occur imminently. Technical reference model. The technical reference model is intended to describe the standards, specifications, and technologies that collectively support the secure delivery, exchange, and construction of service components. OMB describes the model as being made up of the following four core service areas: Service access and delivery: the collection of standards and specifications that support external access, exchange, and delivery of service components. Service platform and infrastructure: the delivery platforms and infrastructure that support the construction, maintenance, and availability of a service component or capability. 
Component framework: the underlying foundation, technologies, standards, and specifications by which service components are built, exchanged, and deployed. Service interface and integration: the collection of technologies, methodologies, standards, and specifications that govern how agencies will interface internally and externally with a service component. Each of these service areas is made up of service categories, which identify lower levels of technologies, standards, and specifications; service standards, which define the standards and technologies that support the service category; and the service specification, which details the standard specification or the provider of the specification. For example, within the first core service area (service access and delivery), an example of a service category is access channels, and examples of service standards are Web browsers and wireless personal digital assistants. Examples of service specifications for the Web browser service standard are Internet Explorer and Netscape Navigator. Version 1.0 of the technical reference model was released in January 2003, followed in August 2003 by Version 1.1, which reflected minor revisions that were based, in part, on agencies’ comments. Version 1.1 was used during the 2005 budget process. The model is intended to help agencies define their target technical architectures. In May 2004, we testified that, through the FEA, OMB is attempting to provide federal agencies and other decision makers with a common frame of reference or taxonomy for informing agencies’ individual enterprise architecture efforts and their planned and ongoing investment activities and to do so in a way that, among other things, identifies opportunities for avoiding duplication of effort and launching initiatives to establish and implement common, reusable, and interoperable solutions across agency boundaries. We testified that we supported these goals. 
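The technical reference model’s four-level decomposition described above (core service area, service category, service standard, service specification) can likewise be sketched as a nested classification. The sketch is illustrative only and is populated solely with the examples cited in the text:

```python
# Illustrative sketch of the FEA technical reference model hierarchy:
# core service area -> service category -> service standard -> specifications.
# Populated only with the examples cited in the text.
TRM = {
    "service access and delivery": {
        "access channels": {
            "Web browsers": ["Internet Explorer", "Netscape Navigator"],
            "wireless personal digital assistants": [],
        },
    },
}

def specifications_for(area: str, category: str, standard: str) -> list:
    """Return the specifications recorded under a given service standard."""
    return TRM.get(area, {}).get(category, {}).get(standard, [])

print(specifications_for("service access and delivery",
                         "access channels", "Web browsers"))
```

An agency defining its target technical architecture would, in effect, select the standards and specifications at the bottom of this hierarchy for each service category it needs.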
However, we also recognized that development and use of the FEA is but the first step in a multistep process to realize the promise of interagency solutions. In addition, because the FEA is still maturing both in content and in use, we raised a number of questions that we believed OMB needed to address in order to maximize understanding about the tool and thus facilitate its advancement. Specifically, we asked the following: Should the FEA be described as an enterprise architecture? As we discussed earlier, a true enterprise architecture is intended to provide a blueprint for optimizing an organization’s business operations and implementing the IT that supports them. Accordingly, well-defined enterprise architectures describe, in meaningful models, both the enterprise’s “As Is” and “To Be” environments, along with the plan for transitioning from the current to the target environment. To be meaningful, these models should be inherently consistent with one another, in view of the many interrelationships and interdependencies among, for example, business functions, the information flows among the functions, the security needs of this information, and the services and applications that support these functions. Our reading of the four available reference models does not demonstrate to us that this kind of content exists in the FEA and thus we believe that it is more akin to a point-in-time framework or classification scheme for federal government operations. Accordingly, if agencies use the FEA as a model for defining the depth and detail for their own architectures, the agencies’ enterprise architectures may not provide sufficient content for driving the implementation of their systems. Is the expected relationship between agencies’ enterprise architectures and the FEA clearly articulated? Among other things, the FEA is to inform agencies’ enterprise architectures. 
For example, OMB has stated that although it is not mandating that the business reference model serve as the foundation for every agency’s business architecture, agencies should invest time mapping their respective business architectures to the FEA. Similarly, OMB has stated that agencies’ alignment of their respective architectures to the services component reference model and the technical reference model will enable each agency to categorize its IT investments according to common definitions. In our view, such descriptions of the agency enterprise architecture/FEA relationship are not clear, in part because definitions of such key terms as alignment, mapping, and consistency are not apparent in the FEA. As with any endeavor, the more ambiguity and uncertainty there is in requirements and expectations, the greater the use of assumptions; the more assumptions that are made, the higher the risk of deviation from the intended course of action. This is particularly true in the area of enterprise architecture. How will the security aspects of the FEA be addressed? Our work has found that a well-defined enterprise architecture should include explicit discussion of security, including descriptions of security policies, procedures, rules, standards, services, and tools. Moreover, security is an element of the very fabric of architecture artifacts and models and thus should be woven into them all. As our experience in reviewing agency security practices and our research into leading practices shows, security cannot be an afterthought when it comes to engineering systems or enterprises. OMB has stated that it plans to address security through what it refers to as a “security profile” to be added to the FEA. However, OMB could not comment on the profile’s status or on development plans for it, beyond stating that the CIO Council is taking the lead in developing the profile. 
The initial version of DHS’s enterprise architecture is missing many of the key elements of a well-defined architecture. Further, those elements that are in the initial version are not based on the department’s strategic business plan, as architecture development best practices advocate. Instead, the architecture is largely the result of combining the architectures and ongoing IT investments that several of the 22 agencies brought with them when the department was formed. According to DHS senior architecture officials, including the chief architect, Version 1.0 was developed in this manner because it pre-dated completion of the department’s first strategic plan, only had limited staff assigned to it, and needed to be done in only 4 months in order to meet OMB’s deadline for submitting the department’s fiscal year 2004 IT budget. They also stated that this initial version was intended to mature the department’s approach and methodology for developing the next version of the architecture, rather than to develop a version of the architecture that could be acted on and implemented. As a result, even though Version 1.0 provides a partial foundation upon which to build a well-defined architecture, DHS has spent and continues to spend large sums of money on IT investments without having such an architecture to effectively guide and constrain these investments. Our experience with federal agencies has shown that this often results in systems that are duplicative, are not well integrated, are unnecessarily costly to maintain and interface, and do not effectively optimize mission performance. 
As previously discussed, the various frameworks used to develop architectures consistently provide for describing a given enterprise in both logical and technical terms, and for doing so for both the enterprise’s current or “As Is” environment and its target or “To Be” environment; these frameworks also provide for defining a capital investment sequencing plan to transition from the “As Is” to the “To Be” environment. However, the frameworks do not prescribe the degree to which the component parts should be described to be considered correct, complete, understandable, and usable—essential attributes of any architecture. This is because the depth and detail of the descriptive content depend on what the architecture is to be used for (i.e., its intended purpose). DHS’s stated intention is to use an architecture as the basis for departmentwide and national operational transformation and supporting systems modernization and evolution. The CIO stated that the department was already using the architecture to help guide IT investment decisions. This purpose necessitates that the architecture products provide considerable depth and detail, as well as logical and rational structuring and internal linkages. More specifically, it means that these architecture products should contain sufficient scope and detail so that, for example, (1) duplicative business operations and systems are eliminated; (2) business operations are standardized and integrated and supporting systems are interoperable; (3) use of enterprisewide services is maximized; and (4) related shared solutions are aligned, like OMB’s e-government initiatives. 
Moreover, this scope and detail should be accomplished in a way that (1) provides flexibility in adapting to changes in the enterprise’s internal and external environments; (2) facilitates the architecture’s usefulness and comprehension from varying perspectives, users, or stakeholders; and (3) provides for properly sequencing investments to recognize, for example, the investments’ respective dependencies and relative business value. While the initial version of the architecture does provide some content that can be used to further develop it, it does not contain sufficient breadth and depth of departmentwide operational and technical requirements to effectively guide and constrain departmentwide business transformation and systems modernization efforts. More specifically, we found that DHS’s “To Be” architecture products (Version 1.0) do not satisfy 14 of 34 (41 percent) key elements and only partially satisfy the remaining 20 (59 percent), and that its transition plan only partially satisfies 3 of 5 elements (60 percent) and does not satisfy the remaining 2 (40 percent) (see fig. 2). This means that while Version 1.0 does provide some of the foundational content that can be used to extend and expand the architecture, it does not yet provide an adequately defined frame of reference to effectively inform business transformation and system acquisition and implementation decision making. Our specific analysis of the “To Be” and transition plan products follows. “To Be” Architecture: According to relevant guidance, a “To Be” architecture should capture the vision of future business operations and supporting technology. That is, it should describe the desired capabilities, structures (e.g., entities, activities, and roles), and relationships among these structures at a specified time frame in the future. 
It should also describe, for example, future business processes, information needs, and supporting infrastructure characteristics, and it should be fiscally and technologically achievable. More specifically, a well-defined “To Be” architecture should provide, among other things:
- a description of the enterprise’s business strategy, including its desired future concept of operations, its strategic goals and objectives, and the strategic direction to be followed to achieve the desired future state;
- the future business processes, functions, and activities that will be performed to support the organization’s mission, including the entities that will perform them and the locations where they will be performed;
- a logical database model that identifies the primary data categories and their relationships, which are needed to support business processes and to guide the creation of the physical databases where information will be stored;
- the systems to be acquired or developed and their relative importance in supporting the business operations;
- the enterprise application systems and system components and their interrelationships;
- the policies, procedures, processes, and tools for selecting, controlling, and evaluating application systems;
- the technical standards to be implemented and their anticipated life cycles;
- the physical infrastructure (e.g., hardware and software) that will be needed to support the business systems;
- common policies and procedures for developing infrastructure systems throughout their life cycles;
- definitions of security and information assurance-related terms;
- the organizations that will be accountable for implementing security and the tools to be used to secure and protect systems and data;
- a list of the protection mechanisms (e.g., firewalls and intrusion detection software) that will be implemented to secure the department’s assets; and
- the metrics that will be used to evaluate the effectiveness of mission operations and supporting system performance in achieving mission goals and objectives. 
Architectures that include these elements can provide the necessary frame of reference to enable the engineering of business solutions (processes and systems) in a manner that optimally supports departmentwide goals and objectives, such as information sharing. Version 1.0 of the department’s “To Be” architecture provides some of the descriptive content mentioned above. For example, it contains (1) a high-level business strategy that includes a vision statement and a list of projects that may become future technology solutions; (2) a list of systems to be acquired or developed; (3) a description of the enterprise application systems and system components; (4) the technical standards to be implemented; (5) a description of the physical infrastructure (e.g., hardware and software) that will be needed to support the business systems; (6) definitions of security and information assurance-related terms; (7) a list of protection mechanisms, such as firewalls; and (8) high-level performance metrics. However, the business strategy does not define the desired future concept of operations, the business-specific objectives to be achieved, and the strategic direction to be followed. Such content is important because the “To Be” architecture must be based on and driven by business needs. In contrast to this, the DHS “To Be” architecture is primarily focused on how to employ technology to improve current mission operations and services, instead of on identifying and addressing needed business changes through the use of technology. In addition, the systems listed are not described in terms of their relative importance to achieving the department’s vision based on business value and technical performance, and the application systems and system components are not linked to the specific business processes they will support. 
Further, the technical standards are incomplete (e.g., do not specify standards that support narrowband wireless access) and do not include the anticipated life cycle of each standard. The physical infrastructure description is too high level (e.g., it does not define networks and their configurations or relate the technology platforms to specific applications and business functions). The architecture also does not define certain security and information assurance-related terms (e.g., security services) and, in some instances, it defines other terms (e.g., authentication and availability) differently than do the department’s homeland security partners. For example, DHS’s definitions of authentication, availability, confidentiality, and integrity differ from DOD’s definitions of these terms. In addition, the list of protection mechanisms is not complete, nor does the architecture describe all of the mechanisms shown or the interrelationships among them. Other key elements that are not included are (1) a description of future business processes, functions, and activities that will be performed to support the organization’s mission, including the entities or people that will perform them and the locations where they will be performed; (2) a logical database model; (3) the policies, procedures, processes, and tools for selecting, controlling, and evaluating application systems; (4) common policies and procedures for developing infrastructure systems throughout their life cycles; (5) the organizations that will be accountable for implementing security and the tools to be used to secure and protect systems and data; and (6) explicit metrics for the department’s primary (e.g., identifying threats and vulnerabilities and facilitating the flow of people and goods) and mission-support (e.g., human resources and budget and finance) business areas. Detailed results of our analysis are provided in appendix II. 
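The element-by-element scoring behind the figures cited earlier (of 34 key “To Be” elements, 20 partially satisfied and 14 not satisfied) amounts to a simple tally. A schematic sketch, with ratings mirroring the report’s totals rather than the actual appendix II data:

```python
from collections import Counter

# Schematic tally of element ratings; counts mirror the report's totals
# for the "To Be" products (20 partial, 14 not satisfied, of 34 elements).
ratings = ["partial"] * 20 + ["not"] * 14

def summarize(ratings):
    """Return each rating's count and its rounded percentage of the total."""
    counts = Counter(ratings)
    total = len(ratings)
    return {k: (v, round(100 * v / total)) for k, v in counts.items()}

print(summarize(ratings))  # {'partial': (20, 59), 'not': (14, 41)}
```

The same tally applied to the transition plan’s 5 elements (3 partial, 2 not satisfied) yields the 60 and 40 percent figures.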
Transition Plan: According to relevant guidance and best practices, the transition plan should provide a temporal road map for moving from the “As Is” to the “To Be” environment. An important step in the development of a well-defined transition plan is a gap analysis—a comparison of the “As Is” and “To Be” architectures to identify differences. Other important steps include analyses of technology opportunities and marketplace trends, as well as assessments of fiscal and budgetary realities and institutional acquisition and development capabilities. Using such analyses and assessments, options are explored and decisions are made regarding which legacy systems to retain, modify, or retire and which new systems to introduce on a tactical (temporary) basis or to pursue as strategic solutions. Accordingly, transition plans identify legacy, migration, and new systems and sequence them to show, for example, the phasing out and termination of systems and capabilities and the timing of the introduction of new systems and capabilities, and they do so in light of resource constraints such as budget, people, acquisition/development process maturity, and associated time frames. Version 1.0 of DHS’s transition plan generally does not possess any of these attributes. Specifically, it does not (1) include a gap analysis identifying the needed changes to current business processes and systems; (2) identify the legacy systems that will not become part of the “To Be” architecture or the time frames for phasing them out; (3) show a time-based strategy for replacing legacy systems, including identifying intermediate (i.e., migration) systems that may be temporarily needed; or (4) define the resources (e.g., funding and staff) needed to transition to the target environment. The result is that DHS does not have a meaningful and reliable basis for managing the disposition of its legacy systems or for sequencing the introduction of modernized business operations and supporting systems. 
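The gap analysis at the heart of a transition plan can be reduced to a comparison of the “As Is” and “To Be” system inventories, classifying each system as one to retain, retire, or newly introduce. A minimal sketch, using hypothetical system names rather than DHS’s actual inventories:

```python
# Minimal sketch of a transition-plan gap analysis: compare "As Is" and
# "To Be" system inventories as sets. System names are hypothetical.
def gap_analysis(as_is: set, to_be: set) -> dict:
    return {
        "retain": as_is & to_be,     # legacy systems carried into the target
        "retire": as_is - to_be,     # legacy systems needing phase-out dates
        "introduce": to_be - as_is,  # new or migration systems to sequence
    }

current = {"legacy case tracking", "agency payroll"}
target = {"agency payroll", "consolidated case management"}
result = gap_analysis(current, target)
print(result["retire"])
```

A real transition plan then sequences the “retire” and “introduce” sets over time against budget, staffing, and acquisition-capability constraints; the set comparison is only the starting point that Version 1.0 lacked.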
Detailed results of our analysis are in appendix III. A DHS contractor responsible for evaluating the quality of Version 1.0 reported weaknesses that are similar to ones that we identified. For example, the contractor reported the following: The “To Be” architecture did not address the reality that the department’s systems would be a federated combination of legacy and new systems for many years. Rather, it assumes that its systems will be transformed into an ideal future state in the near term. The “To Be” architecture did not consistently address topics at the same level of detail, and it contained inconsistencies (i.e., some topics were addressed in more detail than others). The architecture did not sufficiently address security (i.e., network, data, physical, and information). DHS’s senior enterprise architecture officials, including the chief architect, agreed with the results of our analysis and stated that considerable work remained to adequately address all of the key architectural elements. According to the CIO and these officials, this initial version was prepared in 4 months, with limited resources (i.e., three DHS staff and minimal contractor support), based on the information available at that time. Further, it was prepared primarily to meet OMB’s fiscal year 2004 IT budget submission deadline and to help educate DHS’s senior executives about the importance of this architecture in the department’s overall transformation effort. Senior architecture officials also stated that a transition plan was not intended to be part of the scope of Version 1.0, and that the department’s initial focus was on maturing its ability to execute an approach and methodology for developing the next version of the architecture. 
Notwithstanding these reasons, constraints, and intentions, the fact remains that Version 1.0 is missing important content and, without this content, the department—as well as homeland security stakeholders in other federal agencies, state and local governments, and the private sector—will not have the sufficiently detailed, authoritative frame of reference that is needed to provide a common understanding of future homeland security operational, business, and supporting technology needs. Such a frame of reference is important to effectively guide and constrain the transformation of mission operations, business functions, and associated IT investments. Without it, DHS and other homeland security stakeholders will be challenged in their ability to effectively leverage technology to effect the kind of logical and systematic institutional change needed to optimize enterprisewide mission performance. The various architecture frameworks and architecture management best practices recognize the need to define the “To Be” environment using a top-down, business-driven approach in which the content of the organization’s strategic plan (mission, goals, objectives, scope, and outcomes) drives operational processes, functions, activities, and associated information needs, which in turn drive system applications and services and supporting technology standards. The architecture development methodology being employed by DHS also calls for this top-down, mission- and business-driven approach, which engages mission and business area subject matter experts. It specifically states that an architecture should be based on a functional business model that reflects the nature of the operational mission, the business strategy, and the information to be used to accomplish them. DHS did not follow this approach in developing Version 1.0 of its architecture, primarily because the department did not issue a strategic plan until February 2004. 
Specifically, the department released its initial architecture in September 2003, approximately 5 months before it issued its strategic plan. Without an explicit strategic direction to inform the architecture, the architecture’s business representation was derived from the existing architectures and the ongoing and planned IT investments of some of its component agencies (i.e., Immigration and Naturalization Service, Customs and Border Protection, Coast Guard, and the Federal Emergency Management Agency). As a result, Version 1.0 did not contain a departmentwide and national corporate business strategy that described such things as (1) the desired future state of its mission operations and business activities, (2) the specific goals and objectives to be strategically achieved, and (3) the strategic direction to be followed by the department to realize the desired future state. Rather, the architecture’s strategic operational and business content is basically the sum of its component agencies’ business strategy parts. Moreover, although the department is using generally accepted architecture development techniques, the architecture artifacts that have been derived using these techniques (i.e., the value chain analysis, CURE matrix, conceptual data model, and sequencing diagram) do not provide a consistent view of the scope of the department’s mission. In some instances, the vision focuses internally on departmental activities only, while in other instances, it focuses on homeland security at a national level (i.e., addresses other homeland security stakeholders, such as other federal agencies and state and local government). The architecture’s business strategy also does not identify corporate priorities and constraints to be considered when making departmentwide and national decisions about future homeland security activities. 
The DHS contractor responsible for evaluating the quality of Version 1.0 made similar comments concerning the architecture’s business strategy. Specifically, the contractor reported that the scope of the architecture was unclear, at times being internally focused on only the department, while at other times being more broadly focused on national homeland security. Further, the contractor reported that the business strategy did not include all DHS mission activities. According to the chief architect, the fiscal year 2004 IT budget submission deadline did not allow the department to delay development of the architecture until the strategic plan had been completed. A senior architecture official also stated that this time constraint (i.e., 4-month development period) did not allow subject matter experts (i.e., both internal and external DHS stakeholders) to be consulted and to participate in developing Version 1.0. According to the CIO, subject matter experts are now participating in the department’s architecture development activities. As stated above, having a mission- and business-driven enterprise architecture is a fundamental principle. Until the department uses an enterprisewide understanding of its mission operations and business as the basis for developing its architecture, its architecture’s utility will be greatly diminished, and it is unlikely that changes to existing operations and systems that are based on this architecture will provide for optimization of mission performance and satisfaction of stakeholder needs. Moreover, because DHS did not base Version 1.0 on such an understanding, the content of this version may prove to be invalid if future work shows that the strategic business assumptions used to develop it were inaccurate. This in turn would limit the value of Version 1.0 as a basis for building the next version. OMB guidance does not explicitly require agency enterprise architectures to align with the FEA. 
However, a requirement for alignment is implicit in the OMB guidance. For example, this guidance states that agencies’ major IT investments must align with each of the FEA’s published reference models (business, performance, services, and technical), and that agencies’ nonmajor IT investments must align with the business reference model. Since an agency’s enterprise architecture is to include a transition plan that strategically sequences its planned IT investments in a way that moves the agency from its current architectural environment to its target environment, this means that the agency’s investments would need to align with both the FEA and the agency enterprise architecture, which in turn would necessitate alignment between the FEA and the agency architecture. Aligning agencies’ architectures with the FEA is also an implied requirement in recently released OMB guidance for its enterprise architecture assessment tool. According to this guidance, agency enterprise architectures are “a basic building block to support the population of the FEA.” Further, OMB states that one of the purposes of the FEA is to inform agency efforts to develop their agency-specific enterprise architectures. We have previously reported that OMB’s expected relationship between the FEA and agency enterprise architectures has not been clearly articulated, in part because OMB has not defined key terms, such as architectural alignment. In the absence of clear definitions, we also reported that assumptions must be made about what alignment means, and that the greater the use of assumptions, the greater the chances of expectations about these relationships not being met and intended outcomes not being realized. For the purposes of this report, we have assumed that alignment can be examined from three perspectives: functional, structural, and semantic. 
Functional alignment means that the architecture and the reference models have been decomposed to the same level of detail to determine if the business operations, services, and technology components are similar in nature and purpose. Structural alignment means that the architecture and the reference models are both constructed similarly, for example, they may share the same hierarchical construct whereby information is grouped by common levels of detail. Semantic alignment means that the department’s architecture and the FEA reference models use similar terms and/or definitions that can be mapped to one another. The FEA and Version 1.0 of DHS’s enterprise architecture are not aligned functionally or structurally. Specifically, we could not map Version 1.0 to the FEA from either of these perspectives because the DHS architecture is not decomposed to the same level of detail as the reference models and thus does not permit association of the respective functional components and because the DHS architecture is not structured in a hierarchical fashion as the reference models are. However, the terms or definitions used in the business, services, and technical components of Version 1.0 could be mapped to similar terms in the FEA business, services, and technical reference models. The results of this mapping are discussed below. We mapped all 79 of the high-level activities that we found in the business view of the DHS architecture to similar terms in the FEA business reference model. To achieve this degree of mapping, however, we needed to trace the 79 high-level activities to multiple levels of the reference models, including business areas, lines of business, and subfunctions. 
For example, for the DHS high-level business activity “stockpile and deploy supplies” (defined as including managing immunizations, as well as identification, acquisition, development, maintenance, and distribution of other pharmaceutical and medical supplies) we needed to go to the reference model’s subfunction level to find “immunization management.” In addition, we were not able to map any terms in the DHS architecture to several areas in the business reference model that would appear relevant to DHS, such as the business area “mode of delivery” or the line of business “defense and national security.” We also mapped Version 1.0’s applications/services view to the FEA services reference model. In this case, the initial architecture contained terms that could be associated with all of the FEA reference model’s 7 service domains, 29 service types, and 168 service components. We also mapped terms used for technical services in the Version 1.0 technical view to the FEA technical reference model. However, this mapping was again based on associating high-level descriptions in Version 1.0 with lower-level descriptions in the FEA. Moreover, some terms for technology elements in Version 1.0 could not be mapped to the FEA technical reference model, such as “narrowband wireless and broadband wireless.” Conversely, some technical services and standards in the reference model that should be applicable to DHS were not evident in Version 1.0, such as software engineering (including test management services) and database middleware standards, respectively. In those instances where we could not semantically associate Version 1.0 to the FEA reference models, we found no associated explanations. As a result, we could not determine whether future alignment is envisioned or not. According to the CIO and the chief architect, the steps that DHS took to align the initial architecture with the FEA reference models represent the most that could be done in the time available. 
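The multilevel tracing described above, in which a high-level DHS activity term is associated with entries at successively deeper levels of an FEA-style reference model (business area, line of business, subfunction), can be sketched in Python. The model fragment and helper function below are illustrative assumptions only, not actual DHS or FEA data; the terms are drawn from the examples cited in this report.

```python
# Illustrative sketch of semantic mapping between high-level activity
# terms and a hierarchical reference model. The hierarchy fragment is
# an assumption for demonstration, not the actual FEA content.
FEA_FRAGMENT = {
    "services for citizens": {
        "health": ["immunization management", "illness prevention"],
    },
    "mode of delivery": {
        "direct services for citizens": ["military operations"],
    },
}

def semantic_map(term: str, model: dict) -> list:
    """Return the hierarchy path whose entry matches `term`, searching
    the business-area, line-of-business, and subfunction levels in turn.
    An empty list means no semantic match was found at any level."""
    term = term.lower()
    for area, lines in model.items():
        if area == term:
            return [area]
        for lob, subfunctions in lines.items():
            if lob == term:
                return [area, lob]
            for sub in subfunctions:
                if sub == term:
                    return [area, lob, sub]
    return []

# Mapping succeeds only at the deepest (subfunction) level, mirroring
# the report's observation about "immunization management":
print(semantic_map("immunization management", FEA_FRAGMENT))
# A term with no counterpart in the model maps to nothing:
print(semantic_map("defense and national security", FEA_FRAGMENT))
```

This kind of helper also makes the report's finding concrete: a mapping that only succeeds at lower levels of the hierarchy, or not at all, signals that the two structures are not decomposed or organized comparably.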
The architect also stated that changes would be made to the technical views of the architecture to more closely reflect the content within the FEA’s technical reference model. However, given that what is meant by agency architectural alignment to the FEA is not well defined, the degree to which DHS and other agencies can establish the intended relationship between the two is both challenging and uncertain. This in turn will constrain OMB’s ability to meet the goals it has set for the FEA. Having and using an enterprise architecture that reflects the department’s strategic operational and business needs and enables it to make informed decisions about competing investment options is critical to DHS’s business transformation and supporting system modernization efforts. DHS recognizes this and has produced an initial architecture in a short time with limited resources and is working on its next version. Nevertheless, the department is in the midst of transforming itself and investing hundreds of millions of dollars in supporting systems without a well-defined architecture to effectively guide and constrain these activities. Following this approach is a risky proposition, and the longer DHS goes without a well-defined and enforced architecture the greater the risk. Therefore, it is important that DHS ensure that the next version is based on a top-down, strategic business-based approach that involves key stakeholders, as advocated by best practices. It is also important for DHS to ensure that its architecture includes the necessary content. Until this is done, it will be prudent for the department to limit new system investments to those meeting certain criteria, as we have previously recommended. To do less puts the department at risk of investing hundreds of millions of dollars in efforts that will not promote integration and interoperability and will not optimize mission performance. 
Further, the relationships that OMB expects between the FEA and agency architectures, including DHS's, are not clear. Until OMB clarifies what it means by architectural alignment, it is unlikely that the outcomes it envisions and desires through architectural alignment will result. To ensure that DHS has a well-defined architecture to guide and constrain pressing transformation and modernization decisions, we recommend that the Secretary of Homeland Security direct the department's architecture executive steering committee, in collaboration with the CIO, to (1) ensure that the development of DHS's enterprise architecture is based on an approach and methodology that provides for identifying the range of mission operations and the focus of the business strategy and involving relevant stakeholders (external and internal) in driving the architecture's scope and content; and (2) develop, approve, and fund a plan for incorporating into the architecture the content that is missing. In addition, we are recommending 39 actions to ensure that future versions of the architecture include (1) the six key elements governing the business view of the "To Be" architectural content that our report identified as not being fully satisfied, (2) the three key elements governing the performance view of the "To Be" architectural content that our report identified as not being fully satisfied, (3) the seven key elements governing the information view of the "To Be" architectural content that our report identified as not being fully satisfied, (4) the five key elements governing the services/applications view of the "To Be" architectural content that our report identified as not being fully satisfied, (5) the six key elements governing the technical view of the "To Be" architectural content that our report identified as not being fully satisfied, (6) the seven key elements governing the security view of the "To Be" architectural content that our report identified as not being fully 
satisfied, and (7) the five key elements governing the transition plan content that our report identified as not being fully satisfied. In addition, to assist DHS and other agencies in developing and evolving their respective architectures, we recommend that the Director of OMB direct the FEA Program Management Office to clarify the expected relationship between the FEA and federal agencies’ architectures. At a minimum, this clarification should define key terms, such as architectural alignment. In DHS’s written comments on a draft of this report, signed by the Director, Bankcard Programs and GAO/OIG Liaison within the Office of the Chief Financial Officer (reprinted in app. IV), the department agreed that much work remains to develop both a target enterprise architecture and a transition plan to support business and IT transformation, and it stated that it would ensure that the architecture criteria that we cite in our report, which our recommendations reference, are addressed to the extent possible in Version 2.0 of its architecture. Notwithstanding these statements, the department also stated that it took exception to several aspects of our report, including our criteria and recommendations. In particular, it stated that (1) the criteria were not realistic and assumed the existence of a comprehensive enterprise architecture, the development of which was inconceivable in the time available, (2) the criteria had not been provided to the federal community and were not available when Version 1.0 of DHS’s architecture was being developed, and (3) the recommendations did not take into consideration the department’s limited resources. 
DHS stated that it had accomplished one of the most important goals of Version 1.0, which was "positioning the department to more actively engage with our business representatives, with a strategic plan in hand and a greater awareness of the need for and value of an enterprise architecture generally on the part of our senior and executive management." DHS also provided specific comments on our findings relative to each criterion that we assessed Version 1.0 against. These comments fell into two general categories: (1) agreeing that content is missing but stating that the content was not intended to be part of Version 1.0, and (2) disagreeing that content is missing (i.e., asserting that our findings were factually incorrect). We do not agree with the department's comments concerning the criteria and our recommendations. In particular, our report does not state that DHS should have ensured that Version 1.0 of its enterprise architecture satisfied all of the criteria that we cite. We have long held and reported the position that enterprise architecture development should be done incrementally, with each version of an architecture providing greater depth and detail to an enterprisewide, business-driven foundational layer. We provide in the report an analytical assessment of where Version 1.0 stands against a benchmark of where it will need to be in order to be an effective blueprint to guide and constrain major investment decisions for organizational transformation. In doing so, we have provided the department with a road map, grounded in explicit criteria, for incrementally developing a mission-derived blueprint. In addition, the criteria that we used in our review and cite in our report came from published literature on the content of enterprise architectures, which we structured into categories consistent with federal enterprise architecture guidance and have used in prior evaluations of other agencies' enterprise architectures, the results of the first of which we issued in September 2003. 
We shared these criteria and categories with DHS at the time that we began our review, and we shared the results of our review relative to each criterion with DHS enterprise architecture program and contractor officials over a 2-day period after we completed our review. At that time, both the DHS and the contractor officials agreed with our results. While we acknowledge that we had yet to publish our categorization of the criteria at the time that Version 1.0 of DHS's architecture was being developed, the criteria that we drew from and used were both well established and publicly available. In addition, we recognize in both the report and its recommendations the point made in DHS's comments about the initial architecture development effort being constrained by resources. It is because of this that our recommendations call for DHS's architecture executive steering committee, which is composed of those department business and technology executives who collectively control billions of dollars in resources, to develop, approve, and fund a plan for completing the architecture. In our view, the resource point cited in DHS's comments is a departmental funding allocation and prioritization decision, rather than a resource shortage issue. Also, we do not question the department's comment concerning the intent and goal of Version 1.0, or whether its goal has been accomplished. Rather, the purpose and scope of our work were to determine the extent to which the initial architecture version contained the building blocks of a well-defined blueprint and to thereby identify what, if anything, remained to be accomplished. If more needed to be done, our objective was to determine whether the initial version provided a foundation upon which to build any missing content. 
As we previously stated, development of a well-defined enterprise architecture is by necessity incremental, and our report is intended to provide DHS with a criteria-based road map for incrementally accomplishing this. To avoid any misunderstanding about the need to develop the architecture incrementally, we have added further detail on this topic to this report. With respect to DHS's specific comments on each of the findings and recommendations that acknowledged missing content, we support the department's statements indicating that it will address this missing content in the next or subsequent versions of the architecture. However, we do not agree with DHS's comments when it stated that our findings were factually incorrect or when it disagreed with the criteria. Our responses to DHS's comments for each of these areas of disagreement are provided in appendix IV. In their oral comments on a draft of this report, officials from OMB's Office of E-Government and Information Technology and its Office of General Counsel stated that OMB's Administrator for Electronic Government, Information and Technology had recently testified that additional work was needed to mature the FEA. In addition, the officials stated that OMB is committed to working to evolve the FEA and agency enterprise architectures, and that this work will clarify many of the issues raised in our report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security and to the Director of OMB. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-3439 or [email protected]. 
Key contributors to this report are acknowledged in appendix V. Our objectives were to determine whether the initial version of the Department of Homeland Security's (DHS) enterprise architecture (1) provides a foundation upon which to build and (2) is aligned with the Federal Enterprise Architecture (FEA). To address the first objective, we followed the approach that we have previously used to evaluate the content of an agency's enterprise architecture. Specifically, we first segmented Version 1.0 of the architecture into the three primary component parts of any architecture: the "As Is," the "To Be," and the transition plan. We then further divided the "As Is" and "To Be" architectures into five architectural components similar to the Office of Management and Budget's (OMB) architecture reference models and defined in our enterprise architecture maturity framework: business, information/data, services/applications, technical, and performance; we added security as a sixth component because of its recognized importance in the various architecture frameworks and its relevance to the other five architectural components. Because the department is investing about $4.1 billion in IT systems and supporting infrastructure in fiscal year 2004, we focused our evaluation on the "To Be" architecture and the transition plan and did not analyze whether DHS's "As Is" architecture satisfied relevant "As Is" guidance. For each of these six architectural components, we used the key architectural requirements that we previously reported as necessary for a well-defined "To Be" architecture. We also used the key architectural requirements that we previously reported as necessary for a well-defined transition plan. We then compared the "To Be" architecture and transition plan (Version 1.0) against the key elements. In doing so, we used the following criteria to determine whether each key element was fully, partially, or not satisfied. 
To assess the extent to which the architecture was aligned with the FEA, we compared the "To Be" architecture with the FEA business, services, and technical reference models, Versions 2.0, 1.0, and 1.1, respectively. We did not select the performance, information/data, and security models because department officials told us that these models were not part of the scope of their effort in developing Version 1.0. We therefore focused on the business, services, and technical models and attempted to map the architecture and FEA reference models at three levels: semantic, functional, and structural. However, we were unable to do so from a functional or structural standpoint because the DHS architecture was neither decomposed to the same level of detail nor constructed in a hierarchical fashion like the reference models. We therefore mapped key elements of the DHS architecture (e.g., business activities, target applications, and services) to the reference models by identifying similar terms and/or definitions. To augment our documentation reviews and analyses of the architecture, we also interviewed various officials, including the chief information officer and chief architect, to determine, among other things, these officials' comments on our detailed analysis. We also met with OMB officials to discuss OMB's process for reviewing agencies' enterprise architectures and the results of its review of DHS's architecture. According to OMB officials, the review of DHS's architecture was still ongoing, and thus we were not provided a copy of the review results. We conducted our work at DHS headquarters in Washington, D.C. We performed our work from November 2003 to May 2004 in accordance with generally accepted government auditing standards. Element satisfied? A business assessment that includes the enterprise's purpose, scope (e.g., organizations, business areas, and internal and external stakeholders' concerns), limitations or assumptions, and methods. 
The architecture does not contain a business assessment or gap analysis results. A gap analysis that describes the target outcomes and shortfalls, including strategic business issues, conclusions reached as a result of the analysis (e.g., missing capabilities), causal information, and rationales. However, the architecture recognizes the need to perform a business assessment and project-specific gap analyses. It also identifies possible concerns (e.g., inefficiencies in business function and technology) that may be addressed by the department. A business strategy that describes the desired future state of the business, the specific objectives to be achieved, and the strategic direction that will be followed by the enterprise to realize the desired future state. The architecture does not have a business strategy that adequately describes the desired future state of the business, the objectives to be achieved, and the strategic direction to be followed. However, the architecture does address to a limited degree the characteristics of a business strategy, as discussed below. A vision statement that describes the business areas requiring strategic attention based on the gap analysis. The architecture does contain a vision statement; however, this statement does not highlight opportunities for strategic change to business processes, nor does it present a consistent view of the national responsibilities for homeland security at the various levels (i.e., federal, state, local, and international). A description of the business priorities and constraints, including their relationships to, at a minimum, applicable laws and regulations, executive orders, departmental policy, procedures, guidance, and audit reports. The architecture recognizes that homeland security processes, procedures, and decisions about IT management should comply with applicable laws, regulations, and guidance, particularly those associated with privacy requirements. 
The architecture also specifically mentions the National Strategy for Homeland Security. However, the architecture does not explicitly identify, reconcile, prioritize, or align the applicable laws, regulations, and guidance. As a result, business priorities and constraints are not identified. A description of the scope of business change that is to occur to address identified gaps and realize the future desired business state. The scope of change, at a minimum, should identify expected changes to strategic goals, customers, suppliers, services, locations, and capabilities. The architecture does not explicitly identify what will be changed in the "As Is" environment. It also does not explicitly identify key customers, suppliers, products, services, locations, and capabilities for homeland security at the national level. A description of the measurable strategic business objectives to be met to achieve the desired change. The architecture does not describe measurable strategic business objectives; however, it does contain objectives in the transition strategy that may be used to develop strategic business objectives. A description of the measurable tactical business goals to be met to achieve the strategic objectives. The architecture does not describe measurable tactical business goals; however, it does describe some high-level performance measures for several of its business areas. A listing of opportunities to unify and simplify systems or processes across the department, including their relationships to solutions that align with the strategic initiatives to be implemented to achieve strategic objectives and tactical goals. The architecture does not align all opportunities for change with strategic initiatives and potential investments. However, the architecture does identify conceptual projects and opportunities to address inefficiencies in systems and processes. 
Common (standard and departmentwide) policies, procedures, and business and operational rules for consistent implementation of the architecture. A description of key business processes and how they support the department's mission, including the organizational units responsible for performing the business processes and the locations where the business processes will be performed. This description should provide for the consistent alignment of (1) applicable federal laws, regulations, and guidance; (2) department policies, procedures, and guidance; (3) operational activities; (4) organizational roles; and (5) operational events and information. A description of the operational management processes to ensure that the department's business transformation effort remains compliant with the business rules for fault, performance, security, configuration, and account management. X A description of the organizational approach (processes and organizational structure) for communications and interactions among business lines and program areas for (1) management reporting, (2) operational functions, and (3) architecture development and use (i.e., how to develop the architecture description, implement the architecture, and govern/manage the development and implementation of the architecture). A description of the processes for establishing, measuring, tracking, evaluating, and predicting business performance regarding business functions, baseline data, and service levels. The architecture does not describe these processes. However, the architecture recognizes the need for such processes and identifies a conceptual project and a business activity that will be used to establish these processes. A description of measurable business goals and outcomes for business products and services, including strategic and tactical objectives. 
The architecture does not describe explicit measurable business goals and outcomes for any of the department's primary and secondary business areas (e.g., identify threats and vulnerabilities; and prevent, prepare for, and recover from incidents). However, the architecture does provide a description of customer-focused, measurable business goals and outcomes (e.g., the average time taken to resolve customer inquiries) for all of the department's primary and secondary business areas (e.g., human resources and budget and finance), with one exception (i.e., the architecture does not contain customer-focused, measurable goals and outcomes for the primary line of business entitled "facilitate the flow of people and goods"). A description of measurable technical goals and outcomes for managing technology products and services for the "To Be" architecture that enables the achievement of business goals and outcomes. The architecture does not contain measurable technical goals and outcomes for managing technology products and services that enable the achievement of business goals and outcomes (e.g., identifying threats and preventing terrorist attacks). However, the architecture does contain performance measures for managing technology (e.g., percentage of data or information shared across organizational units and time to produce, create, and deliver products or services). The architecture also lists conceptual projects focused on improving technology management performance with respect to information sharing (e.g., infrastructure consolidation). A description of data management policies, procedures, processes, and tools (e.g., CURE matrix) for analyzing, designing, building, and maintaining databases in an enterprise architected environment. The architecture does not describe or reference enterprise data management policies, procedures, or processes. However, the architecture does contain a CURE matrix. 
The utility of this CURE matrix for planning purposes is questionable because the relationships among business functions and applications are ambiguous (not uniquely identified or defined). A description of the business and operational rules for data standardization to ensure data consistency, integrity, and accuracy, such as business and security rules that govern access to, maintenance of, and use of data. A data dictionary, which is a repository of standard data definitions for applications. The architecture does not contain a data dictionary. However, the architecture does contain an information dictionary that, while incomplete, does identify some data objects (e.g., cargo, incident, and weapon). As a result, this information dictionary could be used to facilitate the creation of a data dictionary. A conceptual data model that describes the fundamental things/objects (e.g., business or tourist visas, shipping manifests) that make up the business, without regard for how they will be physically stored. A conceptual data model contains the content needed to derive facts about the business and to facilitate the creation of business rules. It represents the consolidated structure of business objects to be used by business applications. The architecture does not provide a conceptual data model that contains the content needed to derive facts about the business and to facilitate the creation of business rules to build databases. The content is at such a high level (e.g., labels and terms) that it can be interpreted in numerous ways. However, the architecture does provide a high-level conceptual data model that identifies "super-classes" or groupings of objects without the required business context, such as (1) the complete definitions for information categories or classes and (2) concrete business objects. This information can be used to build the conceptual data model. 
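A CURE (create, update, reference, erase) matrix of the kind discussed above relates business functions to the applications that act on their data. The sketch below is a minimal illustration of how such a matrix can be represented so that each function/application pair is uniquely identified, avoiding the ambiguity noted in the assessment; all function and application names are invented for illustration and do not reflect DHS's actual matrix.

```python
# Illustrative CURE (create, update, reference, erase) matrix keyed by
# unique (business function, application) pairs. All names are
# hypothetical examples, not content from the DHS architecture.
CURE = {
    ("manage grants", "grant management app"): {"C", "U", "R"},
    ("track assets", "property management app"): {"C", "U", "R", "E"},
    ("report finances", "financial management app"): {"R"},
}

def read_only_apps(matrix: dict) -> set:
    """Applications that only reference data they never create or
    maintain — one simple planning question a well-formed CURE matrix
    can answer unambiguously."""
    return {app for (_, app), ops in matrix.items() if ops == {"R"}}

print(sorted(read_only_apps(CURE)))
```

When the row and column identifiers are unique, queries like this are mechanical; when they are ambiguous, as the report found, the same matrix cannot reliably support planning decisions.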
A logical database model that provides (1) a normalized (i.e., nonredundant) data structure that supports information flows and (2) the basis for developing the schemas for designing, building, and maintaining physical databases. A metadata model that specifies the rules and standards for representing data (e.g., data formats) and accessing information (e.g., data protocols) according to a documented business context that is complete, consistent, and practical. A description of the information flows and relationships among organizational units, business operations, and system elements. X A description of the services and their relationships to key end-user services to be provided by the application systems. The architecture does not specify all the end-user services to be provided by application systems (e.g., the use of e-mail as an end-user service for various applications), nor does it provide a rationale for this exclusion. It also does not specify the various relationships between the end-user services and the entities that will provide these services. The architecture also contains inconsistencies in the descriptions of the relationships between user services and application systems, which affect its utility. For example, in one instance, the architecture notes that correspondence management may involve "maintaining logs and references to pieces of correspondence that are of interest to the enterprise for tracking purposes and that these pieces of correspondence may be e-mails, paper letters, phone conversations, etc." In another instance, the architecture does not recognize the use of this e-mail service for managing correspondence. However, the architecture contains high-level descriptions of the types of application systems that will be needed (e.g., a financial management application that can manage all financial aspects of general accounting, budgeting, capital assets, and investment control). 
It also notes that "To Be" applications will be derived and created based on how each user class uses data while performing business activities. A list of application systems (acquisition/development and production portfolio) and their relative importance to achieving the department's vision, based on business value and technical performance. The architecture does not identify the applications' relative importance to the overall vision. For example, it does not explicitly identify and describe application systems that support functionality across organizational boundaries (e.g., local, state, and federal agencies). In addition, priorities are not explicitly defined for the target applications. However, the architecture provides a list of the types of candidate applications (e.g., financial, grant, and property management) and links these application types to business functions by providing an application-to-function cross-reference matrix. In addition, it identifies conceptual projects that may provide target capabilities or applications, and it prioritizes these projects according to scheduled completion times. For example, some conceptual projects are placed within a category labeled "Rationalize," which means they are scheduled for completion within 6 months. A description of the policies, procedures, processes, and tools for selecting, controlling, and evaluating application systems to enable effective IT investment management. A description of the enterprise application systems and system components and their interfaces. The architecture does not describe applications in terms of the business process flows that each application will support (e.g., how to identify and report threats and vulnerabilities), nor does the architecture describe the business process flows. The architecture also does not reflect how application selection decisions can or will be made without this information. 
Further, it does not identify human/machine boundaries, inputs, outputs, controls, and standard application programming interfaces. However, the architecture contains a list and graphic depictions of the types of application systems that would satisfy the department’s business needs, including a brief description of the functionality to be provided by these systems. For example, it describes a generic “financial management” application that could be satisfied by many application packages or development components. A description of the system development life cycle process for application development or acquisition and the integration of the process with the architecture, including policies, procedures, and architectural techniques and methods for acquiring systems throughout their life cycles. The common technical approach should also describe the process for integrating legacy systems with the systems to be developed/acquired. A list of infrastructure systems and a description of the systems’ hardware and software infrastructure components. The description should also reflect the system’s relative importance to achieving the department’s vision based on constraints, business value, and technical performance. The architecture does not provide a complete list of the “To Be” infrastructure systems, nor does it describe the functional characteristics, capabilities, and interconnections for the infrastructure projects listed. It also does not reflect the systems’ relative importance to achieving DHS’s vision. For example, the relationship between the department’s vision for infrastructure projects and their value in preventing terrorist attacks has not been defined. However, it does identify a conceptual project (i.e., OneDHS) that may be used to consolidate the infrastructure. 
It also identifies a list of conceptual applications (e.g., a communications management application to manage connectivity between networks) that may provide certain infrastructure capabilities and functions for OneDHS. Further, it identifies associated subprojects, such as a secure network, server and storage consolidation, and a standard desktop environment, and it associates them with the business areas. The architecture also lists several existing infrastructure systems, such as the Department of Defense’s (DOD) Secure Internet Protocol Routing Network, which may be used by the department and its homeland security partners. The architecture outlines an approach for establishing a framework to enable DHS to sequence the delivery of capabilities over time based on homeland security priorities. A description of the policies, procedures, processes, and tools for selecting, controlling, and evaluating infrastructure systems to enable effective IT investment management. A description of the technical reference model (TRM) that describes the enterprise infrastructure services, including specific details regarding the functionality and capabilities that these services will provide to enable the development of application systems. The architecture does not contain a TRM that describes all enterprise infrastructure services. The list of technical services is likely incomplete because the architecture does not identify all DHS organizations and its homeland security partners that supply and consume technical services. For example, the architecture indicates the use of DOD’s Secure Internet Protocol Routing Network, which is a Global Information Grid (GIG) enterprise service, to exchange information among homeland security organizations. However, it does not list the technical services that are provided by this network. The architecture also does not show whether these TRM services are common or reusable. 
In addition, the architecture does not describe the functionality and capabilities that will be provided by the services that are identified. However, it does contain a high-level TRM that provides a structure and vocabulary that can be used to describe DHS’s enterprise infrastructure services. It also contains application principles (e.g., there will be only one enterprise application for each function area, to be used by all departmental organizations) and technology patterns (e.g., use of commercial-off-the-shelf software for implementing relational databases) that can be used to guide technology development and acquisition decisions. A description in the TRM that identifies and describes (1) the technical standards to be implemented for each enterprise service and (2) the anticipated life cycle of each standard. The architecture does not contain a complete standards profile (i.e., it excludes technical standards that support a number of the services reflected in the TRM). For example, the profile does not identify standards that support “narrowband wireless access,” even though there are applicable homeland security applications that require this service (e.g., Land Mobile Radio, Air to Ground Communications, Mobile Operations IT). It also does not list the actual life cycles (e.g., “sunset” dates for current products and standards, and dates for when new developments will use target technologies) of many of the standards and products identified in the architecture. However, it does contain a list of technical standards that the department and/or its partners may implement. A description of the physical IT infrastructure needed to design and acquire systems, including the relationships among hardware, software, and communications devices. The architecture does not provide a description of the physical IT infrastructure that will be needed to support future operations. 
Specifically, it does not fully describe networks and their topologies and configurations for the department’s internal and/or shared spaces. For example, the architecture does not identify the component parts of the DHS consolidated network. It also does not relate the technology platforms to applications and business functions. However, the architecture does provide a vision for the technology environment, such as a high-level diagram that depicts information sharing among user groups. It also identifies telecommunications backbone options for exchanging data, such as use of the Internet for sensitive but unclassified data. The architecture also identifies types of technology platforms, including computing, storage, and communication devices and software. Common policies and procedures for developing infrastructure systems throughout their life cycles, including requirements management, design, implementation, testing, deployment, operations, and maintenance. These policies and procedures should also address how the applications will be integrated, including legacy systems. A description of the policies, procedures, goals, strategies, principles, and requirements relevant to information assurance and security and how they (the policies, procedures, goals, strategies, and requirements) align and integrate with other elements of the architecture (e.g., security services). The architecture does not describe the policies, procedures, goals, strategies, principles, and requirements that are relevant to information assurance and security, nor their alignment and integration with other architecture elements. 
However, it does contain (1) a high-level diagram that depicts a data classification schema to facilitate information sharing (e.g., sensitive but unclassified or top secret); (2) a security pattern that can be used to provide capabilities to secure and protect IT resources (e.g., confidentiality via encryption, authorization and access control via single sign-on, and intrusion detection and prevention using firewalls); and (3) a security principle that reflects the requirement for sharing information contained within nonclassified systems. This information could be used to develop a strategy. Definitions of terms related to security and information assurance. The architecture does not define all key terms that are listed (e.g., “information assurance” and “security services”). In addition, there are discrepancies between DHS’s security terms and others involved in homeland security, such as DOD. For example, DOD’s definitions for authentication, availability, confidentiality, and integrity differ from DHS’s definitions for the same terms. However, the architecture does contain definitions for some security-related terms (e.g., “identification and authorization” and “audit trail”). A listing of accountable organizations and their respective responsibilities for implementing enterprise security services. It is important to show organizational relationships in an operational view because they illustrate fundamental roles (e.g., who conducts operational activities) and management relationships (e.g., what is the command structure or relationship to other key players) and how these influence the operational nodes. A description of operational security rules that are derived from security policies. A description of enterprise security infrastructure services (e.g., identification and authentication) that will be needed to protect the department’s assets and the relationship of these services to protective mechanisms. 
The architecture’s TRM does not explicitly identify the security services, making it difficult to ensure that there are no redundant services, nor does it clearly define what constitutes a technical security service. In addition, the architecture identifies DOD’s Secure Internet Protocol Routing Network, thereby implying the use of a GIG enterprise service, but it does not reconcile how or whether these services will be used by DHS and other homeland security entities. However, the architecture does provide some guidance on security services, and it lists several services to be used to secure and protect resources, such as confidentiality, data integrity, authentication, and policy enforcement. A description of the security standards to be implemented for each enterprise service. These standards should be derived from security requirements. This description should also address how these services will align and integrate with other elements of the architecture (e.g., security policies and requirements). The architecture does not contain a complete list of standards. For example, it does not include standards for several security services (e.g., network security/intrusion detection systems and single sign-on) nor does it provide a rationale for excluding them. Further, the architecture does not explain how DHS will communicate with other extended architecture systems (e.g., DOD and Department of State) if those systems require certain standards to support DHS systems. However, the architecture does contain a list of several security standards that may be associated with security services. A description of the protection mechanisms (e.g., firewalls and intrusion detection software) that will be implemented to secure the department’s assets, including a description of the interrelationships among these protection mechanisms. 
The architecture does not contain a complete list of the protection mechanisms needed, nor does it describe all these mechanisms and the interrelationships among them. For example, protection mechanisms have not been identified for monitoring and auditing activities, biometrics, control and protection, computer forensics tools, and computer intrusion and alarm. Moreover, the architecture indicates that security requirements have not been analyzed, thereby bringing into question the validity of the protection mechanisms identified. However, the architecture does contain a list of protection mechanisms, such as firewalls. Analysis of the gaps between the baseline and the target architecture for business processes, information/data, and services/application systems to define missing and needed capabilities. A high-level strategy for implementing the enterprise architecture. The architecture does not have specific milestones for any actual projects that will deploy systems. However, the architecture does identify specific time-phased milestones for conceptual projects. For example, it notes that projects categorized as “Quick Hits” will be completed within 6 months, projects to consolidate duplicate systems within less than 2 years, and projects that optimize systems after 2 years. The architecture does not contain explicit metrics that can be implemented or assessed, but it recognizes the need for such metrics. However, the architecture does contain high-level metrics, such as “the percent of data/information shared across organizational units,” that may be used to establish detailed metrics. Financial and nonfinancial resources needed to . . . A listing of the legacy systems that will not be . . . A description of the training strategy/approach that will be implemented to address the changes made to the business operations (processes and systems) to promote operational efficiency and effectiveness. 
This plan should also address any changes to existing policies and procedures that affect day-to-day operations, as well as resource needs (staffing and funding). A list of the systems to be developed, acquired, or modified to achieve business needs and a description of the relationship between the system and the business need(s). A strategy for employing enterprise application integration (EAI) plans, methods, and tools to, for example, provide for efficiently reusing applications that already exist, concurrent with adding new applications and databases. The architecture does not contain a strategy for employing EAI plans, methods, and tools, nor does it describe how EAI will be used to integrate legacy and future systems. However, it does list technologies, products, and standards for EAI. It also contains a vision for a service-oriented architecture that may be developed into an EAI strategy. A technical (systems, infrastructure, and data) migration plan that shows the transition from legacy to replacement systems, including explicit sunset dates and intermediate systems that may be temporarily needed to sustain existing functionality during the transition period. An analysis of system interdependencies, including the level of effort required to implement related systems in a sequenced portfolio of projects that includes milestones, time lines, costs, and capabilities. A cost estimate for the initial phase(s) of the transition and a high-level cost projection for the transition to the target architecture. A strategy that describes the architecture’s governance and control structure and the integrated procedures, processes, and criteria (e.g., investment management and security) to be followed to ensure that the department’s business transformation effort remains compliant with the architecture. 
The architecture does not include an architecture governance and control structure and the integrated procedures, processes, and criteria to be followed. However, the architecture recognizes the need for a governance structure and contains a high-level discussion of governance that focuses on identifying the most critical governance issues and challenges, making general recommendations for dealing with these, and establishing the context in which appropriate managers, process owners, and subject matter experts will develop process details. The following are GAO’s comments on the Department of Homeland Security’s (DHS) letter dated July 23, 2004. 1. See the “Agency Comments and Our Evaluation” section of this report. 2. We agree that Version 1.0 included a high-level (or overview) business model that offered some descriptive information on weaknesses, such as potential areas of inefficiencies or overlaps in current departmental business functions and technology. However, the underlying business assessment that would form the basis for a clear statement of the enterprise’s purpose, scope, limitations, assumptions, and methods for successful business transformation was not present, and DHS provided no evidence that such an assessment had been performed. For example, for the areas that the business model overview identified as potential areas of inefficiency or overlap, the architecture did not provide the supporting analysis. The architecture also did not provide a time frame for completing such an assessment or state that one would be performed. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our finding. 3. We acknowledge the department’s comment that the business strategy and vision statement were not within the scope of the initial architecture description. 
However, we note that this comment is inconsistent with DHS’s intent to describe the top two rows of the Zachman framework, because these rows include this information. Moreover, as stated in our report, best practices require that the architecture be based on the business strategy and state that to do otherwise negatively affects the architecture’s utility and makes it unlikely that changes to existing operations and systems will provide for optimum mission performance and satisfaction of stakeholders’ needs. In addition, while we do not question that the business strategy and vision statement are included in DHS’s strategic plan, we did not evaluate this plan because it was issued 5 months after the approval of Version 1.0. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our findings. 4. As stated in the report, we do not question the department’s intent for Version 1.0 of the architecture or whether these goals have been achieved. However, our analysis shows that important architecture artifacts that would be expected to be included in this version and that are associated with the top two rows of the Zachman framework were not included in the architecture description. 5. We acknowledge the department’s comment that Version 1.0 of the architecture did not contain measurable strategic business objectives or tactical business goals, as evidenced by our finding that this information was missing. In addition, while we do not question that this information is included in DHS’s strategic plan, we would note that we did not evaluate this plan because it was issued 5 months after the approval of Version 1.0. With respect to the governance strategy and plan, the former outlined the steps to be taken to develop such a strategy, and the latter was not contained within Version 1.0, nor was it provided separately. 
Further, as previously noted, we do not question the intent of Version 1.0. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our findings. 6. We disagree. While we acknowledge that there are high-level business functions and activities in the business model, the model did not define business processes. Business process descriptions have a definitive beginning and end and reflect the interrelationships among business functions and activities. The functions and activities described in Version 1.0 had not been decomposed to a sufficient level of operational detail to describe routine tasks (e.g., develop mitigation strategies to minimize the impact of the threat). Further, when we concluded our analysis and shared our findings with architecture officials and supporting contractor personnel, they agreed with the criteria and with our findings. 7. Version 1.0 of the architecture did not include a communications plan, nor was such a plan provided separately. However, we do agree that effective management reporting will depend on DHS’s ability to collect the right information for the architecture program. 8. The organizational chart referred to in this comment was not provided to GAO. 9. We disagree. While we acknowledge that the architecture contains a high-level or abstract conceptual data model, we found that the model lacked the information for the business owner’s view of data and for the creation of a conceptual data model that can be used to develop the logical database model as required by the Zachman framework, which DHS has acknowledged that it is following to develop its architecture. Specifically, this would require that the conceptual data model (1) include concrete business objects, (2) enable facts about the business to be derived, and (3) facilitate the development and validation of business rules. 
Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with the criteria and our findings. 10. The focus of our review was the content of Version 1.0 of the architecture. We did not evaluate the “OneDHS” initiative as part of this effort because it was identified in the architecture as a conceptual project. 11. We disagree. While we acknowledge that the architecture indicated the need to perform project-specific gap analyses, these analyses were not included in Version 1.0, and DHS did not provide any evidence that such analyses had been performed. In addition, the department did not provide a time frame for completing them. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our findings. 12. We disagree. While we acknowledge that conceptual projects were linked with proposed IT investments (i.e., exhibit 300s), the architecture did not show the correlation among the projects and the potential investments. To show this correlation, DHS would have needed to reflect the extent to which the identified business need— which should be based on a gap analysis—would be addressed by the proposed investment, and this explanation would be documented within the architecture. However, the architecture did not contain this information or a time frame for when it would be provided. The architecture also did not include information on the approval status of these proposed investments. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our findings. 13. We disagree. Version 1.0 of the DHS architecture does not provide sufficient information to differentiate between existing and new systems. 
In addition, the architecture did not include an analysis that identified existing systems that would be terminated. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our findings. 14. We disagree. The architecture does not contain detailed training approaches, strategies, or plans. Instead, the architecture contains high-level briefings that refer to planned activities to determine the needs for training based on anticipated changes. These needs, once identified, may be used to develop a business-specific plan for change management and training. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our findings. 15. We disagree. While we acknowledge that the architecture listed the names of both existing systems and several systems under development, it did not identify which of these systems would be developed, modified, acquired, and/or used as intermediate systems until the target system has been deployed to meet specific future business needs. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our findings. 16. We disagree. We acknowledge that the architecture included a sequencing diagram that graphically associated the components and the conceptual projects. However, the architecture did not provide either an explanation of the graphically depicted relationships or an analysis of the interdependencies. DHS also did not provide evidence that such an analysis had been performed. Further, when we concluded our analysis and shared our findings with senior DHS architecture officials and supporting contractor personnel, they agreed with our findings. 
Staff who made key contributions to this report were Joseph Cruz, Joanne Fiorino, Anh Le, Randolph Tekeley, and William Wadsworth.
The Department of Homeland Security (DHS) is attempting to integrate 22 federal agencies, each specializing in one or more interrelated aspects of homeland security. An enterprise architecture is a key tool for effectively and efficiently accomplishing this. In September 2003, DHS issued an initial version of its architecture. Since 2002, the Office of Management and Budget (OMB) has issued various components of the Federal Enterprise Architecture (FEA), which is intended to be, among other things, a framework for informing the content of agencies' enterprise architectures. GAO was asked to determine whether the initial version of DHS's architecture (1) provides a foundation upon which to build and (2) is aligned with the FEA. DHS's initial enterprise architecture provides a partial foundation upon which to build future versions. However, it is missing, either in part or in total, all of the key elements expected to be found in a well-defined architecture, such as descriptions of business processes, information flows among these processes, and security rules associated with these information flows, to name just a few. Moreover, the key elements that are at least partially present in the initial version were not derived in a manner consistent with best practices for architecture development. Instead, they are based on assumptions about a DHS or national corporate business strategy and, according to DHS, are largely the products of combining the existing architectures of several of the department's predecessor agencies, along with their respective portfolios of system investment projects. DHS officials agreed that their initial version is lacking key elements, and they stated that this version represents what could be done in the absence of a strategic plan, with limited resources, and in the 4 months that were available to meet an OMB deadline for submitting the department's fiscal year 2004 information technology budget request. 
In addition, they stated that the next version of the architecture, which is to be issued in September 2004, would have much more content. As a result, DHS does not yet have the necessary architectural blueprint to effectively guide and constrain its ongoing business transformation efforts and the hundreds of millions of dollars that it is investing in supporting information technology assets. Without this, DHS runs the risk that its efforts and investments will not be well integrated, will be duplicative, will be unnecessarily costly to maintain and interface, and will not optimize overall mission performance. The department's initial enterprise architecture can be traced semantically to the FEA, which means that similar terms and/or definitions of terms can be found in the respective architectures. However, traceability in terms of architecture structures and functions is not apparent. Because of this, it is not clear whether the substance and intent of the respective architectures are in fact aligned, meaning that, if both were implemented, they would produce similar outcomes. This is due at least in part to the fact that OMB has yet to clearly define what it expects the relationship between agencies' enterprise architectures and the FEA to be, including what it means by architectural alignment.
A safe and secure aviation system is a critical component to securing the nation’s overall physical infrastructure and maintaining its economic vitality. Billions of dollars and a myriad of programs and policies have been devoted to achieving such a system. Critical to ensuring aviation security are screening checkpoints, at which screening personnel check over 2 million individuals and their baggage each day for weapons, explosives, and other dangerous articles that could pose a threat to the safety of an aircraft and those aboard it. All passengers who seek to enter secure areas at the nation’s airports must pass through screening checkpoints and be cleared by screeners. In addition, many airline and airport employees, including flight crews, ground personnel, and concession vendors, have to be cleared by screeners. At the nation’s 429 commercial airports that are subject to security requirements, screeners use a variety of technologies and procedures to screen individuals. These include x-ray machines to examine carry-on baggage, metal detectors to identify any hidden metallic objects, and physical searches of items, including those that cannot be scanned by x-rays, such as baby carriers or baggage that has been x-rayed and contains unidentified objects. In response to the terrorist attacks of September 11, 2001, the Federal Aviation Administration (FAA) and the air carriers implemented new security controls to improve security. These actions included increased screening of baggage and passengers at airport checkpoints with the use of explosives trace detection devices and hand-held metal detectors, the mandatory removal of laptop computers from carrying cases, and the removal of shoes. They included additional screening of randomly selected passengers at an airline’s boarding gate. 
Although these initiatives have been a visible sign of heightened security procedures, they have also, in some instances, caused longer security delays, inconvenienced the traveling public, and raised questions about the merits of using these techniques on assumed lower-risk travelers, such as young children. Congress has also taken actions to improve aviation security. In November 2001, it passed the Aviation and Transportation Security Act, which transferred aviation security from FAA to the newly created TSA and directed TSA to take over responsibility for airport screening. The Act also left to TSA’s discretion whether to “establish requirements to implement trusted passenger programs and use available technologies to expedite security screening of passengers who participate in such programs, thereby allowing security screening personnel to focus on those passengers who should be subject to more extensive screening.” In response to this Act, officials representing aviation and business travel groups have proposed developing a registered traveler program. Under their proposals, travelers who voluntarily provide personal information and clear a background check would be enrolled as registered travelers. These participants would receive some form of identification, such as a card that includes a unique personal characteristic like a fingerprint, which they would use at an airport to verify their identity and enrollment in the program. Because they would have been prescreened, they would be entitled to different security screening procedures at the airport. These could be as simple as designating a separate line for registered travelers, or could include less intrusive screening. Although TSA had initially resisted such a program because of concerns that it could weaken the airport security system, it has recently changed its position and has begun assessing the feasibility and need for such a program and considering the implementation of a test program. 
The concept underlying a registered traveler program is similar to one that TSA has been studying for transportation workers—a Transportation Worker Identity Credential (TWIC)—that could be used to positively identify transportation workers such as pilots and flight attendants and to expedite their processing at airport security checkpoints. TSA had been studying the TWIC program for several months. Initially, the agency had planned to implement the TWIC program first, saying that any registered traveler program would be implemented after establishing the TWIC program. In recent months, congressional appropriations restrictions have caused TSA to postpone TWIC’s development. According to a senior agency official, however, TSA was still planning to go forward with studying the registered traveler program concept. Although most of the 22 stakeholders we interviewed supported a registered traveler program, several stakeholders opposed it. Our literature review and supporters of the program whom we interviewed identified two primary purposes for such a program—improving the quality and efficiency of airport security and reducing the inconvenience that some travelers have experienced by lessening uncertainty about the length of delays and the level of scrutiny they are likely to encounter. The literature we reviewed and more than a half-dozen of the 22 stakeholders we contacted suggested that such a program could help improve the quality and efficiency of security by allowing security officials to target resources at potentially higher risk travelers. Several stakeholders also indicated that it could reduce the inconvenience of heightened security measures for some travelers, thus encouraging Americans to fly more often, and thereby helping to improve the economic health of the aviation industry. 
Representatives of air traveler groups identified other potential uses of a registered traveler program that were not directly linked to improving aviation security, such as better tracking of frequent flier miles for program participants. Many of the 22 stakeholders we contacted and much of the literature we reviewed identified the improvement of aviation security as a key purpose for implementing a registered traveler program. Such a program would allow officials to target security resources at those travelers who pose a greater security risk or about whom little is known. This concept is based on the idea that not all travelers present the same threat to aviation security, and thus not everyone requires the same level of scrutiny. Our recent work on addressing homeland security issues also highlights the need to integrate risk management into the nation’s security planning and to target resources at high-priority risks. The concept is similar to risk-based security models that have already been used in Europe and Israel, which focus security on identifying risky travelers and more appropriately matching resources to those risks, rather than attempting to detect objects on all travelers. For example, one study suggested that individuals who had been prescreened through background checks and credentialed as registered travelers be identified as low risk and therefore subjected to less stringent security. This distinction would allow security officials to direct more resources and potentially better screening equipment at other travelers who might pose a higher security risk, presumably providing better detection and increased deterrence. In addition, several stakeholders also suggested that a registered traveler program would enable TSA to more efficiently use its limited resources. 
Several of these stakeholders suggested that a registered traveler program could help TSA more cost-effectively focus its equipment and personnel needs to better meet its security goals. For example, two stakeholders stated that TSA would generally not have to intensively screen registered travelers’ checked baggage with explosives detection systems that cost about $1 million each. As a result, TSA could reduce its overall expenditures for such machines. In another example, a representative from a major airline suggested that because registered travelers would require less stringent scrutiny, TSA could provide a registered traveler checkpoint lane that would enable TSA to use fewer screeners at its checkpoint lanes; this would reduce the number of passenger screeners from the estimated 33,000 that it plans to hire nationwide. In contrast, several stakeholders and TSA officials said that less stringent screening for some travelers could weaken security. For example, two stakeholders expressed concerns that allowing some travelers to undergo less stringent screening could weaken overall aviation security by introducing vulnerabilities into the system. Similarly, the first head of TSA had publicly opposed the program because of the potential for members of “sleeper cells”—terrorists who spend time in the United States building up a law-abiding record—to become registered travelers in order to take advantage of less stringent security screening. The program manager heading TSA’s Registered Traveler Task Force explained that the agency has established a baseline level of screening that all passengers and workers will be required to undergo, regardless of whether they are registered. Nevertheless, a senior TSA official told us that the agency now supports the registered traveler concept as part of developing a more risk-based security system, which would include a refined version of the current automated passenger prescreening system. 
While the automated prescreening system is used on all passengers, it focuses on those who are most likely to present threats. In contrast to a registered traveler program, the automated system is not readily apparent to air passengers. Moreover, the registered traveler program would focus on those who are not likely to present threats, and it would be voluntary. Some stakeholders we contacted said that a registered traveler program, if implemented, should serve to complement the automated system, rather than replace it. According to the literature we reviewed and our discussions with several stakeholders, reducing the inconvenience of security screening procedures implemented after September 11, 2001, constitutes another major purpose of a registered traveler program, in addition to potentially improving security. The literature and these stakeholders indicated that participants in a registered traveler program would receive consistent, efficient, and less intrusive screening, which would reduce their inconvenience and serve as an incentive to fly more, particularly if they are business travelers. According to various representatives of aviation and business travel groups, travelers currently face uncertainty regarding the time needed to get through security screening lines and inconsistency about the extent of screening they will encounter at various airports. For example, one stakeholder estimated that prior to September 11, 2001, it took about 5 to 8 seconds, on average, for a traveler to enter, be processed, and clear a security checkpoint; since then, it takes about 20 to 25 seconds, on average, resulting in long lines and delays for some travelers. As a result, travelers need to arrive at airports much earlier than before, which can result in wasted time at the airport if security lines are short or significant time spent in security lines if they are long. 
Additionally, a few stakeholders stated that travelers are inconvenienced when they are subjected to personal searches or secondary screening at the gates for no apparent reason. While some stakeholders attributed reductions in the number of passengers traveling by air to these inconveniences, others attributed them to the economic downturn. Some literature and three stakeholders indicated that travelers, particularly business travelers making shorter trips (up to 750 miles), have, as a result of these inconveniences, reduced the number of flights they take or stopped flying altogether, causing significant economic harm to the aviation industry. For example, according to a survey of its frequent fliers, one major airline estimates that new airport security procedures and their associated inconveniences have caused 27 percent of its former frequent fliers to stop flying. Based on this survey’s data, the Air Transport Association, which represents major U.S. air carriers, estimates that security inconveniences have cost the aviation industry $2.5 billion in lost revenue since September 11, 2001. Supporters of a registered traveler program indicated that it would be a component of any industry recovery and that it is particularly needed to convince business travelers to resume flying. To the extent that registered travelers would fly more often, the program could also help revitalize industries linked to air travel, including aviation-related manufacturing and such tourism-related businesses as hotels and travel agencies. However, not all stakeholders agreed that a registered traveler program would significantly improve the economic condition of the aviation industry. For example, officials from another major U.S. airline believed that the declining overall economy has played a much larger role than security inconveniences in reducing air travel. 
They also said that most of their customers currently wait 10 minutes or less in security lines, on average—significantly less than immediately after September 11, 2001—and that security inconveniences are no longer a major issue for their passengers. In addition to the two major purposes of a registered traveler program, some stakeholders and some literature we reviewed identified other potential uses. For example, we found that such a program could be part of an enhanced customer service package for travelers and could be used to expedite check-in at airports and to track frequent flier miles. Some stakeholders identified potential law enforcement uses, such as collecting information obtained during background checks to help identify individuals wanted by the police, or tracking the movement of citizens who might pose criminal risks. Finally, representatives of air traveler groups envisioned extensive marketing uses for data collected on registered travelers by selling it to such travel-related businesses as hotels and rental car companies and by providing registered travelers with discounts at these businesses. Two stakeholders envisioned that these secondary uses would evolve over time, as the program became more widespread. However, civil liberties advocates we spoke with were particularly concerned about using the program for purposes beyond aviation security, as well as about the privacy issues associated with the data collected on program participants and with tracking their movements. Our literature review and discussions with stakeholders identified a number of policy and implementation issues that might need to be addressed if a registered traveler program is to be implemented. 
Stakeholders we spoke with held a wide range of opinions on such key policy issues as determining (1) who should be eligible to apply to the program; (2) the type and the extent of background checks needed to certify that applicants can enroll in the program, and who should perform them; (3) the security screening procedures that should apply to registered travelers, and how these would differ from those applied to other travelers; and (4) the extent to which equity, privacy, and liability issues would impede program implementation. Most stakeholders indicated that only the federal government has the resources and authority to resolve these issues. In addition to these policy questions, our research and stakeholders identified practical implementation issues that need to be considered before a program could be implemented. These include deciding (1) which technologies to use, and how to manage the data collected on travelers; (2) how many airports and how many passengers should participate in a registered traveler program; and (3) which entities would be responsible for financing the program, and how much it would cost. Most stakeholders we contacted agreed that, ultimately, the federal government should make the key policy decisions on program eligibility criteria, requirements for background checks, and specific security-screening procedures for registered travelers. In addition, the federal government should also address equity, privacy, and liability issues raised by such a program. Stakeholders also offered diverse suggestions as to how some of these issues could be resolved, and a few expressed eagerness to work with TSA. Although almost all the stakeholders we contacted agreed that a registered traveler program should be voluntary, they offered a wide variety of suggestions as to who should be eligible to apply to the program. These suggestions ranged from allowing any U.S. 
or foreign citizen to apply to the program to limiting it only to members of airline frequent flier programs. Although most stakeholders who discussed this issue with us favored broad participation, many of them felt it should be limited to U.S. citizens because verifying information and conducting background checks on foreigners could be very difficult. Several stakeholders said that extensive participation would be desirable from a security perspective because it would enable security officials to direct intensive and expensive resources toward unregistered travelers who might pose a higher risk. Several stakeholders indicated that it would be unfair to limit the program only to frequent fliers, while representatives from two groups indicated that such a limitation could provide airlines an incentive to help lure these travelers back to frequent air travel. We also found differing opinions as to the type and extent of background check needed to determine whether an applicant should be eligible to enroll in a registered traveler program. For example, one stakeholder suggested that the background check should primarily focus on determining whether the applicant exists under a known identity and truly is who he or she claims to be. This check could include verification that an individual has paid income taxes over a certain period of time (for example, the past 10 years), has lived at the same residence for a certain number of years, and has a sufficient credit history. Crosschecking a variety of public and private data sources, such as income tax payment records and credit histories, could verify that an applicant’s name and social security number are consistent. However, access to income tax payment records would probably require an amendment to existing law. Another stakeholder said that the program’s background check should be similar to what is done when issuing a U.S. passport. 
A passport check consists, in part, of a name check against a database that includes information from a variety of federal sources, including intelligence, immigration, and child support enforcement data. In contrast, others felt that applicants should undergo a more substantial check, such as an FBI-type background check, similar to what current airline or federal government employees must pass; or a criminal background check, to verify that the applicant does not have a criminal history. This could include interviewing associates and neighbors as well as credit and criminal history checks. In this case, applicants with criminal histories might be denied the right to participate in a registered traveler program. No matter what the extent of these checks, most stakeholders generally agreed that the federal government should perform or oversee them. They gave two reasons for this: (1) the federal government has access to the types of data sources necessary to complete them, and (2) airlines would be unwilling to take on the responsibility for performing them because of liability concerns. One stakeholder also suggested that the federal government could contract out responsibility for background checks to a private company, or that a third-party, nonprofit organization could be responsible for them. A majority of stakeholders also agreed that the federal government should be responsible for developing the criteria needed to determine whether an applicant is eligible to enroll and for making the final eligibility determination. Some stakeholders also stated that background checks should result in a simple yes or no determination, meaning that all applicants who passed the background check would be able to enroll in the program and the ones who did not pass would be denied. Other stakeholders alternatively recommended that all applicants be assigned a security score, determined according to the factors found during the background check. 
This security score would establish the level of screening given an individual at a security checkpoint. TSA has indicated that, at a minimum, the government would have to be responsible for ensuring that applicants are eligible to enroll and that the data used to verify identities and perform background checks are accurate and up-to-date. All the stakeholders we contacted agreed that registered travelers should be subjected to some minimum measure of security screening, and that the level of screening designated for them should generally be less extensive and less intrusive than the security screening required for all other passengers. Most stakeholders anticipated that a participant would receive a card that possessed some unique identifier, such as a fingerprint or an iris scan, to identify the participant as a registered traveler and to verify his or her identity. When arriving at an airport security checkpoint, the registered traveler would swipe the card through a reader that would authenticate the card and verify the individual’s identity by matching him or her against the specific identifier on the card. If the card is authenticated and the holder is verified as a registered traveler, the traveler would proceed through security. Most stakeholders suggested that registered travelers pass through designated security lines, to decrease the total amount of time they spend waiting at the security checkpoint. If the equipment cannot read the card or verify the traveler’s identity, or if that passenger is deemed to be a security risk, then the traveler would be subjected to additional security screening procedures, which might also include full-body screening and baggage searches. If the name on the registered traveler card matches a name on a watch-list or if new concerns about the traveler emerge, the card could be revoked. 
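The checkpoint sequence described above (authenticate the card, verify the holder against the unique identifier on it, route the traveler to a designated lane or to additional screening, and revoke the card on a watch-list match) can be sketched in a few lines of code. The sketch is purely illustrative: every name below, such as `TravelerCard` and `screen_at_checkpoint`, is a hypothetical stand-in, not part of any actual or proposed TSA system.

```python
# Illustrative sketch of the checkpoint decision logic described above.
# All names and the simple equality "biometric match" are hypothetical
# stand-ins; no real TSA system or API is implied.

from dataclasses import dataclass

@dataclass
class TravelerCard:
    holder_name: str
    biometric_template: str   # e.g., an encoded fingerprint or iris scan
    revoked: bool = False

def biometric_matches(card: TravelerCard, live_sample: str) -> bool:
    # Stand-in for a real biometric comparison; here, a simple equality check.
    return card.biometric_template == live_sample

def screen_at_checkpoint(card: TravelerCard, live_sample: str,
                         watch_list: set) -> str:
    """Return the screening path a traveler is routed to."""
    if card.revoked or card.holder_name in watch_list:
        # Name matches a watch list or new concerns emerged: revoke the card.
        card.revoked = True
        return "additional screening"
    if not biometric_matches(card, live_sample):
        # The reader cannot verify the traveler's identity.
        return "additional screening"
    return "registered traveler lane"
```

In practice the equality test would be a probabilistic biometric match with nonzero false-accept and false-reject rates, which is one reason stakeholders disagreed about which biometric identifier to use.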
A common suggestion was that registered travelers would undergo pre-September 11 security-screening measures, which involved their walking through a magnetometer and the x-raying of their carry-on baggage. Moreover, they would not be subjected to random selection or additional security measures unless warranted, and they would be exempted from random secondary searches at the boarding gate. According to TSA officials, the agency is willing to consider some differentiated security procedures for program participants. As for security procedures for those not enrolled in such a program, several stakeholders agreed that nonparticipants would have to undergo current security screening measures, at a minimum. Current security measures involve walking through a magnetometer, having carry-on baggage run through an x-ray machine, and being subjected to random searches of baggage for traces of explosives, hand searches for weapons, and the removal of shoes for examination. Travelers may also be randomly selected for rescreening in the gate area, although TSA has planned pilot programs to determine whether to eliminate this rescreening. Other stakeholders suggested that travelers who were not enrolled in the registered traveler program should be subjected to enhanced security screening, including more stringent x-rays and baggage screening than are currently in place at the airports. These stakeholders thought that because little would be known about nonparticipants, they should be subjected to enhanced security screening measures. In addition, several stakeholders mentioned that a registered traveler program might be useful in facilitating checked-baggage screening. For example, one stakeholder suggested that the x-ray screening of registered travelers’ baggage could be less intensive than the screening required for all other passengers, thus reducing the time it would take to screen all checked baggage. 
A few stakeholders even suggested that the most sophisticated baggage screening technology, such as explosives detection machines, would not be needed to screen a registered traveler’s checked baggage. However, the 2001 Aviation and Transportation Security Act requires the screening of all checked baggage, and using a registered traveler program to lessen the level of the checked baggage screening would not be permissible under the requirements of the Act. Finally, our research and discussions with stakeholders raised nonsecurity-related policy issues, including equity, privacy, and liability concerns that could impede implementation of a registered traveler program. With respect to equity issues, some stakeholders raised concerns that the federal government should carefully develop eligibility and enrollment criteria that would avoid automatically excluding certain classes of people from participating in the program. For example, requiring applicants to pay a high application or enrollment fee could deter some applicants for financial reasons. In addition, concern was expressed that certain races and ethnicities, mainly Arab-Americans, would be systematically excluded from program participation. Most stakeholders, however, did not generally view equity issues as being a major obstacle to developing the program, and one pointed to the precedent set by existing government programs that selectively confer known status to program participants. For example, the joint U.S./Canadian NEXUS pilot program, a program for travelers who frequently cross the U.S./Canadian border, is designed to streamline the movement of low-risk travelers across this border by using designated passage lanes and immigration-inspection booths, as well as some risk-management techniques similar to those proposed for use in a registered traveler program. 
With respect to privacy issues, civil liberties advocates we spoke with expressed concerns that the program might be used for purposes beyond its initial one and that participants’ information would need protection. They were particularly concerned about the potential for such a program to lead to the establishment of a national identity card, or to other uses not related to air travel. For example, some suggested that there could be enormous pressure on those who are not part of the program to apply, given the advantages of the program, and this would therefore, in effect, lead to a national identity card. One stakeholder raised a concern about the card’s becoming a prerequisite for obtaining a job that includes traveling responsibilities, or the collected information’s being used for other purposes, such as identifying those sought by police. Others countered that because participation in a registered traveler program would be voluntary, privacy concerns should not be a significant issue. According to TSA attorneys, legal protections already in place to prevent the proliferation of private information are probably applicable, and additional safeguards for this program could be pursued. Through our review, we identified two particular liability issues potentially associated with the concept of a registered traveler program. First, it is uncertain which entity would be liable and to what extent that entity would be liable if a registered traveler were to commit a terrorist act at an airport or on a flight. Second, it is also unclear what liability issues might arise if an applicant were rejected based on false or inaccurate information, or the applicant did not meet the eligibility criteria. 
For the most part, stakeholders who addressed the liability issue maintained that, because the federal government is already responsible for aviation security, and because it is likely to play an integral role in developing and administering such a program, security breaches by registered travelers would not raise new liability concerns. Although the assumption of screening responsibilities has increased the federal government’s potential exposure to liability for breaches of aviation security, TSA representatives were unsure what the liability ramifications would be for the federal government for security breaches or terrorist acts committed by participants of a registered traveler program. Fewer stakeholders offered views on whether there would be liability issues if an applicant were denied participation in a registered traveler program because of false or inaccurate information. However, some indicated that the federal government’s participation, particularly in developing eligibility criteria, would be key to mitigating liability issues. One stakeholder said that the program must include appeal procedures to specify under what conditions an individual could appeal if denied access to the program, who or what entity would hear an appeal, and whether an individual would be able to present evidence in his or her defense. Other stakeholders, however, stressed the importance of keeping eligibility criteria and reasons for applicant rejection confidential, because they believe that confidentiality would be crucial to maintaining the security of the program. TSA maintained that if the program were voluntary, participants might have less ability to appeal than they would in a government entitlement program, in which participation might be guaranteed by statute. In addition to key policy issues, some stakeholders we spoke with identified a number of key program implementation issues to consider. 
Specifically, they involve choosing appropriate technologies, determining how to manage data collection and security, defining the program’s scope, and determining the program’s costs and financing structure. Our research indicated that developing and implementing a registered traveler program would require key choices about which technologies to use. Among the criteria cited by stakeholders were a technology’s ability to (1) provide accurate data about travelers, (2) function well in an airport environment, and (3) safeguard information from fraud. One of the first decisions that would have to be made in this area is whether to use biometrics to verify the identity of registered passengers and, if so, which biometric identifier to use. The term “biometrics” refers to a wide range of technologies that can be used to verify a person’s identity by measuring and analyzing human characteristics. Physiological biometrics rely on data derived from measuring a part of the body and can provide highly accurate confirmation of a specific person’s identity. While the majority of those we interviewed said that some sort of biometric identifier is critical to an effective registered traveler program, there was little agreement among stakeholders as to the most appropriate biometric for this program. Issues to consider when making decisions related to using biometric technology include the accuracy of a specific technology, user acceptance, and the costs of implementation and operation. Although there is no consensus on which biometric identifier should be used for a registered traveler program, three biometric identifiers were cited most frequently as offering the requisite capabilities for a program: iris scans (using the distinctive features of the iris), fingerprints, and hand geometry (using distinctive features of the hand). 
Although each of the three identifiers has been used in airport trials, there are disadvantages associated with each of them. (Appendix III outlines some of the advantages and disadvantages of each.) A few stakeholders also claimed that a biometric should not be part of a registered traveler program. Among the reasons cited were that biometric technology is expensive, does not allow for quick processing of numerous travelers, and is not foolproof. Some studies have concluded that current biometric technology is not as infallible as biometric vendors claim. For example, a German technology magazine recently demonstrated that using reactivated latent images and forgeries could defeat fingerprint and iris recognition systems. In addition, one stakeholder stated that an identity card with a two-dimensional barcode that stores personal data and a picture would be sufficient to identify registered travelers. Such a card would be similar to those currently used as drivers’ licenses in many states. In addition to choosing specific technologies, stakeholders said that decisions would be needed regarding the storage and maintenance of data collected for the program. These include decisions regarding where a biometric or other unique identifier and personal background information should be stored. Such information could be stored either on a card embedded with a computer chip or in a central database, which would serve as a repository of information for all participants. Stakeholders thought the key things to consider in deciding how to store this information are speed of accessibility, levels of data protection, methods to update information, and protections against forgery and fraudulent use by others. 
One stakeholder who advocates storing passenger information directly on a “smart” card containing an encrypted computer chip said that this offers more privacy protections for enrollees and would permit travelers to be processed more quickly at checkpoints than would a database method. On the other hand, advocates for storing personal data in a central database said that it would facilitate the updating of participants’ information. Another potential advantage of storing information in a central database is that it could make it easier to detect individuals who try to enroll more than once, by checking an applicant’s information against information on all enrollees in a database. In theory, this process would prevent duplication of enrollees. Another issue related to storing participant information is how to ensure that the information is kept up-to-date. If participant information is stored in a database, then any change would have to be registered in a central database. If, however, information is stored on an identification card, then the card would have to feature an embedded computer chip to which changes could be made remotely. Keeping information current is necessary to ensure that the status of a registered traveler has not changed because of that person’s recent activities or world events. One stakeholder noted the possibility that a participant could do something that might cause his or her eligibility status to change. In response to that concern, he stressed that a registered traveler program should incorporate some sort of “quick revoke” system. When that traveler is no longer entitled to the benefits associated with the program, a notification would appear the next time the card is presented to a reader. Stakeholders differed in their opinions as to how many airports and how many passengers should participate in a registered traveler program. 
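The central-database advantages noted above (duplicate-enrollment detection, easy updates, and a “quick revoke” capability) can be illustrated with a minimal sketch. The class and method names below are assumptions made for illustration only; no specific system design had been proposed.

```python
# Illustrative sketch of two database-backed features discussed above:
# duplicate-enrollment detection and a "quick revoke" status check.
# The schema and all names are hypothetical.

class EnrollmentDatabase:
    def __init__(self):
        # Maps a biometric template to the enrollee's name.
        self._by_biometric = {}
        self._revoked = set()

    def enroll(self, name: str, biometric_template: str) -> bool:
        # Reject a second enrollment under the same biometric, preventing
        # one person from holding multiple credentials.
        if biometric_template in self._by_biometric:
            return False
        self._by_biometric[biometric_template] = name
        return True

    def revoke(self, biometric_template: str) -> None:
        # "Quick revoke": flag the record centrally; the next card read
        # that consults the database sees the change immediately.
        self._revoked.add(biometric_template)

    def is_active(self, biometric_template: str) -> bool:
        return (biometric_template in self._by_biometric
                and biometric_template not in self._revoked)
```

A card-only design would instead have to push the revocation out to the card’s embedded chip, which is the updating difficulty the database advocates pointed to.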
While some believe that the program should be as expansive as possible, others maintain that the program would function most efficiently and cost-effectively if it were limited to those airports with the most traffic and to those passengers who fly the most frequently. As for airports, some suggested that all 429 airports subject to security requirements in the United States should be equipped to support the program, to convince more passengers to enroll. Others contended that, because of equipment costs, the program should optimally include only the largest airports, such as the fewer than 100 airports that the FAA classifies as Category X and Category 1 airports, which the vast majority of the nation’s air travelers use. There were also different opinions as to whether the program should limit enrollment to frequent travelers or should strive for wider enrollment to maximize participation. Representatives of a passenger group asserted that the program should be limited to passengers who fly regularly because one of the goals of the program would be to process known passengers more quickly, and that having too many enrollees would limit the time saved. Others, however, maintained that the program should enroll as many passengers as possible. This case rests largely on security concerns—the more people who register, the more information is known about a flight’s passengers. It is unclear who would fund any registered traveler program, although a majority of the stakeholders we contacted who discussed the issue expect that participants would have to fund most of its costs. Representatives of aviation traveler groups said that participants would be willing to bear almost all of the costs. One airline representative estimated that frequent passengers would be willing to pay up to $100 for initial enrollment and an additional $25 to $50 annually for renewal. 
For similar reasons, some stakeholders have suggested that the airlines bear some of the costs of the program, probably by offering subsidies and incentives for their passengers to join, since the aviation industry would also benefit. For instance, one stakeholder said that airlines might be willing to partially subsidize the cost if the airlines could have access to some of the participant information. A few stakeholders also expect that the federal government would pay for some of the cost to develop a registered traveler program. One stakeholder who said the government should pay for a significant portion of the program based that view on the belief that national security benefits would accrue from the program, making its funding a federal responsibility. Others maintained that significant long-term federal funding for the program is unrealistic because of the voluntary aspect of the program, the possibility that it might be offered only to selected travelers, and TSA’s current funding constraints. In addition to the uncertainty about which entity would primarily fund a registered traveler program, there are also questions about how much the program would cost. None of the stakeholders who were asked was able to offer an estimate of the total cost of the program. A technology vendor who has studied this type of program extensively identified several primary areas of cost, which include but are not limited to background checks, computer-chip–enabled cards, card readers, biometric readers, staff training, database development, database operations, and enrollment center staffing. Because the costs of many of these components are uncertain, estimating the overall program costs is extremely difficult. For example, one stakeholder told us that extensive background checks for enrollees could cost as much as $150 each, while another stakeholder maintained that detailed, expensive background checks would be unnecessary.
Therefore, the choice of what type of background check to use if a program is implemented would likely significantly influence the program’s overall costs. Our research indicated that there are also significant price range differences in computer-chip–enabled cards and biometric readers, among other components. Regardless of the policy and program decisions made about a registered traveler program, we identified several basic principles TSA might consider if it implements such a program. We derived these principles from our discussions with stakeholders, from our review of pertinent literature, and from best practices for implementing new programs. Chief among these is the principle that vulnerabilities in the aviation system be assessed in a systematic way and addressed using a comprehensive risk management plan. Accordingly, the registered traveler program must be assessed and prioritized along with other programs designed to address security vulnerabilities, such as enhancing cockpit security, controlling access to secure areas of the airport, preventing unsafe items from being shipped in cargo or checked baggage, and ensuring the integrity of critical air traffic control computer systems. TSA officials also noted that the agency is responsible for the security of all modes of transportation, not just aviation. They added that a program such as registered traveler needs to be assessed in the broader context of border security, which can include the security of ports and surface border crossings overseen by a number of federal agencies, such as Customs, the Coast Guard, and INS. TSA might consider the following principles if, and when, a registered traveler program is implemented:

- Apply lessons learned from and experience with existing programs that share similarities with the registered traveler program, including lessons related to such issues as eligibility criteria, security procedures, technology choices, and funding costs.

- Test the program initially on a smaller scale to demonstrate its feasibility and effectiveness and to show that travelers will be willing to participate.

- Develop performance measures and a system for assessing whether the program meets its stated mission and goals.

- Use technologies that are interoperable across different enrollment sites and access-control points, and select technologies that can readily be updated to keep pace with new developments in security technology, biometrics, and data sharing. At a minimum, interoperability refers to using compatible technologies at different airport checkpoints across the country and, more broadly, could be seen as including other access-control points, such as border crossings and ports of entry.

Using lessons learned from existing programs offers TSA an opportunity to identify key policy and implementation issues as well as possible solutions to them. Although smaller in scope than a nationwide U.S. registered traveler program would likely be, several existing programs, both in the United States and abroad, address some of the same issues as the registered traveler concept and present excellent opportunities for policymakers to learn from real-life experiences. For example, in the United States, the INS already has border control programs both at airports and roadway checkpoints to expedite the entry of “known” border crossers. Internationally, similar programs exist at Ben Gurion Airport in Israel, Schiphol Airport in Amsterdam, and Dubai International Airport in the United Arab Emirates. In the past, similar pilot programs have also been run at London’s Gatwick and Heathrow airports. All of these programs rely on credentialing registered travelers to expedite their processing and are candidates for further study.
Finally, programs established by the Department of Defense and the General Services Administration that use cards and biometrics to control access to various parts of a building offer potential technology-related lessons that could help design a registered traveler program. (Appendix IV offers a brief description of some of the U.S. and foreign programs.) TSA’s program manager for the Registered Traveler Task Force stressed that his agency has no role in these other programs, which are different in purpose and scope from the registered traveler concept. He added that these programs focus on expediting crossing at international borders, while the registered traveler concept focuses on domestic security. In addition to these programs, information could also be gleaned from a registered traveler pilot program. For example, the Air Transport Association has proposed a passenger and employee pilot program. ATA’s proposed program would include over 6,000 participants, covering both travelers who passed a background check and airline employees. ATA’s proposal assumes that (1) the appropriate pool of registered traveler participants will be based on background checks against the FBI/TSA watch list, and (2) airlines would determine which employees could apply and would initiate background checks for them. ATA estimates that the pilot program would initially cost about $1.2 million to implement. To allow TSA and the airlines to evaluate the effectiveness of the program’s technologies and procedures and their overall impact on checkpoint efficiency, ATA plans to collect data on enrollment procedures, including the number of individuals who applied and were accepted, the reasons for rejection, and customer interest in the program; the reliability of the biometric cards and readers; and checkpoint operational issues.
In our discussions, the Associate Under Secretary for Security Regulation and Policy at TSA made it clear that he thought developing a registered traveler pilot program on a small scale would be a necessary step before deciding to implement a national program. TSA officials responsible for assessing a registered traveler program said that they hope to begin a pilot program by the end of the first quarter of 2003. They also noted that much of the available information about the registered traveler concept is qualitative, rather than quantitative. They added that, because the cost-effectiveness of a registered traveler program is not certain, a financial analysis is needed that considers the total cost of developing, implementing, and maintaining the technology and the program. Along these lines, they believe that a pilot program and rigorous, fact-based analysis of the costs and benefits of this program will be useful for determining (1) whether the hassle factor really exists, and if so to what extent, (2) whether a registered traveler program will effectively address the need to expedite passenger flow or to manage risk, and (3) whether such a program would be the risk-mitigation tool of choice, given the realities of limited resources. In addition to developing performance-based metrics to evaluate the effectiveness of a pilot program, TSA could consider developing similar metrics to measure the performance of a nationwide program if one is created. Our previous work on evaluating federal programs has stressed the importance of identifying goals, developing related performance measures, collecting data, analyzing data, and reporting results. Collecting such information is most useful if the data-gathering process is designed during the program’s development and initiated with its implementation. Periodic assessment of the data should include comparisons with previously collected baseline data.
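The kind of performance measurement described above, collecting data and comparing it with previously collected baseline data, might look like the following sketch. The function name and the sample numbers are hypothetical, used only to show the mechanics of a baseline comparison, not results from any actual program.

```python
def average(xs):
    """Simple arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

def screening_metrics(enrollee_times, nonparticipant_times, baseline_avg):
    """Compare average screening times (in seconds) for program enrollees
    and nonparticipants against a pre-program baseline average."""
    enrollee_avg = average(enrollee_times)
    return {
        "enrollee_avg_s": enrollee_avg,
        "nonparticipant_avg_s": average(nonparticipant_times),
        # Negative means enrollees clear screening faster than the baseline.
        "change_vs_baseline_s": enrollee_avg - baseline_avg,
    }

# Illustrative numbers only.
m = screening_metrics([45, 50, 40], [120, 150, 130], baseline_avg=135.0)
print(m)  # enrollee average 45.0 s, i.e. 90.0 s faster than baseline
```

Similar comparisons could be computed for detection-rate data, such as the success of screeners at finding prohibited items among enrollees versus nonparticipants.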
The implementation of a registered traveler program could be helped by following those principles. For example, determining whether, and how well, the program improves aviation security and alleviates passenger inconvenience requires that measurements be developed and data collected and analyzed to demonstrate how well these goals are being met. Such information could include the success of screeners at detecting devices not allowed on airplanes for both enrollees and nonparticipants, or the average amount of time it takes for enrollees to pass through security screening. An effective registered traveler program depends on using technologies that are interoperable across various sites and with other technologies, and can be readily updated to keep pace with new developments in security technology, biometrics, and data sharing. Such a program is unlikely to be airport- or airline-specific, which means that the various technologies will have to be sufficiently standardized for enrollees to use the same individual cards or biometrics at many airports and with many airlines. Consequently, the technologies supporting the nationwide system need to be interoperable so that they can communicate with one another. The FAA’s experience with employee access cards offers a good lesson on the dangers of not having standards to ensure that technologies are interoperable. As we reported in 1995, different airports have installed different types of equipment to secure doors and gates. While some airports have installed magnetic stripe card readers, others have installed proximity card readers, and still another has installed hand-scanning equipment to verify employee identity. As a result, an official from one airline stated that employees who travel to numerous airports have to carry several different identity cards to gain access to specific areas. 
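One way to avoid the standardization problem illustrated by the FAA’s experience with employee access cards is to write checkpoint software against a common interface rather than against a specific vendor technology. The sketch below is a hypothetical illustration of that design choice; the class and method names are assumptions, not part of any actual program.

```python
from abc import ABC, abstractmethod

class BiometricMatcher(ABC):
    """A standard interface that every vendor implementation must satisfy,
    so checkpoints interoperate regardless of the underlying biometric."""

    @abstractmethod
    def capture(self) -> bytes:
        """Capture a live biometric template from the traveler."""

    @abstractmethod
    def match(self, live: bytes, enrolled: bytes) -> bool:
        """Compare a live template with the enrolled template."""

class FingerprintMatcher(BiometricMatcher):
    def capture(self) -> bytes:
        return b"fingerprint-template"  # stand-in for a real sensor read
    def match(self, live, enrolled):
        return live == enrolled

class IrisMatcher(BiometricMatcher):
    def capture(self) -> bytes:
        return b"iris-template"  # stand-in for a real camera capture
    def match(self, live, enrolled):
        return live == enrolled

def verify_at_checkpoint(matcher: BiometricMatcher, enrolled: bytes) -> bool:
    # Checkpoint logic depends only on the interface, so swapping
    # fingerprints for iris scans requires no checkpoint rewrite.
    return matcher.match(matcher.capture(), enrolled)

print(verify_at_checkpoint(FingerprintMatcher(), b"fingerprint-template"))  # True
print(verify_at_checkpoint(IrisMatcher(), b"iris-template"))                # True
```

This mirrors the standards-over-solutions approach: the program mandates the interface and data formats, and vendors compete on implementations behind it.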
Another important interoperability issue is the way in which the personal data associated with a registered traveler program relates to other existing information on travelers, the most important of which is information in the automated passenger prescreening system. Some stakeholders believe it will be crucial that the registered traveler program is integrated into the automated system. Given TSA’s focus on developing and launching a revised automated passenger prescreening system, such integration will likely be essential. Integrating the data depends on finding a workable technology solution. Furthermore, TSA officials added that interoperability may extend beyond aviation to passengers who enter the United States at border crossings or seaports. They noted that ensuring the interoperability of systems across modes of transportation overseen by a variety of different federal agencies will be a complex and expensive undertaking. An equally important factor to consider is how easily a technology can be upgraded as related technologies evolve and improve. As stakeholders made clear to us, because technologies surrounding identification cards and biometrics are evolving rapidly, often in unpredictable ways, the technology of choice today may not be cost-effective tomorrow. To ensure that a registered traveler program will not be dependent on outdated technologies, it is essential to design a system flexible enough to adapt to new technological developments as they emerge. For example, if fingerprints were initially chosen as the biometric, the supporting technologies should be easily adaptable to other biometrics, such as iris scans. An effective way to achieve this flexibility is to use technology standards for biometrics, data storage, and operating systems, rather than to mandate specific technology solutions. A registered traveler program is one possible approach for managing some of the security vulnerabilities in our nation’s aviation and broader transportation systems.
However, numerous unresolved policy and programmatic issues would have to be addressed before developing and implementing such a program. These issues include, for example, the central question of whether such a program will effectively enhance security or will inadvertently provide a means to circumvent and compromise new security procedures. These issues also include programmatic and administrative questions, such as how much such a program would cost and what entities would provide its financing. Our analysis of existing literature and our interviews with stakeholders helped identify some of these key issues but provided no easy answers. The information we developed should help to focus and shape the debate and to identify key issues to be addressed when TSA considers whether to implement a registered traveler program. We provided the Department of Transportation (DOT) with a draft of this report for review and comment. DOT provided both oral and written comments. TSA’s program manager for the Registered Traveler Task Force and other agency officials with legal and related responsibilities for this program said that the report does an excellent job of raising a number of good issues that TSA should consider as it evaluates the registered traveler concept. These officials provided a number of clarifying comments, which we have incorporated where appropriate. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. At that time, we will send copies of this report to interested Members of Congress, the Secretary of Transportation, and the Under Secretary of Transportation for Security. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3650.
I can also be reached by E-mail at [email protected]. Key contributors are listed in appendix V. To obtain and develop information on the purpose of a registered traveler program and the key policy and implementation issues in designing and implementing it, we conducted an extensive search of existing information and carried out interviews with key stakeholders. We interviewed officials from the federal government and the aviation industry, as well as aviation security consultants, vendors developing and testing registered traveler applications, and organizations concerned with data privacy and civil liberties. We conducted a literature search that identified existing studies, policy papers, and articles from the federal government, the aviation industry, and other organizations on numerous issues associated with designing and implementing a registered traveler program. These issues included the goals or purposes of a registered traveler program and policy and programmatic issues such as the potential costs, security procedures, and technology choices for such a program. We also identified existing studies and papers on specific items, such as the applicability of biometric technologies for use in a registered traveler program and the extent to which such programs already exist in the United States and abroad (this detailed information is presented in appendix IV). The literature search also helped us identify key stakeholders in the design and implementation of a registered traveler program. Based on our literature search, we identified a list of 25 key stakeholders who could provide professional opinions on a wide range of issues involved in a registered traveler program. We chose these stakeholders based on their influence in the aviation industry as well as their expertise in such issues as aviation security, identification technologies, civil liberties, and the air-travel experience. In total, we conducted 22 interviews.
We also visited and interviewed officials associated with registered traveler–type programs in two European countries. The intent of our interviews was to gain a further understanding of the issues surrounding a registered traveler program and specific information on such items as the potential costs for implementing a registered traveler program and the technology needs of such a program. In conducting our interview process, we developed a standard series of questions on key policy and implementation issues, sent the questions to the stakeholders in advance, and then conducted the interviews. We then summarized the interviews to identify any key themes and areas of consensus or difference on major issues. We did not, however, attempt to empirically validate the information provided to us by stakeholders through these interviews. To identify basic principles that TSA should consider if it decides to implement a registered traveler program, we analyzed existing studies to identify overriding themes that could affect the policy or implementation of such a program. We also analyzed the results of our interviews to generate a list of key principles. We performed our work from July 2002 through October 2002 in accordance with generally accepted government auditing standards. The International Biometrics Group considers four types of biometric identifiers the most suitable for air-travel applications: fingerprint recognition, iris recognition, hand geometry, and facial recognition. Each of these biometrics has been employed, at least on a small scale, in airports worldwide. The following information describes how each biometric works and compares their functionality. Fingerprint recognition: This technology extracts features from impressions made by the distinct ridges on the fingertips. The fingerprints can be either flat or rolled.
A flat print captures only an impression of the central area between the fingertip and the first knuckle; a rolled print captures ridges on both sides of the finger. The technology is one of the best known and most widely used biometric technologies. Iris recognition: This technology is based on the distinctly colored ring surrounding the pupil of the eye. The technology uses a small, high-quality camera to capture a black-and-white, high-resolution image of the iris. It then defines the boundaries of the iris, establishes a coordinate system over the iris, and defines the zones for analysis within that coordinate system. Made from elastic connective tissue, the iris is a very plentiful source of biometric data, having approximately 450 distinctive characteristics. Hand geometry: This technology measures the width, height, and length of the fingers, distances between joints, and shapes of the knuckles. The technology uses an optical camera and light-emitting diodes with mirrors and reflectors to capture three-dimensional images of the back and sides of the hand. From these images, 96 measurements are extracted from the hand. Hand geometry systems have been in use for more than 10 years for access control at facilities ranging from nuclear power plants to day care centers. Facial recognition: This technology identifies people by areas of the face not easily altered—the upper outlines of the eye sockets, the areas around the cheekbones, and the sides of the mouth. The technology is typically used to compare a live facial scan with a stored template, but it can also be used to compare static images, such as digitized passport photographs. Facial recognition can be used in both verification and identification systems. In addition, because facial images can be captured from video cameras, facial recognition is the only biometric that can also be used for surveillance purposes. Purpose: To improve border security and passenger convenience. Eligibility: Passengers from the European Union, Norway, Iceland, and Liechtenstein.
In the enrollment phase, the traveler is qualified and registered. This process includes a passport review, background check, and iris scan. All collected information is encrypted and embedded on a smart card. 2,500 passengers have enrolled in the program. In the traveling phase, the passenger approaches a gated kiosk and inserts the smart card in a card reader. The system reads the card and allows valid registered travelers to enter an isolated area. The passenger then looks into an iris scan camera. If the iris scan matches the data stored on the card, the passenger is allowed to continue through the gate. If the system cannot match the iris scan to the information on the card, the passenger is directed to the regular passport check lane. Fees: As of October 1, 2002, there is a 99–119 Euro ($97–$118) annual fee for participating passengers. According to program officials, the entire automatic border passage procedure is typically completed in about 10–15 seconds. The system can process four to five people per minute. There are plans to expand the program so that airlines and airports can use it for passenger identification and for tracking such functions as ticketing, check-in, screening, and boarding. There are also plans to develop components of the technology to provide secure employee and staff access to restricted areas of travel and transportation facilities. Purpose: To expedite passenger processing at passport control areas. Eligibility: Israeli citizens and frequent international travelers. Travelers who have dual U.S./Israel citizenship can take advantage of the Ben Gurion program, as well as the INS’s INSPASS program. During enrollment, applicants submit biographic information and biometric hand geometry. Applicants also receive an in-depth interview. Approximately 80,000 Israeli citizens have enrolled in the program. During arrival and departure, participants use a credit card for initial identification in one of 21 automated inspection kiosks at the airport.
The participant then places his or her hand in the hand reader for identity verification. If verified, the system prints a receipt, which allows the traveler to proceed through a system-controlled gate. If the person’s identity cannot be verified, the individual is referred to an inspector. Fees: $20–$25 annual membership fee for participants. According to program officials, the entire automated verification process takes 20 seconds. Passport control lines at Ben Gurion airport can take up to 1 hour. The program allows airport personnel to concentrate on high-risk travelers, reduces bottlenecks with automated kiosks, improves airport cost-effectiveness, generates new revenue for the airport authority, and expands security capabilities at other Israeli borders. Purpose: To expedite passenger processing at passport control. Eligibility: Non-United Kingdom, non-European Union, non-visa frequent travelers (mostly American and Canadian business travelers) originating from John F. Kennedy International Airport or Dulles International Airport on Virgin Atlantic or British Airways. To enroll, participants record their iris images with EyeTicket, have their passports scanned, and submit to a background check with U.K. immigration. 900 of 1,000 applicants were approved for participation; 300 enrolled. Upon arrival in London, participants are able to bypass the regular immigration line and proceed through a designated border entry lane. Participants look into an iris scan camera, and the image is compared against the scan taken at enrollment. If the two iris images match, participants are able to proceed through immigration. Fees: There were no user fees associated with the pilot program. According to EyeTicket, the average processing time per passenger is 12 seconds. Status: Completed. A six-month trial ran from January 31, 2002, to July 31, 2002.
IP@SS (Integrated Passenger Security System). Locations: Newark International Airport, Newark, New Jersey (Continental Airlines); Gatwick Airport, London, England (Delta Airlines). Purpose: To expedite and simplify the processes of passenger identification and security screening. In June 2002, 6,909 passengers were processed through IP@SS. Officials report that about 99 percent of passengers volunteered for the program. Continental Airlines has two kiosks for tourist class, one for business and first classes, and one at the Continental gate for flights between Newark and Tel Aviv. Each station is staffed with a trained security agent who asks passengers for travel documents, including the individual’s passport, which is scanned by an automated reader. After being cleared, the passenger can enroll in a biometric program in which biometric information is transferred to a smart card. The passenger then takes the card to the boarding gate, inserts it into the card reader, and inserts his or her fingers into the fingerprint reader. If the information corresponds with the information contained on the smart card, the passenger is cleared to board the plane. Cards are surrendered to program officials after each use, and the information is scrambled to prevent misuse. Fees: There were no user fees associated with the pilot programs. Status: Ongoing. ICTS International plans to launch pilot programs at other U.S. and European airports. The pilot programs at Newark and Gatwick are technology demonstrations and are used only to aid in the departure process. ICTS may test a “sister city” concept, in which the participant can take the card to his or her destination to aid in the deplaning/arrival process there. CANPASS. Purpose: To expedite border crossings for low-risk frequent commuters. CANPASS is a project of the Canada-U.S. Shared Border Accord. Eligibility: Citizens and permanent residents of the United States and Canada are eligible to participate in the CANPASS program.
As part of the application process, an applicant provides personal identification, vehicle identification, and driver’s license information. Background checks are performed on all applicants. As of October 1, 2001, there were approximately 119,743 participants in the CANPASS program. Technology varies from site to site. At Douglas, the participant receives only a letter of authorization and a windshield decal; at Windsor, a participant receives a photo ID card. A participant receives a letter of authorization and a windshield decal, which can be used only on a vehicle registered in the CANPASS system. When a vehicle enters the lane, a license plate reader reads the plate on the car. Membership in the CANPASS program is validated with data available through the license plate reader and other sources. At the applicable crossings, a participant must show the CANPASS identification card to the border inspector. Fees: There are no fees associated with the CANPASS system. Status: The CANPASS Highway program was closed as a result of the events of September 11, 2001; however, the program is still available at the Whirlpool Bridge in Niagara Falls, Ontario. The CANPASS program operates in conjunction with the SENTRI/PORTPASS program. SENTRI/PORTPASS (Secure Electronic Network for Travelers’ Rapid Inspection/Port Passenger Accelerated Service System). Locations: Detroit, Michigan; Buffalo, New York; El Paso and Hidalgo, Texas; Otay Mesa and San Ysidro, California. Eligibility: Citizens and permanent residents of the United States and Canada and certain citizens and non-immigrants of Mexico are eligible to apply for program participation. Applicants must undergo an FBI background check, an Interagency Border Inspection System (IBIS) check, a vehicle search, and a personal interview prior to participation. Applicants must provide evidence of citizenship, residence, and employment or financial support. Fingerprints and a digital photograph are taken at the time of application.
If cleared for enrollment, the passenger receives an identification card and a transponder, which must be installed in the registered vehicle. During 2000, approximately 792 participants were registered for the Detroit program, and 11,700 were registered for the Otay Mesa program. Transponders and magnetic card readers recall electronic photographs of registered drivers and their passengers. Images are presented on a monitor for border inspectors to visually confirm participants. Participants use designated SENTRI lanes to cross the border. The system automatically identifies the vehicles and the participants authorized to use the program. Border inspectors compare digitized photographs that appear on computer screens in the inspectors’ booths with the vehicles’ passengers. Fees: There is no charge for the U.S./Canada program. The SENTRI program for the United States and Mexico costs $129 ($25 enrollment fee per person, $24 fingerprinting fee, and $80 systems fee). According to an El Paso INS official, delays in border crossing are typically around 60–90 minutes but can be more than 2 hours. The SENTRI lane at a bridge border crossing has wait times of no more than 30 minutes. According to program officials, in Otay Mesa, California, SENTRI participants wait approximately 4–5 minutes in the inspection lane, while nonparticipants can wait up to 3 hours in a primary inspection lane. NEXUS. Purpose: To expedite border crossings for low-risk frequent commuters. NEXUS is a pilot project of the Canada-U.S. Shared Border Accord. Eligibility: Canadian and U.S. lawful, national, and permanent residents are eligible to apply for program participation. Applicants complete an application that is reviewed by the U.S. Customs Service, INS, Canada Customs and Revenue Service, and Citizenship and Immigration, Canada. Applicants are required to provide proof of citizenship and residency, employment authorizations, and visas. Background checks are performed by officials of both countries.
Participants must also provide a fingerprint biometric of two index fingers, which is verified against an INS database for any American immigration violations. (Unlike the CANPASS/PORTPASS programs, NEXUS is a harmonized border-crossing program with common eligibility requirements, a joint enrollment process, and a common application and identity card.) Since 2000, program administrators have issued 4,415 identification cards to participants. Enrollees must provide a two-finger print biometric. Photo identification cards are given to all participants. The NEXUS identification card allows participants to use NEXUS-designated lanes in the United States and Canada and to cross the border without routine customs and immigration questioning. Fees: A nonrefundable processing fee of $80 Canadian or $50 U.S. must be paid every 5 years. According to a study on the NEXUS program, participants can save 20 minutes, compared with using the regular primary inspection lanes. Officials may request full fingerprints to verify identity. The two-finger print biometric or full prints may be shared with other government and law enforcement agencies. In addition, any personal information provided will also be shared with other government and law enforcement agencies. Additional crossing points are scheduled to open in 2003. INSPASS (INS Passenger Accelerated Service System)/CANPASS Airport. Locations: Detroit, Michigan; Los Angeles, California; Miami, Florida; Newark, New Jersey; New York, New York; San Francisco, California; Washington, D.C.; Vancouver and Toronto, Canada. Purpose: To decrease immigration inspection time for low-risk travelers entering the United States via international flights. The program is employed at seven airports in the United States (Detroit, Los Angeles, Miami, Newark, New York (JFK), San Francisco, and Washington-Dulles) and at U.S. pre-clearance sites in Vancouver and Toronto, Canada.
INSPASS enrollment is open to all citizens of the United States, Canada, Bermuda, and visa-waiver countries who travel to the United States on business three or more times a year for short visits (90 days or less). INSPASS is not available to anyone with a criminal record or to aliens who are not otherwise eligible to enter the United States. The enrollment process involves capturing biographical information, hand geometry biometric data, a facial photograph, and digital fingerprint information. A background check is run automatically for the inspector and, if the applicant is approved, a machine-readable card is created for the traveler. The entire enrollment process typically takes 30–40 minutes. Over 98,000 enrollments have been performed in INSPASS, of which 37,000 were active as of September 2001. Once enrolled, the traveler is able to use an automated kiosk at passport control. A traveler is required to swipe the INSPASS card, enter flight information on a touchscreen, verify hand geometry, and complete a security check. Upon successful inspection, a receipt is printed that allows the traveler to proceed to U.S. Customs. Presently, there are no system or filing fees associated with INSPASS. The CANPASS Airport program has been suspended since September 11, 2001, and will be replaced by the Expedited Passenger Processing System in 2003. INSPASS is being reworked, and plans for a new version are under way. Key contributors to this assignment were Jean Brady, David Dornisch, David Goldstein, David Hooper, Bob Kolasky, Heather Krause, David Lichtenfeld, and Cory Roman.
The aviation industry and business traveler groups have proposed the registered traveler concept as a way to reduce long waits in airport security lines caused by heightened security screening measures implemented after the September 11 terrorist attacks. In addition, aviation security experts have advocated this concept as a way to better target security resources to those travelers who might pose greater security risks. The Aviation and Transportation Security Act of November 2001 allows the Transportation Security Administration (TSA) to consider developing a registered traveler program as a way to address these two issues. GAO completed this review to inform Congress and TSA of policy and implementation issues related to the concept of a registered traveler program. Under a variety of approaches related to the concept of a registered traveler program proposed by industry stakeholders, individuals who voluntarily provide personal background information and who clear background checks would be enrolled as registered travelers. Because these individuals would have been pre-screened through the program enrollment process, they would be entitled to expedited security screening procedures at the airport. Through a detailed literature review and interviews with stakeholders, GAO found that a registered traveler program is intended to reduce the inconvenience many travelers have experienced since September 11 and improve the quality and efficiency of airport security screening. Although GAO found support for this program among many stakeholders, GAO also found concerns that such a program could create new aviation security vulnerabilities. 
GAO also identified a series of key policy and program implementation issues that affect the program, including (1) criteria for program eligibility; (2) the level of background check required for participation; (3) security-screening procedures for registered travelers; (4) technology options, including the use of biometrics to verify participants; (5) program scope, including the numbers of participants and airports; and (6) program cost and financing options. Stakeholders offered many different options on how best to resolve these issues. Finally, GAO identified several best practices that Congress and TSA may wish to consider in designing and implementing a registered traveler program. GAO concluded that a registered traveler program is one possible approach for managing some of the security vulnerabilities in our nation's aviation systems. However, decisions concerning key issues are needed before developing and implementing such a program. TSA felt that GAO's report offered a good overview of the potential and the challenges of a registered traveler program. The agency affirmed that there are no easy answers to some of the issues that GAO raised and that these issues need more study.
The Federal Aviation Act of 1958, as amended, gives DOT responsibility for promoting new airlines’ operations, while at the same time determining whether applicants proposing to provide air transportation services for compensation or hire meet federal economic and safety standards before commencing operations. Within DOT, this responsibility is shared by OST and FAA. All applicants must obtain separate authorization from both offices before starting their operations.

OST’s Certification Process

When OST receives an application, it administers a three-part test to determine whether the applicant is “fit, willing, and able” to properly perform the proposed services. First, OST assesses whether the applicant’s key personnel and management team as a whole possess the background and experience necessary to perform the proposed operations. Second, it reviews the applicant’s operating and financial plans to determine whether the applicant has access to or a plausible plan for raising sufficient funds to pay all of its start-up expenses and maintain a working capital reserve equal to 3 months’ normal operating costs. Finally, it reviews the applicant’s compliance record to determine whether the applicant or its key personnel have a history of safety violations or consumer fraud and may thus pose a risk to the traveling public, or whether other factors indicate that the applicant would not be likely to comply with federal rules, laws, and directives. If OST finds that the applicant meets these criteria, it issues a “show cause” order tentatively finding the applicant fit to operate. Interested parties, including competitor airlines and members of the public, are given an opportunity to raise concerns or objections about the applicant’s fitness to conduct the proposed operation. If no objections are filed that convince OST that its tentative findings were incorrect, it will issue a “final” order finding the applicant fit.
Even so, the authority to begin the proposed operation will not be granted until the applicant submits the required (1) Air Carrier Certificate and Operations Specifications from FAA; (2) evidence that it has liability insurance coverage for each of its aircraft; (3) information on any changes in financing, ownership, key personnel, or management since the initial determination of fitness; and (4) verification that it has sufficient funds to meet OST’s financial criteria. FAA uses a five-phase process to determine whether an applicant’s manuals, aircraft, facilities, and personnel meet federal safety standards. First, in the preapplication phase, FAA gives the applicant basic information about the agency’s certification process and assigns a team of inspectors to meet with the applicant to discuss the proposed operation. Second, in the formal application phase, the applicant must submit all required documents, including a letter of application, operations and maintenance manuals, training curriculums, and personnel résumés documenting key personnel’s managerial and technical skills. Third, in the document compliance phase, FAA inspectors review the documents to determine whether they comply with applicable safety regulations and operating practices. Fourth, in the demonstration and inspection phase, the inspectors conduct on-site inspections of the applicant’s aircraft and maintenance facilities; observe proposed training programs; review maintenance, operations, and record-keeping procedures; and review actual in-flight operations. Finally, in the certification phase, FAA issues an Air Carrier Certificate and approves the applicant’s operations specifications. We found that many applicants do not successfully complete OST’s and FAA’s certification processes and, therefore, cannot begin flight operations. From January 1990 through July 1995, 180 applicants filed with OST to begin new airline operations. 
Ninety of the 180 applicants successfully completed OST’s and FAA’s processes and began operations. Of these 90, 57 were operating as of July 1995, while 33 began flying but ultimately ceased operations for a variety of reasons, such as insufficient revenues and competition from other airlines. As shown in figure 1, 33 of the remaining 90 applicants were tentatively found fit by OST but either never began operations, primarily because they lacked the financial resources necessary to carry out the proposed operations, or are still attempting to complete their financing or finish FAA’s certification process before they can begin operations. Another 47 applicants had withdrawn their applications or had them dismissed or denied by OST because the applicants were unable to meet its fitness standards. Ten applications were pending OST’s approval. (Figure 1 depicts these dispositions: Authorized and Operating (57), Authorized but Ceased Operations (33), and Tentatively Found Fit but Never Began Operations (33).) OST analysts and FAA headquarters officials told us that several factors determine whether an applicant successfully completes both offices’ processes. These factors include the completeness of the initial application, the applicant’s managerial skills and technical knowledge about operating an airline, and the applicant’s ability to obtain sufficient funds to meet OST’s financial criteria. Furthermore, the analysts told us that the majority of the applicants that do not complete the processes or never begin operations do not acquire the financial resources necessary to cover the start-up costs for their proposed operations. While OST’s and FAA’s certification processes are designed to ensure that new airlines meet federal economic and safety requirements, we found that the processes contained some inefficiencies that resulted in spending federal resources on applicants that had little probability of successfully completing the processes and beginning operations.
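The disposition counts above are internally consistent; a short Python tally (illustrative only, not part of the report) confirms the figures using the report's own numbers:

```python
# Disposition of the 180 new-airline applications filed with OST,
# January 1990 through July 1995, as reported in this review.
# The category names below are paraphrased from the report; this is
# simple bookkeeping, not an official data structure.

outcomes = {
    "authorized_and_operating": 57,
    "authorized_but_ceased_operations": 33,
    "tentatively_fit_never_began": 33,
    "withdrawn_dismissed_or_denied": 47,
    "pending": 10,
}

# Applicants that completed both OST's and FAA's processes and began
# operations: those still flying plus those that later ceased.
completed_both_processes = (
    outcomes["authorized_and_operating"]
    + outcomes["authorized_but_ceased_operations"]
)

total_applications = sum(outcomes.values())

print(completed_both_processes)  # prints: 90
print(total_applications)        # prints: 180
```

The same dictionary also reproduces the later statement that 80 applicants (33 tentatively fit plus 47 withdrawn, dismissed, or denied) never contributed to the trust fund.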
Specifically, OST determined some applicants to be financially fit before they had sufficient funds to complete both certification processes. Because a significant amount of resources is spent on applicants that never complete the certification processes, FAA recently revised its process to require applicants to complete certain tasks before it will expend resources on other certification activities. Additionally, OST tightened its financial standards by requiring applicants to submit third-party verification of their financial plans with their applications. And together, OST and FAA have established an electronic communications link to better share information about applicants. To determine financial fitness, OST requires applicants to submit financial plans that show they have a plausible plan for raising the capital needed to conduct the proposed services. Only after the applicants receive FAA’s certification—but before OST gives them the authority to operate—are they required to verify that they actually have sufficient funds to meet OST’s financial criteria for beginning and sustaining their proposed operations. OST officials indicated that they require only a financial plan and not actual funds on hand because some applicants are unable to obtain funds from financial institutions or other investors unless they can show that OST has found them fit. As a result, applications can proceed far into FAA’s certification process before they are terminated or suspended because of the applicants’ inability to raise the needed capital. Consequently, hundreds of hours of FAA inspectors’ time can be expended on certification efforts before it is known that the applicants are unable to obtain the needed funds. 
According to OST analysts, the primary reason that 33 applicants tentatively found fit had never begun or had not yet begun operations was that they were unable or are still trying to obtain the funds necessary to meet OST’s financial criteria and complete FAA’s process. Although the analysts routinely give applicants additional time to raise money, many still do not acquire the needed funds because their funding plans fall through or the market conditions change. For example, OST found an applicant fit on the basis of its proposal to raise about 98 percent of its capital through state economic development funds. However, the funds from that prospective source never became available, and the applicant had to seek alternative financing. OST granted the applicant four extensions to allow time to raise the needed capital, but the applicant never obtained the funds necessary to commence operations. FAA expended about 650 staff hours, or about $52,000, on certification activities for this applicant. We could not determine the staff hours, or dollars, that OST analysts spent on certification activities for this applicant because, according to the analysts, they do not maintain records of the staff time spent on individual applicants. In another case, we found that FAA had to suspend its certification efforts during the demonstration phase (phase four)—in which FAA reviews each applicant’s aircraft operations—because an applicant had not acquired its aircraft. Four months after these efforts were suspended, the applicant withdrew from the process because it was unable to obtain the funds to purchase or lease any aircraft. In this case, FAA had spent about 800 staff hours, or about $64,000, on certification activities. Even though OST still requires applicants to present only a plan for raising the necessary capital, OST recently tightened its standards on what is acceptable as evidence of a funding plan and when such evidence must be submitted. 
According to the Chief of the Air Carrier Fitness Division, all applicants are now required to submit, with their applications, third-party verification that they are working with an established brokerage firm, financial institution, or qualified individuals to raise the necessary capital. Copies of private placement agreements, debt instruments, or other stock offerings must be submitted as part of the application before OST will process it further and issue a show cause order finding the applicant fit. OST officials said that these changes are an attempt to reduce the amount of OST’s and FAA’s resources expended on applicants that do not have their basic financing plans in place when they seek OST’s authority to begin operations. Recognizing that a significant amount of resources is expended on applicants that do not complete the certification process, FAA revised its process in October 1995 to make the process more efficient. FAA officials stated that this action was necessary given the amount of time and resources devoted to applicants that never successfully complete the process and given the need to find a way to reduce the staff resources expended on these applicants. Under FAA’s new process, which incorporates a “gate” system, applicants are required to complete certain steps—at key points in the process—before FAA inspectors will expend additional resources on certification activities. To illustrate, FAA now requires applicants to have applied for OST’s authority during the preapplication phase (phase one) before FAA assigns a certification team to the applicant. During our review, we found that one applicant had proceeded to phase three—the document compliance phase—of FAA’s five-phase process before it submitted an application to OST. Upon reviewing the application, OST analysts questioned the reasonableness of the applicant’s estimated start-up expenses and operating costs for 3 months. 
As a result of the analysts’ inquiry, the applicant subsequently withdrew its application. However, by this time FAA had expended 1,300 hours of inspectors’ time, incurring about $104,000 in certification costs. FAA’s new process, if properly implemented, should preclude the recurrence of this type of problem. FAA officials told us that in the past, some applicants would wait until the last moment to purchase or lease the aircraft, facilities, and services necessary to conduct the proposed operations. Because some applicants could not raise the needed capital, they delayed completing or never completed the process, resulting in FAA’s expending significant resources on unsuccessful applications. Under FAA’s revised process, when submitting their formal applications in phase two, the applicants must provide proof, such as signed contracts or letters of agreement, that they have purchased or leased the aircraft, facilities, and services needed for the proposed operations before FAA will begin reviewing their operating, maintenance, or training manuals. In addition, by the time the applicants reach the formal application phase, they must have been tentatively found fit by OST and a show cause order must have been issued. Furthermore, FAA now requires applicants to submit completed general operating, maintenance, and training manuals at the time of the formal application. Applicants are encouraged to seek outside assistance in preparing these documents. FAA inspectors told us that in the past it was not uncommon for them to spend a significant amount of time assisting applicants in developing these documents. For example, although OST had determined that one applicant’s key personnel possessed the technical knowledge and skills necessary to provide the proposed services, during a subsequent certification review, FAA inspectors found that the applicant’s personnel did not have the necessary knowledge and skills to develop the required manuals for the proposed operations. 
Even after obtaining extensive assistance from FAA, the applicant submitted maintenance manuals that included procedures for replacing an aircraft’s propellers, whereas the proposed operations would use only DC-9 jet aircraft. When the applicant did not obtain certification within 1 year of the date of the initial determination of fitness, OST granted the applicant an extension without fully coordinating with FAA. Even with the extension, the applicant could not produce acceptable manuals, and FAA eventually terminated its certification efforts. By this time, however, FAA had expended about 1,800 staff hours, or about $144,000, processing the application. According to DOT officials, in October 1995 OST and FAA established an electronic communications link to better share information about applicants, and OST now routinely contacts FAA before granting any extensions of the 1-year period. Applicants currently pay nominal fees to OST but nothing to FAA to certify their proposed new operations. The fees that applicants currently pay represent less than 1 percent of what it costs the government to conduct certification activities. For example, the 90 applicants that completed OST’s and FAA’s certification processes paid an average fee of only $760 for certification, or less than 1 percent of the government’s average estimated cost of over $150,000 to certify each applicant. OST officials recognize that the existing fees do not cover a substantial portion of the costs of certifying new airlines. The Chief of the Air Carrier Fitness Division estimated that it typically takes an OST analyst about 80 to 100 staff hours, costing about $4,000, to certify a new carrier. We could not determine the actual number of staff hours or dollars OST spent on certification efforts for applicants from January 1990 through July 1995 because, according to OST analysts, they did not maintain such data. 
Nevertheless, based on the Chief’s estimate of $4,000 per applicant, we calculated that OST spent about $360,000 in certification costs for the 90 airlines that actually began operations, or about $720,000 for the 180 applicants that filed applications during the 5-1/2 years covered by our review. In comparison, OST officials estimated that the 180 applicants paid a total of only $160,000 in fees. The Chief of the Air Carrier Fitness Division recognized that OST may be recouping only a portion of the government’s costs for processing applications through the fees. Nevertheless, the Chief commented that the regulation setting the application fees paid to OST—which includes fees for 50 types of applications, including applications to operate new airlines—has not been reviewed in over 10 years because of the scope of the undertaking and the limited availability of staff. Like OST, FAA could not readily determine the total number of staff hours spent on the applications received since January 1990 because, according to both FAA headquarters officials and field inspectors, they did not have a centralized system for recording this information for the 5-1/2 years covered by our review. Nevertheless, in May 1995 FAA told us that recent certification efforts have required between 1,200 and 2,700 hours of inspectors’ time, for an average of 1,835 hours, to certify a new airline. At the $80 hourly rate for inspectors, the average cost is about $150,000 per certification. We estimate that it cost FAA more than $13.5 million to certify the 90 airlines that actually began operations. In October 1995, FAA estimated the staff time and costs for the applicants that did not complete its process to be about 800 hours, or $64,000 per applicant. Nevertheless, FAA does not charge fees for its certification efforts. 
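The per-certification and aggregate cost figures in the preceding paragraphs follow from a few multiplications of the report's own estimates; a brief Python sketch (illustrative only) makes the arithmetic explicit:

```python
# Back-of-the-envelope check of the certification-cost figures cited in
# the report. All inputs are the report's own estimates; only the
# arithmetic is made explicit here.

INSPECTOR_HOURLY_RATE = 80       # dollars per FAA inspector hour
AVG_FAA_HOURS = 1835             # average inspector hours per certification
OST_COST_PER_APPLICANT = 4000    # OST's estimated cost per applicant
AVG_FEE_PAID = 760               # average fee paid by a successful applicant

# 1,835 hours at $80/hour, which the report rounds to "about $150,000."
faa_cost_per_certification = AVG_FAA_HOURS * INSPECTOR_HOURLY_RATE

# Aggregate costs for the 90 airlines that began operations and for all
# 180 applicants, using the report's rounded per-unit figures.
faa_cost_90_airlines = 90 * 150_000
ost_cost_90_airlines = 90 * OST_COST_PER_APPLICANT
ost_cost_180_applicants = 180 * OST_COST_PER_APPLICANT

# The average fee recoups well under 1 percent of the government's cost.
fee_share = AVG_FEE_PAID / 150_000

print(faa_cost_per_certification)  # prints: 146800
print(fee_share < 0.01)            # prints: True
```

These products match the report's rounded totals of about $150,000 per FAA certification, more than $13.5 million for the 90 operating airlines, and $360,000/$720,000 for OST's 90 and 180 applicants, respectively.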
We found that, in addition to paying nominal fees for certification, applicants also can make substantial modifications to their proposed operations during the certification process without paying additional fees, even though such actions can significantly increase the government’s costs. For example, during the certification process one applicant changed the type of aircraft it planned to use. This action caused FAA inspectors to essentially restart their efforts, resulting in additional reviews and increased costs. Title 31, section 9701, of the U.S. Code gives federal agencies the authority to charge fees for services or benefits provided to specific beneficiaries. The Office of Management and Budget’s Circular A-25 implements this authority by prescribing guidelines for imposing charges on users of the government’s services. The general policy is that a reasonable charge should be made to each identifiable recipient of a government service, privilege, authority, or certificate from which a special benefit is derived. Section 9701 states that such charges are to be based on the (1) cost of the service to the government, (2) value of the service to the recipient, and (3) public policy or interest served. In addition, the statute establishes a policy that such services should be as self-sustaining as possible. Although FAA does not currently charge a fee for its certification efforts, DOT officials commented that a portion of the certification costs is recouped from ticket and fuel taxes paid by the operating airlines and deposited into the Airport and Airway Trust Fund. Even so, applicants do not pay into the fund until they begin operations; therefore, applicants that never begin operations never contribute to the fund. 
As mentioned earlier, 80 of the 180 applicants that filed applications with OST between January 1990 and July 1995 (1) were tentatively found fit but had yet to begin or had never begun operations or (2) withdrew their applications or had them dismissed or denied and thus had never contributed to the fund. OST and FAA officials recognized that the existing fees were insufficient to cover certification costs but have not reviewed the appropriateness of the current fee structures. Under legislation introduced in the Congress in September 1995, FAA would be allowed to charge fees to support various aviation services. According to the Deputy Director of Flight Standards Service, FAA plans to examine all services requiring certificates and the existing fee structures to determine the extent to which the government’s costs have been or should be recouped. A date for completing this action has yet to be determined. DOT’s certification processes have resulted in 90 new carriers’ entering the airline industry over the past 5-1/2 years. These new carriers have benefited the traveling public by increasing competition among airlines and, in turn, reducing airfares. However, about half of the applicants that applied to operate new airlines did not complete the processes, primarily because they could not obtain sufficient financial resources. In some instances, FAA expended a significant amount of resources on costly certification activities. Although OST and FAA recently revised their certification processes to reduce the amount of resources spent on unsuccessful applications, it is too early to determine how the revisions will work in practice and to what extent they will reduce unnecessary expenditures. The fees that applicants pay for certification allow the government to recoup only a small portion—less than 1 percent—of its costs for those applicants that complete DOT’s processes. 
Although the government recoups some of its certification costs through ticket and fuel taxes, these funds are collected only from applicants that successfully begin and sustain their operations. Applicants that never begin operations do not pay such taxes. Requiring applicants to pay a greater share of the certification costs could generate revenue that could help defray these costs—a particularly important outcome during this period of declining federal budgets. We recognize that the Congress will ultimately be involved in any decision to establish fees for various aviation support services. Given the current reduction in federal resources, we recommend that the Secretary of Transportation reevaluate the appropriateness of the Office of the Secretary’s increasing its fees and FAA’s establishing fees for services to certify new airlines, taking into consideration the government’s costs, the value of the services to the applicant, and the public policy or interest served. We provided a draft of this report to DOT officials for their review and comment. We met with Department officials, including OST’s Chief of the Air Carrier Fitness Division and FAA’s Deputy Director of Flight Standards Service, to discuss their comments. The draft report contained proposed recommendations to DOT to improve OST’s and FAA’s certification processes and to reevaluate the existing fees for certification services. These officials generally agreed with the findings and conclusions in the draft report. In commenting, the officials provided a number of clarifications and updates that have been incorporated into the report as appropriate. Most significantly, the report has been updated to recognize a number of actions that OST and FAA have taken during the course of our review to improve their processes for certifying new airlines. 
Specifically, (1) FAA has revised its certification process to require that applicants complete certain steps before it will expend additional resources, (2) OST now requires all applicants to submit, with their applications, third-party verification that they are working with an established brokerage firm, financial institution, or qualified individuals to raise the necessary capital, and (3) OST and FAA have established an electronic communications link to better share information about applicants. As a result of these actions, we have deleted our proposed recommendation to improve OST’s and FAA’s certification processes because, if properly implemented, these actions should mitigate several of the concerns we identified and improve the efficiency of the process for certifying new airlines. DOT officials generally agreed with our remaining recommendation, recognizing that the existing fees do not cover the government’s certification costs. But DOT has taken no action to date to reevaluate the existing fees. In addition, while legislation introduced in the Congress in September 1995 would allow FAA to charge fees for various aviation services, this legislation has not yet been enacted. Therefore, we continue to believe that DOT should review the appropriateness of its fees for certifying new airlines, either as a separate issue or as part of any broader effort to examine FAA’s fees for the services provided to the aviation industry. We conducted our review from October 1994 through December 1995 in accordance with generally accepted government auditing standards. A detailed discussion of our objectives, scope, and methodology appears in appendix I. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to the Secretary of Transportation; the Administrator, FAA; the Director, Office of Management and Budget; and other interested parties. 
We will also make copies available on request. Please call me at (202) 512-2834 if you have any questions about this report. Major contributors to this report are listed in appendix II. In August 1994, Representative James L. Oberstar, the then Chairman of the Subcommittee on Aviation, House Committee on Public Works and Transportation (now the Committee on Transportation and Infrastructure), asked us to examine the Department of Transportation’s (DOT) efforts to ensure that new airlines meet federal economic and safety standards before commencing flight operations. On the basis of subsequent discussions with the Subcommittee’s office, this report addresses three questions: (1) How many applicants have applied for and received certification to begin new airlines since 1990? (2) What processes does DOT have in place to certify new airlines? and (3) How much does it cost to certify new airlines and how are these costs distributed between the government and the applicants? To address the first question, we obtained from DOT’s Office of the Secretary (OST) a list of all the applicants that applied for new airline certification between January 1990 and July 1995. The list identified 180 applicants and gave the status of their applications as of July 1995. We also asked the Federal Aviation Administration (FAA) to verify the status of the applications. To address the second question, we reviewed pertinent federal statutes and DOT’s regulations to identify which DOT units are responsible for performing certification activities. We also reviewed OST’s criteria, procedures, and other pertinent documents outlining the requirements for determining an applicant’s fitness. We discussed these issues with the five analysts in OST’s Air Carrier Fitness Division who are responsible for assessing whether applicants have the necessary skills and resources to operate a new airline. 
We also selected a judgmental sample of 40 of the 180 applications filed with OST from January 1990 through July 1995 for detailed review to validate how OST’s process was implemented. We selected these 40 applicants because they represented a broad mix of categories of applicants and proposed operations. The 40 applicants selected included 15 of the 57 operating airlines, 7 of the 33 airlines that began but ceased operations, 7 of the 33 airlines that were found tentatively fit but had yet to begin operations or had never operated, and 11 of the 47 applicants that had withdrawn their applications or had them dismissed or denied. We did not review any of the 10 pending applications. In addition, we reviewed FAA’s criteria, procedures, and other documents used to certify new airlines and discussed them with a selected sample of 37 FAA inspectors working in the flight standards district offices we visited. We also conducted detailed reviews of a judgmental sample of files on 16 of the 57 airlines that began operations after January 1990 in order to validate how FAA’s certification process was implemented. We selected the 16 airlines because they represented a mix of carriers, including different types of airlines, fleet sizes, aircraft, and proposed operations. While examining OST’s and FAA’s criteria, documentation, and procedures for certifying new airlines, we looked for possible deficiencies in the certification processes. Additionally, we interviewed analysts in OST’s Air Carrier Fitness Division and FAA inspectors to obtain their views on what deficiencies, if any, existed in the processes and whether any efforts were under way to correct the known problems. To address the third question, we interviewed analysts in the Air Carrier Fitness Division and FAA headquarters officials and field inspectors and reviewed OST and FAA documents to determine the number of staff hours and associated costs required to certify a new airline. 
We also discussed with the officials how the costs are distributed between the government and applicants. In addition, we reviewed DOT’s regulations and the Office of Management and Budget’s guidance on charging fees for services provided by the government and the collection of fees by OST and FAA to determine the extent to which the government’s certification costs are or should be recouped. We performed our work at DOT’s Air Carrier Fitness Division within OST and at FAA headquarters in Washington, D.C. We also performed work at three of the nine FAA regional offices (Eastern, Southern, and Western Pacific) and six of FAA’s 91 flight standards district offices (Reno, Nevada; Scottsdale, Arizona; Chantilly, Virginia; and Orlando, Ft. Lauderdale, and Miami, Florida). We selected the regional and flight standards district offices to obtain geographical diversity and because these locations were responsible for certification efforts for many of the applications that FAA received between January 1990 and July 1995. We conducted our review between October 1994 and December 1995 in accordance with generally accepted government auditing standards. David K. Hooper
Pursuant to a congressional request, GAO reviewed the Department of Transportation's (DOT) processes for certifying the initial operations of new airlines, focusing on: (1) the number of applicants that applied for and received authorization to begin new airlines since 1990; and (2) the cost to certify new airlines and how the cost is distributed between the government and the applicants. GAO found that: (1) from January 1990 to July 1995, 90 of 180 applicants were authorized to begin new airline operations; (2) 33 of these 90 airlines ceased operations prior to July 1995; (3) the 90 remaining applicants were not authorized to begin airline operations because they lacked the financial resources needed to perform proposed services or the DOT Office of the Secretary (OST) had not approved their applications; (4) factors that determine whether applicants receive OST and Federal Aviation Administration (FAA) certification include the completeness of the initial application and applicant's ability to meet operation and financial criteria; (5) OST has tightened its financial standards by requiring applicants to provide third-party verification of their financial plans; (6) FAA has revised its certification process to prevent applicants lacking sufficient financial resources from proceeding into the airline certification process; (7) OST and FAA have established an electronic communication link to share information about airline applicants, but it is unknown how much this will reduce DOT resource waste; (8) applicants pay less than $1,000 to apply for airline certification, while the government pays up to $150,000 to process each application; (9) a portion of the government's cost of certifying new airlines is recouped from ticket and fuel taxes once the applicants begin operations; and (10) OST and FAA must examine the appropriateness of certification fees, since certification costs are not recovered under the fee structure.
Under the provisions of the National Aeronautics and Space Act of 1958, NASA is authorized to acquire aircraft. Since its creation, NASA has operated a small fleet of aircraft, primarily to provide passenger transportation. According to 2004 data in the General Services Administration’s Federal Aviation Interactive Reporting System, NASA is one of six civilian agencies that reported operating aircraft primarily for the purpose of passenger transportation. In fiscal year 2003, NASA reported owning and operating a fleet of 85 aircraft valued at $362 million, including aircraft dedicated to program support, research and development, and passenger transportation. NASA reported owning 53 aircraft that were used to provide support to programs such as the Space Shuttle, International Space Station, and Astronaut programs. The majority of these aircraft are located at the Johnson Space Center. For example, shuttle trainers are one type of program support aircraft. These aircraft have been modified to duplicate the shuttle’s approach profile, cockpit cues, and handling qualities so that astronaut pilots can see and feel simulated approaches and landings before attempting an actual shuttle landing. NASA reports owning 25 aircraft to support its research and development efforts. These aircraft have been modified to support the agency’s mission to conduct aeronautical research at varying altitudes and atmospheric conditions. For example, NASA operates a modified Learjet 23 as a research platform for the Airborne Terrestrial Land Application Scanner. NASA owns seven aircraft that are used to provide passenger transportation. In fiscal year 2004, NASA reported its seven passenger aircraft carried nearly 10,000 passengers and logged nearly 4 million passenger miles. Figure 1 provides an overview of the aircraft owned and operated by NASA to provide passenger transportation and their location. 
In addition, NASA obtained passenger transportation services through the Economy Act, a cooperative agreement, and a fractional ownership contract with DOD, FAA, and Flexjet, respectively. DOD—Under provisions of the Economy Act, NASA acquired additional passenger aircraft services from DOD using Gulfstream V aircraft. DOD provided documentation for three NASA flights of more than 60 flight hours during fiscal years 2003 and 2004. DOD billed NASA approximately $290,000 for these services. FAA—During fiscal years 2003 and 2004, NASA and FAA entered into a shared-use cooperative agreement for four aircraft, three of which were owned by FAA and the other by NASA. All four aircraft were housed at Reagan National Airport in Washington, D.C. In exchange for contributing its one aircraft and $1.1 million annually during 2003 and 2004, NASA received the right to 450 total flight hours per year on any of the four aircraft. Under this agreement, NASA could schedule flights on these aircraft with a minimum of 24 hours advance notice. FAA agreed to pay routine maintenance, fuel, and personnel costs associated with the NASA aircraft. NASA was also allowed to purchase additional hours, beyond the agreed 450 hours, at the hourly rate for the specific aircraft used. During the 2-year period, NASA utilized the four aircraft in this arrangement for approximately 1,600 flight hours for a reported cost of $4.5 million, which included charges for the original 900-hour agreement plus charges for additional hours. Flexjet—In October 2000, conferees on the NASA fiscal year 2001 appropriation bills directed NASA to prepare a plan that considers whether fractional ownership of passenger aircraft may be beneficial. In July 2002, pursuant to the conferee guidance, NASA awarded a contract with Flexjet for a 2-year demonstration program to determine the viability of using fractional ownership to meet NASA’s administrative air transportation requirements. 
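The hours and charges reported for the FAA shared-use agreement can be cross-checked with simple arithmetic. The following is a sketch using only the figures cited above; the blended cost-per-hour figure is our own derivation and does not appear in the agreement:

```python
# Figures reported for the NASA-FAA shared-use agreement, fiscal years 2003-2004.
base_hours_per_year = 450        # flight hours NASA was entitled to each year
years = 2
base_hours = base_hours_per_year * years   # the "original 900-hour agreement"

hours_flown = 1600               # approximate total hours NASA actually used
extra_hours = hours_flown - base_hours     # hours purchased beyond the agreement

total_cost = 4_500_000           # reported 2-year cost, base plus extra hours

# Blended cost per hour flown (a derived figure, not one reported by NASA).
cost_per_hour = total_cost / hours_flown

print(base_hours, extra_hours, cost_per_hour)
```

Under these reported figures, NASA's usage exceeded the base agreement by roughly 700 flight hours over the 2-year period.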
Under the 2-year demonstration, NASA reported costs of approximately $3.5 million in return for a total of approximately 800 flight hours of passenger transportation services. OMB Circular No. A-126 (Revised), Improving the Management and Use of Government Aircraft (May 22, 1992), prescribes policies for executive agencies to follow in acquiring, managing, using, accounting for the costs of, and disposing of government aircraft. This circular applies to all government-owned, leased, chartered, and rental aircraft and related services operated by executive agencies, except for aircraft while in use by or in support of the President or Vice President. OMB Circular No. A-126, section 6, a., provides that the number and size of aircraft acquired and retained by an agency and the capacity of those aircraft to carry passengers and cargo should not exceed the level necessary to meet the agency’s mission requirements. OMB Circular No. A-126, section 5, b., defines mission requirements to include activities related to the transport of troops and/or equipment, training, evacuation (including medical evacuation), intelligence and counter narcotics activities, search and rescue, transportation of prisoners, use of defense attaché-controlled aircraft, and aeronautical research and space and science applications. OMB Circular No. A-126, section 5, b., explicitly states that mission requirements do not include official travel to give speeches, attend conferences or meetings, or make routine site visits. In addition to the policies prescribed by OMB Circular No. A-126, agencies must also follow the guidance of OMB Circular No. A-76 before purchasing, leasing, or otherwise acquiring aircraft and related services, to assure that these services cannot be obtained from and operated by the private sector more cost effectively. 
Further, agencies must review periodically the continuing need for all of their aircraft and the cost effectiveness of their aircraft operations in accordance with the requirements of OMB Circular No. A-76 and report the results of these reviews to GSA and OMB. Agencies are to report any excess aircraft and release all aircraft that are not fully justified by these reviews. Once an agency has justified that it has a valid mission requirement for owning aircraft, OMB Circular No. A-126, section 8, a., permits agencies to use aircraft for official, but nonmission-required travel when: (1) no commercial airline or aircraft service is reasonably available (i.e., able to meet the traveler’s departure and/or arrival requirements within a 24-hour period, unless the traveler demonstrates that extraordinary circumstances require a shorter period) to fulfill effectively the agency requirement; or (2) the actual cost of using a government aircraft is not more than the cost of using commercial airlines. OMB Circular No. A-126, section 14, also provides that agencies maintain systems that will enable them to: (1) justify the cost-effective use of government aircraft in lieu of commercially available air transportation services, and the use of one government aircraft in lieu of another; (2) recover the costs of operating government aircraft when appropriate; (3) determine the cost effectiveness of various aspects of their aircraft programs; and (4) conduct the cost comparisons required by OMB Circular No. A-76 to justify in-house operation of government aircraft versus procurement of commercially available passenger aircraft services. Attachment B of OMB Circular No. A-126 also provides that agency systems must accumulate and summarize costs into the standard passenger aircraft program cost elements. For example, standard cost elements would include items such as fixed and variable crew costs, maintenance costs, fuel costs, and overhaul and repair costs. 
In addition, the General Services Administration (GSA) established governmentwide policy on the operation of aircraft by the federal government—including policies for managing the acquisition, use, and disposal of aircraft that the agencies own or hire. GSA publishes its regulatory policies in the Code of Federal Regulations (C.F.R.). GSA also publishes a number of other guides and manuals to help agencies manage the acquisition, use, and disposal of aircraft. These publications include the U.S. Government Aircraft Cost Accounting Guide, which contains information on how agencies should account for aircraft costs, and the Fleet Modernization Planning Guide, which provides guidance on developing cost-effective fleet replacement plans. NASA’s Inspector General (IG) issued two reports on NASA’s passenger aircraft, one in 1995 and another in 1999. Both NASA IG reports were critical of NASA’s management of these aircraft, identifying weaknesses in NASA’s accounting and justification for its passenger aircraft. In its 1995 report, the NASA IG reported that NASA passenger aircraft cost an estimated $5.8 million more annually when compared with commercial airline transportation. The IG recommended actions with respect to NASA’s (1) compliance with many of the provisions of OMB Circular Nos. A-126 and A-76 (including fully considering commercial airlines as an alternative to NASA operations of passenger aircraft services), (2) use of outdated and incomplete cost data to justify trips and approval of some trips without adequate justifications, and (3) use of passenger aircraft that were more expensive to operate than using commercial airline services. The IG’s 1999 report focused on one passenger aircraft located at NASA’s Marshall Space Flight Center and estimated that the cost of commercial airlines in comparison with the NASA-owned aircraft was $2.9 million less over a 5-year period. 
Similar to the 1995 report, the 1999 report was also critical of NASA’s implementation of guidance in OMB Circular Nos. A-126 and A-76. Further, the report noted that the agency had not effectively addressed actions recommended in the 1995 report concerning the need to more fully and effectively evaluate the use of commercial airlines. The IG recommended that NASA management dispose of the passenger aircraft at Marshall and instead use commercial airlines to satisfy Marshall’s air transportation requirements. NASA management disagreed with the findings of both IG reports, stating that commercial airlines cannot effectively meet all the mission requirements and the capability of NASA aircraft outweighs the marginal cost savings of total reliance on commercial airlines. An analysis of NASA’s reported costs for its passenger aircraft services shows they are an estimated five times more costly than commercial airline coach tickets. For purposes of this aggregate comparative cost analysis, we considered available NASA reported data on costs applicable to its passenger aircraft services—both variable and fixed costs—in comparison with commercial airline service costs. Specifically, to assess the aggregate costs associated with NASA-owned and -chartered passenger aircraft, we accumulated available NASA annual report passenger aircraft services cost data for fiscal years 2003 and 2004, validated to the extent feasible with industry standards, and compared these cost estimates with total estimated commercial airline costs based on the cost of an average coach ticket. We determined that NASA’s reported costs for the aircraft it owned or chartered were about $20 million more costly over a 2-year period than if NASA had used commercial airline services to carry out the same number of business trips. 
Specifically, estimated costs associated with NASA’s passenger aircraft operations during fiscal years 2003 and 2004 were almost $25 million, while we estimated the cost of commercial coach tickets for the same number of travelers would have been approximately $5 million—about $20 million more to provide NASA passenger aircraft services than if commercial airlines were used to provide passenger transportation over the 2-year period. Table 1 summarizes our analysis of commercial and NASA passenger transportation costs by types of NASA-owned or -chartered aircraft. We identified the number of passengers from NASA’s aircraft request forms and NASA annual performance reports. We then multiplied the identified number of passengers by our estimate of NASA’s average commercial coach round-trip ticket cost. We determined the average coach round-trip ticket cost of approximately $426 by analyzing all airfares purchased with NASA’s travel cards in fiscal years 2003 and 2004. Specifically, we identified approximately $49,776,000 in round-trip airfare tickets in NASA travel card purchases during fiscal years 2003 and 2004, and divided this dollar amount by the number of tickets purchased (116,865) to determine an average ticket cost of approximately $426. Finally, we compiled an estimate of NASA’s passenger aircraft service costs, which included costs related to personnel, maintenance, and fuel, from annual cost reports and budget information provided by NASA. This calculation of the difference between the relative cost of NASA- provided passenger transportation services and commercial airline costs does not consider per diem, in-transit salary and benefits, and other factors associated with using NASA passenger services. NASA officials believe that a comparison of NASA and commercial airline passenger services should include estimates of such cost savings shown in its passenger aircraft request forms. 
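The arithmetic behind these estimates can be reproduced directly from the figures reported above (a sketch; all inputs are the report's rounded numbers):

```python
# Average round-trip coach fare from NASA travel-card data, FY2003-2004.
total_airfare = 49_776_000   # dollars of round-trip tickets purchased
tickets = 116_865            # number of tickets purchased

avg_ticket = total_airfare / tickets
assert round(avg_ticket) == 426    # the ~$426 average cited above

# Aggregate comparison, rounded to millions as reported.
nasa_cost = 25_000_000       # NASA passenger aircraft services, 2-year cost
commercial_cost = 5_000_000  # estimated coach tickets for the same travelers

difference = nasa_cost - commercial_cost   # the ~$20 million gap
ratio = nasa_cost / commercial_cost        # "an estimated five times more costly"

print(round(avg_ticket), difference, ratio)
```

The ~$5 million commercial estimate is, in effect, the identified passenger count multiplied by this $426 average fare.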
We recognize that, to the extent that all passengers on the aircraft had a valid purpose for travel, there may be personnel-related cost savings associated with use of NASA’s passenger aircraft services; however, it was not feasible for us to reliably identify such costs using independent (non-NASA) sources. Further, as discussed in a subsequent section of this report, we have concerns about the reliability of some of NASA’s cost and associated savings data captured in its flight request documentation. In addition, we also identified questionable savings attributed to non-official travelers. However, NASA’s cost estimates do serve to provide indicators of general ranges of costs that may be avoided by using NASA passenger aircraft services. Using available NASA documentation of costs that would have been incurred if commercial airlines were used would increase the estimated commercial airline costs to approximately $11 million, and reduce the difference between NASA’s passenger aircraft services and commercial airlines to about $13 million over the 2-year period. Specifically, available NASA passenger aircraft services flight request documentation generally included estimated costs associated with not only airline tickets, but also estimates for salary and benefit costs associated with lost work time, per diem expenses, and rental car costs associated with the additional time required if commercial airlines were used to provide passenger transportation. Consequently, even when available NASA estimates of costs associated with commercial airline transportation services were included, a comparison with the costs of its passenger air transportation services shows that they are nearly 2.3 times more costly than commercial airlines. Our cost analysis, based primarily on data included in NASA’s annual reporting on its aircraft operations, did not include data on all relevant types of costs attributable to NASA’s passenger aircraft services. 
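The adjusted comparison works out as follows (a sketch using the report's figures rounded to the nearest million; because of that rounding, the computed gap of $14 million sits slightly above the report's more precise ~$13 million estimate):

```python
# Comparison after crediting NASA's broader estimates of avoided commercial
# costs (airfare plus lost work time, per diem, and rental cars).
nasa_cost = 25_000_000        # reported NASA passenger aircraft services cost
commercial_est = 11_000_000   # commercial estimate including NASA's add-on costs

ratio = nasa_cost / commercial_est
assert round(ratio, 1) == 2.3     # "nearly 2.3 times more costly"
```

Even under NASA's own assumptions, the in-house service remains more than twice as expensive as commercial travel.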
Consequently, the full cost of continued operation of NASA’s passenger aircraft fleet in comparison with commercial airline services would be substantially more than the $20 million estimate for fiscal years 2003 and 2004. Specifically, the following types of costs were not accounted for in NASA’s various annual reports on its passenger aircraft services. NASA’s current inventory of seven passenger aircraft is valued at more than $33 million, including two Gulfstream II aircraft purchased in 2001 for a total of about $13.9 million. An allocable portion of the acquisition and associated capital improvements to these assets is part of NASA’s annual cost of operating its passenger aircraft services. In addition, these costs may increase in the near future. A July 2004 fleet plan prepared for NASA recommended upgrading and expanding its passenger aircraft fleet as soon as possible with an initial investment of $75 million. Further, NASA is considering an investment of an estimated $1.5 million in a noise restriction package for its Gulfstream III aircraft during fiscal year 2008, making the total investment that NASA is currently considering about $77 million. Because NASA aircraft were housed on government property, NASA reported no cost for the hangar and maintenance services the aircraft received there. Industry data on hangar costs show that they total about 5 percent of total aircraft operation costs. Although the government operates under a self-insurance policy, the liability associated with operation of passenger aircraft is a cost factor that must be considered given the significant number of passenger flights taken using NASA-owned aircraft over the last 2 years. Industry estimates show liability insurance costs represent approximately 2 percent of total aircraft operating costs. 
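To illustrate the rough magnitude of the omitted hangar and liability items, the industry percentages above can be applied to NASA's reported 2-year cost. This is our own illustrative derivation, not a figure from the report, and the percentages properly apply to total rather than reported costs, so the result understates the true amounts:

```python
reported_cost = 25_000_000   # NASA's reported 2-year passenger aircraft cost

hangar_share = 0.05          # industry estimate: hangar ~5% of operating costs
liability_share = 0.02       # industry estimate: liability ~2% of operating costs

hangar_est = reported_cost * hangar_share        # roughly $1.25 million
liability_est = reported_cost * liability_share  # roughly $0.5 million

print(hangar_est, liability_est)
```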
Not only were NASA’s passenger aircraft services significantly more costly than commercial airlines, but NASA’s continued ownership of aircraft to provide air transportation supporting routine NASA business operations was not in accordance with OMB guidance. OMB guidance (1) limits the number and size of aircraft acquired and owned by an agency to carry passengers to the level necessary to meet mission requirements, including, for example, use of aircraft for prisoner transportation, intelligence and counter narcotics activities, and aeronautical research; and (2) explicitly prohibits owning aircraft to support routine business functions, including providing air transportation to attend meetings, conferences, and routine site visits. In contrast, NASA’s implementing guidance, while generally consistent with OMB guidance, was interpreted to allow acquiring and retaining aircraft for any official travel, regardless of the mission-required nature of the travel. Our analysis of available flight data showed that an overwhelming majority (86 percent) of the flights taken during fiscal years 2003 and 2004 using NASA passenger aircraft services were to support routine business operations, including attending meetings, conferences, and site visits. Excluding flights related to the Columbia accident, routine business flights accounted for about 97 percent of NASA passenger aircraft flights. Further, although OMB guidance required NASA to periodically prepare studies to determine if continued ownership of passenger aircraft was justified, the agency’s studies were either incomplete or did not consider commercial airline service alternatives. NASA’s implementation is not consistent with OMB policy on aircraft ownership. OMB Circular No. 
A-126, the governing federal policy guidance in this area, provides that agencies should own aircraft only to the extent needed to meet mission requirements, such as troop transportation, prisoner transportation, intelligence and counter narcotics activities, and aeronautical research. OMB’s policy guidance further provides that agencies should not own aircraft to provide transportation to meetings, routine site visits, and speeches. However, NASA’s implementing guidance, while generally consistent with OMB policy, does not clearly and uniformly address the federal policy limiting aircraft ownership to those assets needed to meet mission requirements. NASA Procedural Requirements (NPR), section 3.3.2, reiterates the OMB policy prohibition on using passenger aircraft to provide transportation supporting routine business operations as a basis for continuing to own aircraft. However, in the following sections (sections 3.3.2.1 through 3.3.2.5), NASA’s guidance provides that mission-required use of aircraft includes support for activities “directly related to approved NASA programs and projects.” In practice, these elaborating sections were incorrectly interpreted to mean that all travel using NASA passenger aircraft services was directly related to NASA programs or projects, regardless of whether it was of a routine, nonemergency nature. The NASA IG’s 1999 report on NASA’s passenger aircraft at its Marshall Space Flight Center also questioned whether that aircraft’s use was consistent with the OMB limitation on owning aircraft only for mission-required purposes. The audit report recommended that NASA change the definition of mission requirements in its policy guidance to conform to the definition of mission requirement stated in OMB guidance. However, in its response to the audit report, NASA management stated that there was no difference between its guidance and the OMB guidance and therefore it would not take any action to clarify its policy guidance. 
Our analysis of available documentation on flight purposes shows that NASA’s implementation of its guidance related to using aircraft in direct program or project support has resulted in owning aircraft to support meetings, conferences, and speeches in direct conflict with OMB’s policy prohibition in this area. In effect, NASA circumvented the OMB policy on restricting aircraft ownership to those needed to carry out mission requirements by operationally determining that nearly all travel using passenger aircraft services was directly related to NASA programs or projects. Our analysis of NASA passenger air transportation services for fiscal years 2003 and 2004 showed that about 86 percent of the flights were taken to support the types of routine business operations that are expressly prohibited by OMB’s guidance for aircraft ownership. Specifically, we categorized the documented flight purpose listed on 1,188 NASA aircraft request forms for NASA passenger aircraft usage during fiscal years 2003 and 2004 into 10 categories in order to determine the frequency of different uses for NASA’s passenger aircraft services. In conducting our analysis, we categorized any flight as mission required if it could be linked to OMB’s definition of mission requirements, regardless of its apparent, non-emergency nature. As a result, some flights we categorized as mission required may have actually been routine in nature. For example, in response to the 1999 NASA IG report, NASA management stated that launch support flights were required to transport NASA emergency response teams to launch sites within hours to help resolve unexpected launch-related problems. However, most launch support flights during our audit period were scheduled more than 24 hours before the flight departure date. 
Of the 19 flights we identified as directly supporting NASA launches, only 7 were scheduled less than 2 days prior to the flight, and overall the flights were scheduled an average of approximately 3 days prior to departure. In one example, on July 29, 2003, Kennedy Space Center requested the use of a NASA passenger aircraft to fly from Florida to California as launch support for the joint Canadian Space Agency/NASA Scientific Satellite Atmospheric Chemistry Experiment Mission. The flight was requested on July 29, 2003, 12 days before the flight’s August 10, 2003, departure and 14 days before the August 12, 2003, launch. We categorized this flight as being related to launch support. However, the fact that the flight was scheduled nearly 2 weeks in advance of the flight departure brings into question whether the flight was time-sensitive and indicates that commercial coach service could have been used. Figure 2 presents the results of our analysis and categorization of NASA’s use of owned and chartered aircraft over fiscal years 2003 and 2004 into 10 categories. As shown in figure 2, available data showed that about 14 percent of the flights taken using NASA passenger aircraft had a stated purpose that appeared to comply with OMB Circular No. A-126’s definition of mission required. As shown in figure 3, excluding flights related to the Columbia accident investigation, only 3 percent of NASA’s passenger aircraft activity was related to mission-required travel. Table 2 highlights examples of flights in which NASA passenger aircraft services were used to support non-mission-critical NASA business operations that are not consistent with OMB’s definition of mission-required use necessary to justify continued passenger aircraft ownership. 
The results of our interviews with passengers on such flights showed that, while use of the NASA aircraft was more convenient, better accommodated busy NASA SES-level staff schedules, and was more productive, the trip purposes could have been accomplished through travel on regularly scheduled commercial airlines. OMB Circular No. A-126 policy guidance instructs agencies to periodically conduct OMB Circular No. A-76 cost comparisons to determine whether commercial activities should be conducted using government resources or commercial sources. NASA’s A-76 studies conducted to date have asserted that because not all flight purposes could be achieved using commercial airlines, commercial airlines are not a viable alternative and were not considered in any of the studies. However, as discussed previously, our analysis of NASA passenger aircraft flights taken during fiscal years 2003 and 2004 as well as our discussions with passengers on those flights disclosed that the vast majority of the flights could have been accomplished using commercial airlines. As a result, NASA’s A-76 studies inappropriately excluded potentially more cost-effective commercial airline services from consideration. Little supporting documentation is available for four of the seven aircraft in NASA’s passenger aircraft fleet that were acquired decades ago. Consequently, it was difficult to determine how these aircraft acquisitions were justified and if there was a mission requirement justifying aircraft ownership at that time. The five NASA A-76 studies on NASA-owned aircraft did not include a comparison of NASA’s passenger aircraft costs with commercial airline costs. 
NASA’s studies compared its aircraft ownership costs against the costs of leasing aircraft to provide passenger transportation services because “commercial airlines cannot effectively meet all mission requirements.” For example, NASA’s March 2004 A-76 study was based on the assumption that NASA aircraft would be required to support mission requirements of an estimated 400 to 450 flight hours a year, essentially the total number of flight hours flown by that NASA center’s passenger aircraft during 2003 and 2004. While NASA may continue to require access to some mission-required passenger aircraft services for which commercial airlines would not be a viable alternative, assuming that all prior flight hours were mission required without first examining the purpose of these flights is not consistent with the OMB guidance. In addition to NASA-owned aircraft, as discussed previously, NASA obtained passenger aircraft services through interagency agreements with DOD and FAA and a fractional ownership pilot demonstration contract with Flexjet. These alternative approaches offer ready access to passenger aircraft without the fixed cost investment and the need to fund aircraft maintenance, pilot training, and other costs associated with aircraft ownership. For example, under NASA’s contract with Flexjet, NASA had guaranteed availability of passenger air transportation services. Specifically, the contract with Flexjet allowed NASA to schedule flights with a minimum of 8 hours advance notice. According to a NASA contractor’s December 2004 study, such arrangements for obtaining passenger transportation services provide a cost-effective alternative to agency ownership of aircraft when demand is highly variable or less than 150 to 200 hours a year. Such flexible arrangements could provide NASA with quick-turnaround access to passenger air transportation services and appear capable of having met NASA’s limited mission-required needs during the period of our review. 
Further, NASA has not performed any A-76 studies for three of its aircraft that were used as passenger aircraft. NASA purchased two Gulfstream II aircraft in 2001 as contingency backups to, and eventual replacements for, its existing shuttle trainer aircraft fleet. However, since purchasing the aircraft, NASA has been using these aircraft as part of its passenger aircraft services fleet. Subsequent changes in NASA’s long-term strategy for space flight now show that shuttles will not be used after about 2010. As a result, the continuing mission-required need to retain these aircraft is questionable. In its 1995 and 1999 reports, the NASA IG expressed concern over NASA’s exclusion of commercial airline transportation from its A-76 studies. In both reports, the IG reported that the A-76 studies NASA management performed with respect to its passenger aircraft improperly excluded a cost comparison with commercial airlines. While the IG recommended that NASA program offices responsible for passenger aircraft operations perform A-76 studies to include consideration of accomplishing air travel needs using commercial airlines, NASA management contended that because of isolated travel destinations and extremely short advance notice, commercial airlines could not meet its travel needs. However, our analysis of available documentation supporting flights taken during fiscal years 2003 and 2004 shows that most were requested more than 24 hours in advance of flight departure, and most NASA centers are located within an hour’s drive of commercial airports. NASA’s oversight and management controls over its passenger aircraft operations were ineffective. NASA lacks the systems or procedures to accumulate and use agencywide usage and cost data needed to provide the transparency and accountability necessary to effectively support day-to-day management of its passenger aircraft service operations. 
Specifically, NASA did not

- Maintain agencywide records on the purposes for which its passenger aircraft are used and their costs. Such data are critical to (1) determining whether usage is consistent with OMB guidance limiting aircraft ownership to those agencies with mission requirement needs, and (2) maintaining visibility and accountability for the full costs associated with its passenger aircraft operations. Lacking such full cost visibility and passenger accountability, NASA’s passenger aircraft services are sometimes viewed as a “free” resource by NASA project and program officials.

- Correctly justify the cost effectiveness of individual flights. These justifications were flawed in that they relied on (1) inaccurate cost data and (2) other unsupported factors used in the cost-justification calculation.

- Have processes in place to obtain reimbursements from nonofficial passengers flying on NASA-owned or -chartered aircraft. These may include NASA employee spouses and relatives, contractors, or other federal agency personnel.

NASA’s systems and procedures for accumulating detailed usage and cost data related to its passenger aircraft services were flawed. Other than data compiled once a year to meet external reporting requirements, neither NASA management nor congressional oversight officials had the agencywide aircraft usage and cost data needed to provide the transparency and accountability required to make informed decisions on continued ownership of passenger aircraft. Costs associated with ownership and operation of NASA’s passenger aircraft services were usually included in center overhead accounts that were allocated to programs based on the number of personnel assigned to programs, without regard to the extent to which program personnel actually used NASA passenger aircraft services. 
Therefore, it is not surprising that some NASA personnel expressed the view that use of NASA-owned or -chartered aircraft is a “free” resource to them, in that they did not have visibility or accountability over associated costs as part of their program or project budget execution reporting. Because NASA lacked a system for routinely collecting agencywide usage and cost data, it could not provide us with the complete and accurate agencywide information on aircraft usage and cost that we requested as part of this audit. Although each center that possesses and manages passenger aircraft is required to maintain a flight justification and manifest for each trip, the flight usage data contained in these documents are not compiled or analyzed on an agencywide basis to support decisions related to mission-required needs. Specifically, NASA data on the purposes and costs of its passenger aircraft services during fiscal years 2003 and 2004 were contained in paper flight justifications and manifests maintained at six different locations. We created a database of descriptive cost and usage data for approximately 1,200 flights using NASA-owned or -chartered aircraft for which sufficiently complete data were available. Although, as mentioned previously, we obtained evidence that NASA also utilized at least two additional program support aircraft to meet its passenger air transportation needs, the limited usage and cost data associated with flights on these aircraft did not allow us to include them in our database. Further, data on passenger aircraft services for about 200 flights at one center were missing most of the data elements on the flight request justification forms, including flight purpose and cost-justification calculations. Without agencywide data on flight purposes and costs related to its passenger aircraft services, NASA managers and Congress lack critical information they need to make key aircraft ownership decisions. 
In addition to the limited agencywide usage and cost data, we also found that the data provided by NASA, although certified by NASA management as complete and accurate, were not always complete or accurate. Our comparison of NASA-supplied data on flights taken in fiscal years 2003 and 2004 with FAA data showed that (1) data on 97 passenger flights were not included in the aircraft usage data NASA certified as complete and (2) as discussed in a subsequent section, NASA-supplied data did not always include all legs of trips taken using NASA passenger aircraft. After we identified these flights, NASA was able to provide some form of supporting documentation, although not complete in all cases, showing that the flights occurred, including proof of authorization, approval, or a determination of cost effectiveness. Examples of some of the flights not included in the data NASA officials certified as complete are summarized in table 3. Further, we identified a breakdown in controls over flight data record integrity at one center. Specifically, when we inquired about documents provided to us that did not appear to be originals, NASA officials told us that flight requests and approvals covering a 1-year period spanning parts of fiscal years 2003 and 2004 were lost and recreated after the flights took place. NASA officials stated that the loss of these important aircraft usage data was apparently not discovered until after our initial request for documentation as part of this audit. NASA officials did not inform us that documents were recreated until after we questioned inconsistencies in the documentation. “Although everything about the flight was very positive – convenience, shorter trip time, professional service, etc. – the cost was considerably more than flying a commercial airline. ... 
As much as I enjoyed the door to door service, if the travel costs had been coming out of my project I would have chosen to fly commercial.” This statement summarizes how NASA decision making on aircraft operations is distorted by the lack of complete data on the cost of using this resource. Second, NASA does not classify costs related to passenger aircraft services in its annual financial and budget reports to Congress as a cost of transportation of persons. In annual reports, one specific object expense class, object class 21, is designed to capture and disclose agencies’ costs for transporting passengers. Instead, the costs of NASA passenger aircraft services are included in overhead cost accounts, which understates the true cost of transporting NASA passengers. As discussed previously, our analysis of available estimates of NASA’s aggregate costs associated with its passenger aircraft services in comparison with commercial airline ticket costs showed that NASA’s passenger aircraft services cost about $20 million more than commercial airlines. In addition, NASA’s individual flight cost justification process for its passenger aircraft services was flawed. Our analysis of cost-comparison documentation supporting passenger aircraft flights taken during fiscal years 2003 and 2004 revealed critical flaws, including variable cost data that were 6 years out of date and unsupported cost factors. Available NASA documentation supporting NASA’s individual flight justifications for flights taken during 2003 and 2004 showed a total estimated savings of $6 million over the 2-year period. However, if these justifications had included up-to-date NASA variable costs and excluded unsupported cost factors attributed to the additional time required to use commercial airline flights, most flights would not have been approved because they would have been more costly than commercial air travel. 
Policy guidance in OMB Circular A-126 provides that an agency may use aircraft on a flight-by-flight approval basis for routine business purposes to the extent that a comparison between the agency’s specified variable costs and the costs of commercial travel shows the proposed flight is cost effective. Specifically, OMB Circular A-126, Attachment A, provides that costs of commercial travel must be compared with the variable costs of operating the agencies’ passenger aircraft and that proposed flights using agencies’ passenger aircraft for routine business purposes should only be approved if they result in a cost savings to the government. Further, OMB guidance provides that variable cost estimates used in flight-by-flight cost justification calculations are to be updated annually. This policy on flight-by-flight variable cost justification does not replace agencies’ need to first establish a valid mission requirement for owning aircraft and its overall cost effectiveness. As discussed previously, our analysis of flight purposes showed that about seven of every eight flights were for routine business travel. Consistent with OMB policy guidance, NASA regulations provide that individual cost justifications comparing estimated commercial airline travel costs with estimated variable costs associated with using NASA-owned or -chartered aircraft should be prepared prior to all passenger aircraft flights. Figure 4 provides an overview of the methodology NASA used to compare NASA and commercial costs for its flight-by-flight justifications. Several NASA centers had not updated the variable costs used in their flight-by-flight cost-comparison calculations for over 6 years. Such out-of-date variable costs significantly understated NASA’s flight-by-flight costs. For example, at two centers, the $964 variable cost per flight hour used for flight-by-flight justifications during fiscal years 2003 and 2004 was over 6 years out of date. 
According to NASA aircraft management officials, this hourly rate was last adjusted in 1998. At one center, a recalculation performed in 2005 in response to our audit increased the center’s variable cost rate from $964 to $1,828 an hour, almost a 90 percent increase. Further, even this 90 percent higher rate may understate NASA’s actual variable costs. For example, the manufacturer of the aircraft in use at that center reported a direct cost per flight hour of approximately $3,000, with estimated fuel costs alone in excess of $1,300 an hour. NASA variable costs were also understated at one center because the flight-by-flight justifications included only variable cost estimates for one round trip when the aircraft actually made two round trips to meet passengers’ transportation requirements. Our analysis of FAA flight information and flight documentation obtained from the center showed that the flight request data we were provided included estimates for only two of the four flight legs flown to complete 14 flight requests over the 2-year period of our review. For example, on August 6, 2003, NASA’s passenger aircraft transported passengers from Houston, Texas, to Pueblo, Colorado, and then returned without passengers to Houston the same day. Three days later, pilots flew an empty aircraft to Pueblo to pick up passengers and return them to Houston. Center officials stated that the additional round trips were necessary to return the flight crew to their home station, where they could be more productive performing other duties. Center officials stated that they did not include the two extra flight legs in their calculations of the variable costs associated with NASA’s passenger transportation because they classified these legs as crew training flights. Nonetheless, the costs incurred by these additional flights should be considered among the costs related to NASA’s passenger air transportation services. 
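The undercounting described above reduces to simple arithmetic. The sketch below uses the center’s recalculated $1,828-per-hour rate from this report, but the per-leg flight time is a hypothetical assumption for illustration only.

```python
# Illustrative sketch: counting only passenger-carrying legs understates
# the variable cost of a trip like the Houston-Pueblo example.
VARIABLE_COST_PER_HOUR = 1828.0  # center's 2005 recalculated rate (report figure)
HOURS_PER_LEG = 2.0              # hypothetical Houston-Pueblo flight time per leg

def trip_variable_cost(legs, rate=VARIABLE_COST_PER_HOUR, hours=HOURS_PER_LEG):
    """Variable cost of a trip given the number of flight legs actually flown."""
    return legs * hours * rate

# Only the two passenger-carrying legs, as in the flight justifications.
cost_as_justified = trip_variable_cost(legs=2)   # 7312.0

# All four legs flown, including the two empty repositioning legs.
cost_as_flown = trip_variable_cost(legs=4)       # 14624.0
```

Under these assumptions, omitting the empty repositioning legs halves the apparent variable cost of the trip.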
In addition to understatements of NASA’s variable costs, NASA’s flight-by-flight cost comparisons were also flawed in that they increased the cost associated with flying commercially by using a largely unsupported multiplier of 2.5. NASA could not provide any specific NASA-related empirical evidence to validate use of the multiplier in its flight-by-flight justification process. NASA used this multiplier in addition to factors for time and salary costs accounted for in its cost-justification calculation. The use of a multiplier to increase the value of an employee’s time beyond his or her salary and fringe benefits is not expressly provided as part of OMB’s Circular No. A-126 guidance. Further, cognizant OMB officials told us that it was not their intent that agencies use any such multiplier (beyond the salary and fringe benefits associated with any time savings) in determining whether proposed flights were cost effective. They also stated they were not aware of any agencies using such a multiplier in their flight justification calculations. While NASA officials informed us that they had been using this multiplier for a number of years and that they believed it was a conservative factor, they did not provide any documentation demonstrating the appropriateness of the multiplier as it applies specifically to the experiences of NASA personnel who used these aircraft. Consequently, lacking such documentation, NASA’s use of a 2.5 multiplier improperly overstates the costs of commercial alternatives. The overall effect of understating NASA costs and overstating commercial costs in NASA’s flight-by-flight justifications was that NASA incorrectly approved individual flights as cost effective. For example, NASA justified one round trip from Kennedy Space Center, Florida, to Burbank, California, as cost effective, calculating a savings of $4,800. 
NASA calculated a cost savings for the flight because it used a 1998-based variable cost factor for the NASA plane of $964 per hour and also multiplied the travelers’ salary cost savings by the unsupported 2.5 multiplier. If the variable cost were updated to NASA’s 2004 estimate of $2,528 per hour for that aircraft and the unsupported multiplier removed, the estimated variable costs associated with the proposed NASA passenger aircraft flight would have exceeded estimated commercial airline costs by $17,408. Further, even after incorporating NASA’s unsupported estimate that employee fringe benefits increase employee direct salary costs by an additional 50 percent, the NASA aircraft variable costs for this flight would still have exceeded commercial costs by about $16,000. NASA lacks procedures to consistently and effectively identify and recover the applicable costs of operating government aircraft when nonofficial passengers fly on NASA-owned or -chartered aircraft. As a result, nonofficial travelers were provided free transportation using NASA’s passenger aircraft services. However, because of the lack of procedures and documentation concerning the determination of the official status of travelers, we could not determine, and more importantly NASA could not determine, whether any of the travelers should have, but did not, reimburse the government for the cost of their transportation. According to OMB Circular No. A-126, travelers flying on a space-available basis on government aircraft for a purpose other than the conduct of official agency business generally must reimburse the government for the full coach fare. Reimbursement at the government rate for the cost of coach tickets would have covered about one-fifth of NASA’s reported costs associated with the use of NASA’s passenger aircraft services. 
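The effect of the stale hourly rate and the 2.5 multiplier on the approval decision can be sketched as follows. Only the $964 and $2,528 hourly rates and the 2.5 multiplier come from this report; the flight hours, airfare, and lost-time salary cost below are hypothetical assumptions chosen for illustration, not figures from the Kennedy-Burbank example.

```python
# Hypothetical sketch of the flight-by-flight cost comparison flaw.
def nasa_variable_cost(flight_hours, rate_per_hour):
    """NASA's estimated variable cost for operating its own aircraft."""
    return flight_hours * rate_per_hour

def commercial_cost_estimate(airfare, lost_time_salary_cost, multiplier=1.0):
    """Estimated cost of flying commercially. NASA applied a 2.5 multiplier
    to the salary cost of time lost to commercial schedules; OMB guidance
    provides no basis for any multiplier beyond salary and fringe benefits."""
    return airfare + multiplier * lost_time_salary_cost

HOURS = 11.0        # hypothetical round-trip flight time
AIRFARE = 2556.0    # hypothetical: 6 travelers x $426 average round-trip fare
TIME_COST = 4000.0  # hypothetical salary-plus-fringe cost of extra travel time

# As justified: 1998-era rate and the unsupported 2.5 multiplier.
nasa_stale = nasa_variable_cost(HOURS, 964)                               # 10604.0
commercial_inflated = commercial_cost_estimate(AIRFARE, TIME_COST, 2.5)   # 12556.0

# As corrected: updated rate, multiplier removed.
nasa_updated = nasa_variable_cost(HOURS, 2528)                            # 27808.0
commercial_plain = commercial_cost_estimate(AIRFARE, TIME_COST)           # 6556.0

approved_as_justified = nasa_stale < commercial_inflated   # True
approved_as_corrected = nasa_updated < commercial_plain    # False
```

Under these assumptions, the flight appears cost effective as NASA calculated it but is far more costly than commercial travel once the rate is updated and the multiplier removed, mirroring the pattern the report describes.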
However, NASA has not implemented agencywide policies and procedures to ensure that such travelers reimburse the government for the corresponding coach cost. Processes in place at each of the six centers to obtain reimbursements ranged from none at all to ad hoc procedures that essentially relied on individual travelers to identify and submit payments. For example, at one center, NASA’s procedures consisted of notifying NASA travelers of the need to obtain reimbursement from nonofficial travelers flying on the aircraft but did not provide for any follow-up to monitor and collect the requisite amount from non-NASA travelers. In one case, between September 2, 2004, and September 6, 2004, several Kennedy Space Center employees and their families and contractors and their families used the center’s Gulfstream II aircraft to fly to Washington, D.C., in advance of Hurricane Frances. According to center officials, the center was required to evacuate the aircraft from the path of the approaching hurricane, and a decision was made to transport the contractor pilots, mechanics, and their families over 800 miles north to Washington. After flying the contractors and their families out of the area, the aircraft returned to pick up other center management personnel, personnel associated with the aircraft management, and their families and flew them to Washington. A NASA official stated that at least one of the passengers on these flights should have reimbursed the government for a portion of the cost of their transportation. However, the official did not know if such reimbursement was obtained. NASA officials at two other centers stated that they had not obtained reimbursements or had no documentation showing the extent to which reimbursements from nonofficial passengers on NASA flights were identified and obtained. 
In addition, we identified over 100 other travelers, classified by NASA as dependents, flying on NASA passenger aircraft who may have been nonofficial travelers. These passengers may have been required to reimburse NASA for a portion of the costs of their transportation. As NASA strives to carry out its new vision for the future of the agency, using its resources as efficiently as possible will be a growing fiscal challenge. Operating what is essentially its own small passenger airline service, while potentially providing certain benefits to the agency and its employees, costs an estimated five times more than if commercial airlines were used to provide these services. Further, NASA’s ownership of aircraft to support essentially routine business operations is in direct conflict with OMB’s policy prohibition on such uses, and passenger interviews showed that in almost all cases the travel could have been accomplished using commercial airlines. NASA management has disagreed with, and taken only limited action with respect to, similar prior audit recommendations in this area, and insufficient management attention and agencywide oversight have allowed NASA to continue this costly program for decades. The cumulative effect has been failures in effectively justifying the extent to which such passenger aircraft services are needed to address critical, time-sensitive mission requirements, as well as in effectively determining the extent to which these services could be provided without incurring the substantial fixed operation and maintenance costs associated with aircraft ownership. Immediate action to dispose of all aircraft not needed to address mission requirements and to adopt more flexible, less costly alternatives for satisfying future mission requirements would best position NASA to meet its stewardship responsibilities for the taxpayer funds it receives and better enable it to meet its current fiscal challenges. 
Congress should consider whether legislation is necessary to ensure that (1) NASA disposes of all of its passenger aircraft not used in accordance with OMB’s explicit policy prohibition against owning aircraft to support travel to routine site visits, meetings, speeches, and conferences; and (2) funding for future NASA passenger aircraft purchases and operations is restricted to those necessary to meet mission requirements consistent with OMB guidance. To the extent that Congress determines that NASA should continue to retain aircraft or passenger aircraft charter services to provide passenger transportation, we recommend that the Administrator of NASA take the following six actions:

- Establish policies and procedures for accumulating and reporting on its passenger aircraft services to provide complete and accurate agencywide cost and utilization data to support oversight and decision making on operating and retaining such aircraft services.

- Clarify policies and procedures applicable to aircraft acquisition and retention to limit the number and type of aircraft owned and chartered for passenger transportation to those necessary to meet the “mission-required” criteria in OMB guidance.

- Periodically assess the extent to which NASA has a continuing need to own aircraft to provide passenger transportation in support of mission requirements in accordance with OMB guidance.

- Maximize the use of flexible, cost-effective arrangements to meet mission-required passenger air transportation service needs in lieu of aircraft ownership.

- Revise existing policies and procedures used to determine if individual flights are justified to include use of up-to-date variable costs and limit commercial cost estimates to airfare, in-transit salaries and fringe benefits, and other costs directly related to reasonable estimates of delays incurred in meeting commercial airline flight schedules, in accordance with OMB and GSA guidance.

- Establish agencywide policies and procedures for identifying and recovering applicable costs associated with nonofficial personnel traveling using NASA passenger aircraft services on a reimbursable basis.

In its written comments, the NASA Administrator concurred with our recommendations and set out several actions to address the identified deficiencies. Specifically, he said NASA would review its policies and procedures related to aircraft management to ensure they are aligned with OMB requirements and conduct a comprehensive study of the agency’s passenger aircraft operations, to be completed by October 31, 2005. These actions are consistent with the intent of our recommendations to NASA and, if carried out fully and effectively, will help address the deficiencies we found. If NASA’s study referred to above is carried out effectively and fully considers the various matters discussed in this report, it should provide the Congress valuable information for deciding whether legislation may be needed on this matter. NASA’s comments on a draft of this report are reprinted in appendix II. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of the report to interested congressional committees. We will also send copies of this report to the Office of Management and Budget and the General Services Administration. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7455 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
To assess reasonableness of costs, use, and agency oversight and management of the National Aeronautics and Space Administration’s (NASA) passenger aircraft services, we met with officials of NASA’s Office of Infrastructure and Administration Aircraft Management Office and appropriate officials at the Johnson Space Center, Houston, Texas; Marshall Space Flight Center, Huntsville, Alabama; Kennedy Space Center, Cape Canaveral, Florida; Wallops Flight Facility, Wallops Island, Virginia; and Dryden Flight Research Center, Edwards, California. We reviewed aircraft utilization and management reports prepared by NASA and its contractor and aircraft operations budget/cost information, including annual Aviation Financial Reports for fiscal years 2003 and 2004, Annual Aviation Report: Aircraft Performance for fiscal years 2003 and 2004, and NASA’s 2004 Mission Management Aircraft Fleet Plan. At each center, we observed and assessed the process for managing passenger aircraft services and scheduling and justifying costs for individual flights. We also reviewed available documentation supporting the various cost-justification factors and multipliers NASA used to estimate the variable costs of using its passenger aircraft as well as the alternative costs of using commercial airline transportation. As part of our effort, we collected and compiled available flight-by-flight NASA passenger aircraft cost and usage data from NASA mission management aircraft request forms. These forms provided such descriptive data as dates and purpose for travel, itinerary, passengers, levels of approval, and cost justification for aircraft use. We asked for all documentation maintained at NASA centers for flights flown using NASA’s passenger aircraft services during fiscal years 2003 and 2004. We selected the most recently completed 2-year period because NASA’s regulations specify retaining source documents related to passenger aircraft usage for at least 2 years. 
While incomplete, we ultimately obtained some type of documentation indicating that NASA passenger aircraft services during this 2-year period included about 1,500 flights. However, because of the limited amount of supporting documentation available for several hundred flights, we included only 1,188 flights in our analysis. For example, we could not use any of the approximately 200 flights from the Dryden Flight Research Center in our analysis because few of the requested documents included all the usage and cost data necessary for such an analysis. To independently verify the reliability and completeness of individual flight source documentation maintained at NASA centers, we compared the NASA-provided flight information with information on NASA aircraft flights maintained by the Federal Aviation Administration’s (FAA) Enhanced Traffic Management System and reconciled differences. Further, while not included in our analysis, we obtained documentation showing that 2 of NASA’s aircraft classified as program support aircraft were also used to provide passenger transportation; however, we did not attempt to determine whether any of NASA’s other program support or research aircraft may also have been used to provide passenger transportation as part of this audit. Also, we did not review the effectiveness of safety or maintenance programs related to NASA’s passenger aircraft services. To analyze the relative costs of NASA’s passenger aircraft services compared with commercial airline costs, we relied primarily on available NASA cost data from NASA Aviation Financial Reports. We validated these data where feasible against comparable independent data sources, including industry data. For example, we contacted the manufacturers of both types of passenger aircraft used by NASA to validate that the cost estimates used in our analysis were similar to the manufacturers’ cost metrics for operating those aircraft. 
We used fiscal year 2003 and 2004 cost data in annual Aviation Financial Reports reported by each center that operated one or more of NASA’s passenger aircraft and the costs NASA incurred for chartering passenger aircraft from other government agencies or contractors. At one center where reported cost data for passenger aircraft in the Aviation Financial Report were combined with data for other agency aircraft, we used annual budget data that were limited to passenger aircraft operations. At NASA headquarters, we used annual cost data provided by NASA because its annual Aviation Financial Reports did not contain costs associated with NASA’s use of Federal Aviation Administration aircraft. Finally, costs from NASA’s Report on the Fractional Aircraft Demonstration Program were used to determine the total cost of Flexjet flights. We used NASA Mission Management Aircraft Request forms to determine the estimated cost for flights taken on Department of Defense (DOD) aircraft. We then compared reported costs of NASA aircraft operations and aircraft charter costs with our estimates of travel costs that NASA would have incurred had the passengers who flew on NASA’s aircraft during our 2-year test period used commercial airline transportation instead. To estimate the commercial transportation costs of NASA employees who traveled using NASA’s passenger aircraft, we used the average commercial airline round-trip fare of $426 for all flights flown by NASA employees during this same time period, as reported in a database of NASA travel card transactions provided by NASA’s contractor, Bank of America. This average commercial round-trip airfare estimate is intended to approximate NASA’s passenger transportation costs if it had used commercial airline services instead of its own services. As such, it may reflect amounts that in some cases would exceed NASA’s actual commercial costs. 
For example, to the extent that unofficial travelers were included in our passenger estimates, commercial costs would be overstated. Conversely, in other cases our estimate may have underestimated NASA’s costs. For example, costs may have been understated to the extent that such travel involved passenger aircraft services to remote locations or locations with limited commercial air service. To determine the number of travelers who flew on NASA-owned and -chartered aircraft during the 2-year period, we used the number of passengers identified on individual hard-copy flight manifest documentation NASA provided to us. During the course of our review, we became aware of additional flights flown at some centers for which we were not provided flight manifest documentation. However, we were unable to obtain and analyze documentation for these additional flights in time to complete our analysis. To the extent that passengers flew on flights for which individual flight documentation was not provided to us, our estimate of commercial airfare costs is understated. At the Dryden Flight Research Center, where individual hard-copy flight documentation did not contain complete information, we used the number of passengers the center reported to NASA headquarters for inclusion in annual aircraft performance reports. We did not use the numbers of passengers reported for all centers because the centers reported their passenger counts inconsistently and we were unable to validate them. Although the number of passengers reported on individual flight manifests often included passengers who flew only one way or on one or more legs of the trip, we counted these partial-trip passengers as having flown round-trip for purposes of estimating the commercial costs of passengers flown on NASA’s aircraft. 
Consequently, in this respect, our estimated savings are likely to be understated in that including these partial-trip passengers in the total number of passengers overstated our estimate of airfare costs that NASA would have incurred had the passengers traveled on commercial airlines. Conversely, our estimated savings may be overstated because our estimated commercial travel costs did not include additional lodging and other incidental costs that travelers would periodically incur, as well as salary costs for additional lost work time. To estimate the costs associated with work time lost to commercial airline travel, including salary and benefit costs, per diem, rental cars, commercial tickets, and other costs, we used cost estimates included in NASA individual flight request forms. For the 1,188 flights for which we received data, we multiplied NASA’s estimated salary costs by the lost work hours and the number of travelers, and increased the result by NASA’s benefit factor of .5. Because OMB guidance recognizes accounting for fringe benefit costs, we used NASA’s estimated fringe benefits factor of .5 to increase passengers’ salary costs, even though NASA did not provide support for that factor. In addition to salary costs, we also included available NASA estimates for additional per diem, commercial tickets, rental cars, and other travel costs associated with lost work time from using commercial airline services. For one aircraft, we did not receive any flight justification cost estimates. Instead, the location operating the aircraft had developed standard calculations for the average commercial cost for its two common flight patterns. We averaged the estimated commercial cost for the two flight patterns to determine the average cost savings per traveler for the aircraft. We then multiplied the commercial cost by the number of travelers NASA reported for the aircraft during fiscal years 2003 and 2004 to determine the total commercial cost of transportation for travelers on the aircraft. 
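The lost-work-time estimate described above amounts to a simple formula: salary cost times lost hours times travelers, increased by the benefit factor, plus any per diem and other travel costs. A minimal sketch, assuming hypothetical salary and travel figures; only the .5 fringe benefits factor is taken from the report.

```python
# Sketch of the lost-work-time cost calculation described above.
# Only the .5 fringe benefits factor is from the report; the hourly
# salary, hours, traveler count, and other costs below are hypothetical.
BENEFIT_FACTOR = 0.5

def lost_work_time_cost(hourly_salary: float, lost_hours: float,
                        travelers: int, other_travel_costs: float = 0.0) -> float:
    """Estimate the cost of work time lost to commercial airline travel.

    Salary costs are increased by the .5 benefit factor; per diem,
    rental cars, commercial tickets, and other costs are added on top.
    """
    salary_cost = hourly_salary * lost_hours * travelers * (1 + BENEFIT_FACTOR)
    return salary_cost + other_travel_costs

# Hypothetical trip: 4 travelers each losing 6 work hours at $50/hour,
# plus $800 in per diem and other travel costs.
print(lost_work_time_cost(50, 6, 4, 800))  # 2600.0
```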
To assess whether NASA aircraft were operated and retained in accordance with applicable governmentwide guidance, we primarily reviewed the Office of Management and Budget (OMB) Circular No. A-126, Improving the Management and Use of Government Aircraft; and Circular No. A-76 (Revised), Performance of Commercial Activities. We also reviewed applicable governmentwide guidance in OMB Circular No. A-11, Preparation, Submission and Execution of the Budget Part 7: Planning, Budgeting, Acquisition and Management of Capital Assets (Revised June 2005); General Services Administration’s (GSA) Federal Property Management Regulations, 41 C.F.R. Subtitle C; and Federal Travel Regulations, 41 C.F.R. Subtitle F. We also reviewed NASA’s implementing publications, NASA Policy Directive (NPD) 7900.4B, NASA Aircraft Operations Management (April 2004); NASA Policy Regulation (NPR) 7900.3A, Aircraft Operations Management (April 1999); and center-specific implementing instructions. We held discussions regarding these policies and procedures with officials of OMB’s Office of Federal Procurement Policy, Transportation/GSA Branch, and Science and Space Branch; GSA’s Office of Government-wide Policy; and NASA’s Office of General Counsel and NASA Center and program managers. At each center, we observed the process for managing aircraft operations and for scheduling and justifying individual flights, and we interviewed managers and program officials to discuss the extent to which they assessed the need and justification for owning or leasing passenger aircraft. We analyzed the purpose cited by NASA for individual flights flown during our 2-year test period to determine whether NASA’s stated purpose complied with criteria established in OMB and GSA guidance. 
We interviewed agency personnel who requested, approved, and/or were passengers on approximately 80 flights during our 2-year test period to ensure that we understood the purpose for the flights and the basis for using NASA’s aircraft. We did not assess the adequacy of safety or maintenance programs related to NASA’s passenger aircraft. Further, we did not attempt to determine the validity or appropriateness of travel using NASA’s passenger aircraft, nor did we assess whether the type and number of personnel on the NASA passenger aircraft were appropriate given the stated flight purposes. To assess the effectiveness of NASA’s oversight and management of its passenger aircraft operations, we held discussions with appropriate aircraft management officials at NASA headquarters and centers operating passenger aircraft. We also identified and assessed (1) NASA’s implementing policies and procedures with respect to OMB and GSA policy guidance, (2) the process used to approve and document passenger aircraft utilization, (3) associated aircraft management reports, (4) other recent assessments and studies done with respect to NASA passenger aircraft services, and (5) the extent to which accurate, current agencywide data were available to agency managers for day-to-day decision making on passenger aircraft usage and costs. We briefed NASA officials on the details of our audit, including findings and their implications. On June 28, 2005, we requested comments on a draft of this report. We received comments on July 28, 2005, and have summarized those comments in the Agency Comments and Our Evaluation section of this report. NASA’s comments are reprinted in appendix II. We conducted our work from November 2004 through June 2005 in accordance with U.S. generally accepted government auditing standards and quality standards for investigations as set forth by the President’s Council on Integrity and Efficiency. In addition to the contact named above, Mario L. Artesiano, James D. 
Berry, Fannie M. Bivins, Latasha L. Brown, Matthew S. Brown, Harold J. Brumm, Carey L. Downs, Richard T. Cambosos, Francine M. Delvechio, Francis L. Dymond, Dennis B. Fauber, Geoffrey B. Frank, Diane G. Handley, Alison A. Heafitz, Christine A. Hodakievic, Jason M. Kelly, Jonathan T. Meyer, George J. Ogilvie, James W. Pittrizzi, Kristen M. Plungas, John J. Ryan, Sidney H. Schwartz, Joan K. Vogel, and Leonard E. Zapata also made key contributions.
Since its creation, the National Aeronautics and Space Administration (NASA) has operated passenger aircraft services. These operations have been questioned in several prior audit reports. GAO was asked to perform a series of audits of NASA's controls to prevent fraud, waste, and abuse of taxpayer dollars. In this audit, GAO assessed (1) the relative cost of NASA passenger aircraft services in comparison with commercial costs, (2) whether NASA aircraft services were retained and operated in accordance with governmentwide guidance, and (3) the effectiveness of NASA's oversight and management of this program. NASA-owned and -chartered passenger aircraft services provide a perquisite to employees, but cost taxpayers an estimated five times more than flying on commercial airlines. While the majority of NASA air travel is on commercial airlines, NASA employees took at least 1,188 flights using NASA passenger aircraft services during fiscal years 2003 and 2004. Use of NASA passenger aircraft services can save time, provide more flexibility to meet senior executives' schedules, and provide other less tangible and quantifiable benefits. However, GAO's analysis of available reported data related to NASA passenger aircraft services during fiscal years 2003 and 2004 showed that NASA's reported costs were nearly $25 million, compared with estimated commercial airline coach transportation costs of about $5 million. Further, this relative cost comparison, based on available NASA reported costs, did not take into account all applicable types of costs associated with its passenger aircraft services, including, for example, depreciation associated with the estimated $14 million NASA paid in 2001 to acquire several aircraft used for passenger transportation. Consequently, NASA's passenger air transportation services are much more costly than indicated by available data. 
Further, NASA is currently considering additional expenditures of about $77 million to upgrade and expand its existing passenger fleet. NASA's ownership of aircraft used to provide passenger transportation conflicts with federal policy allowing agencies to own aircraft only as needed to meet specified mission requirements, such as prisoner transportation and aeronautical research. GAO's analysis of NASA passenger aircraft flights for fiscal years 2003 and 2004 showed that an estimated 86 percent--about seven out of every eight flights--were taken to support routine business operations specifically prohibited by federal policy regarding aircraft ownership, including routine site visits, meetings, speeches, and conferences. Further, NASA's agencywide oversight and management of its passenger aircraft services were not effective. NASA's ability to make informed decisions on continued ownership of its passenger aircraft fleet and on flight-by-flight justifications was impaired by the lack of reliable agencywide data on aircraft costs and other weak management oversight practices.
Many participants along the entire drug supply chain are affected by shortages. A typical drug supply chain involves a drug manufacturer selling a drug to a wholesale distributor, which then sells the drug to a hospital or pharmacy. (See fig. 1.) Shortages of drugs can result in a variety of problems that directly affect the care patients receive. For example, recent research on the effects of drug shortages identified an increase in adverse outcomes among pediatric cancer patients treated with an alternative drug. Further, in some cases, drug shortages can even contribute to additional health problems. For example, one stakeholder said that recent shortages of drugs that supply an essential nutrient, like calcium, could lead to nutrient deficiencies among patients. FDA is responsible for overseeing the safety and effectiveness of drugs marketed in the United States. Within FDA, the Center for Drug Evaluation and Research (CDER) manages these responsibilities. FDA’s approval is required before new drugs and generic drugs can be marketed for sale. To obtain FDA’s approval for a new drug, sponsors must submit a new drug application (NDA) containing data on the safety and effectiveness of the drug as determined through clinical trials and other research for review by CDER’s Office of New Drugs. Sponsors of generic drugs may obtain FDA approval by submitting an abbreviated new drug application (ANDA) to the agency for review by CDER’s Office of Generic Drugs. The ANDA must contain data showing, among other things, that the generic drug is bioequivalent to, or performs in the same manner as, a drug approved through the NDA process. 
After obtaining FDA’s approval, drug companies that want to change any part of their original application—such as changes to product manufacturing location or process, type or source of active ingredients, or the product’s labeling—must generally submit an application supplement to notify FDA of the change and, if the change has a substantial potential to have an adverse effect on the product, obtain FDA’s approval. CGMP regulations provide a framework for a manufacturer to follow to produce safe, pure, and high-quality drugs. (See 21 C.F.R. pts. 210-211.) In some cases, FDA may exercise its regulatory discretion and assess whether the risks of either taking a certain enforcement or other action or refraining from taking action will outweigh the benefits, such as when an action may cause or exacerbate a drug shortage. For example, if a manufacturing deficiency is identified, such as overfilled vials or the presence of contaminants, the manufacturer should take appropriate corrective and preventive actions, or FDA may issue a warning letter or take an enforcement action to require the manufacturer to do so. Similarly, FDA may request manufacturers of drugs whose labeling is not consistent with the labeling approved by FDA to correct such labeling, or it may take a regulatory action to require the manufacturers to do so. In 1999, FDA established the CDER Drug Shortage Program—now known as DSS—to coordinate issues related to drug shortages. Once DSS determines that a shortage is in effect or a potential shortage is pending, it contacts manufacturers of the drug to collect up-to-date information on inventory of the drug, demand for the drug, and manufacturing schedules. DSS may also coordinate its response with several offices, including the Office of Generic Drugs, the Office of New Drug Quality Assessment, and CDER’s Office of Compliance—the office responsible for minimizing consumer exposure to unsafe, ineffective, and poor quality drugs. 
DSS may also work with FDA’s Office of Regulatory Affairs—the office within FDA that oversees imports, inspections, and enforcement policy—and the manufacturer to help resolve any underlying problem a manufacturer is facing. If the shortage is of a controlled substance, FDA may work with the Drug Enforcement Administration (DEA) on any issues related to quotas for the production of the drug. When FDA is informed of a potential shortage in advance, it may take steps to prevent the shortage, such as providing assistance to address manufacturing problems or expediting its review of a manufacturer’s proposed approach to responding to quality concerns. In addition, FDA can expedite inspections of manufacturing establishments to facilitate the marketing of an alternative to a drug in shortage or can expedite inspections once remediation to address quality problems has been completed. FDA officials said that they take steps to address shortages of both medically necessary drugs and non-medically necessary drugs, though they give priority to shortages of medically necessary drugs. FDA may also expedite its review of applications for additional sources of a drug in shortage, or of supplements to ANDAs or NDAs, to provide additional capacity for production of an already approved drug. While there are a number of steps FDA can take to address a shortage, FDA cannot require manufacturers to start producing or continue to produce a drug. It also cannot require manufacturers to maintain or introduce manufacturing redundancies in their establishments to provide them with increased flexibility to respond to shortages. On October 31, 2011, the President issued an Executive Order that directed FDA to use its authority to encourage manufacturers to report drug supply disruptions earlier, to expedite regulatory review, when possible, to prevent or mitigate drug shortages, and to communicate to the Department of Justice any findings by FDA that shortages have led market participants to stockpile shortage drugs or sell them at exorbitant prices. 
In November 2011, we found that weaknesses in FDA’s ability to respond resulted in a predominantly reactive approach to addressing shortages, although this was partially due to the fact that, at the time our report was issued, FDA did not have the authority to require manufacturers to notify the agency of most impending shortages. Our previous report contained several recommendations for FDA, including assessing the resources that FDA allocates to its DSS; developing an information system to manage data on shortages; ensuring that FDA’s strategic plan articulates goals and priorities for maintaining the availability of drugs; and developing results-oriented performance metrics related to FDA’s response to drug shortages. FDA outlined actions it planned to take which were consistent with these recommendations. Subsequently, the enactment of FDASIA in July 2012 resulted in several new requirements for FDA and manufacturers intended to address drug shortages. (See table 1 for a summary of these provisions.) The number of drug shortages remains high, with almost half of critical shortages involving generic sterile injectable drugs. Provider association representatives identified challenges in responding to drug shortages without adversely affecting patient care. The number of drug shortages reported each year remains high, although there was a decrease in 2012 relative to the record number of new shortages reported in 2011. We found that from 2007 through 2011, the number of drug shortages reported increased each year, with a record 255 shortages reported in 2011. However, in 2012, for the first time since 2006, there was a decrease in the number of drug shortages reported. Specifically, in 2012, 195 shortages were reported, which was a 24 percent decrease from 2011. As of June 30, 2013, 73 shortages had been reported in 2013. (See fig. 2.) 
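The year-over-year change cited above can be verified with a one-line calculation; the shortage counts are the report's figures.

```python
# Check of the report's year-over-year percentage decrease.
def percent_decrease(earlier: int, later: int) -> int:
    """Percentage decrease from one year's count to the next, rounded."""
    return round((earlier - later) / earlier * 100)

# Shortages reported: a record 255 in 2011, then 195 in 2012.
print(percent_decrease(255, 195))  # 24
```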
Over half (55 percent, or 622) of the 1,132 shortages reported since January 1, 2007, were for drugs that were in shortage more than once. Specifically, 240 drugs were in shortage on multiple occasions between January 1, 2007, and June 30, 2013, representing 622 individual shortages. For shortages reported since January 1, 2007, the duration of the shortages varied, ranging from 1 day to over 5 years. The majority of shortages—68 percent—lasted 1 year or less. The average duration of the drug shortages over this period was 340 days—slightly less than a year. (See fig. 3.) Provider association representatives told us that a number of the challenges that we reported in 2011 were still relevant for their members, including delays in or rationing of care, difficulties finding alternative drugs, risk associated with medication errors, higher costs, reduced time for patient care, and hoarding or stockpiling of drugs in shortage. During a shortage, providers may have to cancel or delay procedures, which can have detrimental health effects on patients. Providers may also have to ration care by prioritizing the patients who have a greater need for the drug. For example, provider association representatives said that if a drug is used in patients across age groups, but is essential for the care of newborns, a hospital may institute a policy that the drug can only be administered to newborns and will no longer be administered to adults. In addition, representatives from the provider associations noted that identifying effective, alternative drugs for those in shortage can be difficult. In some cases, it may not be possible to find a suitable alternative. For example, representatives from one of the associations we spoke to said that emergency service providers have reported significant difficulties finding alternative medications for stopping seizures and are concerned with the viability of alternative therapies in certain emergency situations. 
A representative from another association said that when effective alternatives are identified and located, medication errors may increase because the dosage of the alternative drug may differ from what providers are accustomed to using. Drug shortages may result in higher drug costs as well as greater risks to patients. To obtain drugs in short supply, providers may turn to suppliers they do not typically use, including authorized alternative suppliers, compounding pharmacies, or gray market suppliers—those not authorized by the manufacturer to sell the drug—who typically obtain small quantities of a drug that is in short supply and offer it for purchase at an inflated price. Drugs from alternative suppliers can cost significantly more and, in the case of compounding pharmacies and gray market suppliers, may pose risks to patients. An outbreak of fungal meningitis in 2012 linked to contaminated compounded drugs—resulting in over 60 deaths and hundreds of people becoming ill—has led to questions about the safety and quality of compounded drugs. Because the origin of a gray market drug may be unknown, there is no assurance that it was stored and transported appropriately. As a result, patients who receive treatment with such drugs may experience adverse events or receive inadequate or inappropriate treatment. (See app. III for a description of steps federal agencies have taken to respond to gray market activities.) Managing drug shortages also can detract from patient care. Providers may develop and institute policies for distributing drugs in short supply to patients, which some provider association representatives said may take time away from caring for patients. They may also need to become familiar with new products and different dosages, which may increase the risk for medication errors and take time away from patient care. 
Representatives from one provider association said that hospital pharmacists might need to devote more time than usual to work with the physician prescribing a drug that is in shortage to determine an appropriate therapeutic alternative. In some instances, providers have hired full-time staff whose positions are entirely devoted to managing drug shortages. One provider association representative said a well-known hospital system has eight full-time employees who only work on addressing drug shortages. However, a few representatives noted that smaller providers may not have the resources to hire full-time employees, and existing staff may have to take on additional responsibilities in order to respond to shortages. Finally, the mere threat of a potential shortage can cause problems for patients and providers. While some provider association representatives reported that the lack of advance notice of a shortage hinders their ability to respond, most of the provider organizations we spoke with expressed concern that reports of an impending shortage can lead to the hoarding or stockpiling of drugs, making it more difficult to access the drugs. Quality problems resulting in supply disruptions, coupled with constrained manufacturing capacity, were frequently cited as the immediate causes of recent drug shortages. However, we also identified multiple potential underlying causes of shortages, all of which were related to the economics of the generic sterile injectable drug market. The most frequently cited immediate cause for a drug shortage was that a manufacturer halted or slowed production after a quality problem was identified, resulting in a supply disruption. These supply disruptions were linked to, among other things, such problems as bacterial contamination or the presence of glass or metal particles in drug vials. Representatives from all eight manufacturers that we interviewed said that quality problems have contributed to recent shortages. 
Our analysis of FDA data shows that 40 percent of the shortages reported between January 1, 2011, and June 30, 2013, resulted from quality concerns, such as particulate matter or plant maintenance issues. In addition, most of the studies we reviewed (16 of 20) reported that concerns about product quality that led to supply disruptions have been the immediate cause of most shortages. For example, one analysis found that quality problems were linked to the majority of shortages of sterile injectable drugs. Another study found that the immediate cause of 46 percent of all drug shortages in 2011 was a quality problem. According to this study, the specific issues contributing to recent shortages have ranged from an inability to ensure the sterility of products to the identification of particulate matter in products. According to another study, some of the largest manufacturers of sterile injectable drugs have had quality problems that they chose to address by temporarily closing or renovating their establishments, thereby reducing or temporarily ceasing manufacturing of multiple drugs and leading to supply disruptions. Some of the temporary plant closures were proactively undertaken by the manufacturers themselves, while others were undertaken as part of their response to a warning letter from FDA. Such plant closures to address quality problems with certain drugs or production lines can result in shortages of other drugs manufactured at these establishments, including those not associated with quality problems. One study noted that many shortages that are classified as being caused by delays and capacity issues are technically caused by supply disruptions related to quality. Our analysis of FDA data indicates that manufacturing delays or capacity issues accounted for 30 percent of the shortages reported between January 1, 2011, and June 30, 2013. 
FDA officials told us that delays or capacity issues that triggered shortages typically involved temporary shutdowns or slowdowns undertaken to perform maintenance or, in many recent cases, for remediation efforts, which then caused supply disruptions. Although quality problems were a frequently cited issue, there was not complete agreement as to whether quality problems were truly the trigger for the supply disruptions that cause shortages. Specifically, one study concluded that FDA has applied CGMPs more rigorously in its inspections of manufacturing establishments, resulting in a greater number of quality problems being identified and thus leading to manufacturing supply disruptions that then triggered shortages. Another study suggested that an increase in FDA inspections of injectable drug manufacturing establishments without evidence of an increase in quality problems has contributed to shortages of generic sterile injectable drugs. In addition, one manufacturer representative noted that FDA investigators throughout the country may differ in their interpretations of CGMPs, which the representative said creates uncertainty about whether a manufacturer’s current processes will be found to be in compliance during an FDA inspection. Therefore, from this manufacturer’s perspective, FDA’s compliance actions have been the primary cause of shortages with quality problems being a secondary cause. A second manufacturer representative said that though quality problems have contributed to recent shortages, from the manufacturer’s perspective, quality standards have also been raised. However, one study countered the claim that FDA’s enforcement has changed by stating that manufacturers often identify quality problems and it is their discoveries that trigger FDA inspections in the first place, rather than an increase in agency scrutiny. 
This study also noted that CGMPs, which provide a framework for a manufacturer to produce safe, pure, and high-quality drugs, have not changed in recent years. One manufacturer representative concurred that the CGMPs themselves had not changed. However, this representative noted that as manufacturing technology advances, the expectations of FDA investigators as to what represents quality manufacturing may advance as well. For example, as new manufacturing equipment becomes available, FDA investigators may expect manufacturers to install the new equipment, even if using older equipment will result in drugs of the same quality. Although not as prominently cited in the literature or the FDA data as quality problems, we identified a number of additional factors that can cause supply disruptions and ultimately result in shortages. Permanent product discontinuations: Permanent product discontinuations were another immediate cause of shortages. Our analysis of FDA data shows that 12 percent of the shortages reported between January 1, 2011, and June 30, 2013, resulted from product discontinuations. According to several studies, the generic drug market is extremely concentrated, with few manufacturers producing each drug. For example, one study found that most generic sterile injectable drugs are made by three or fewer manufacturers. As a result, the discontinuation of a drug by a single manufacturer can have a significant impact on drug availability. Two studies noted that older generic drugs may be discontinued in favor of producing newer drugs that are more profitable or that have more demand. Three manufacturer representatives said that they take a number of factors into account when determining whether to discontinue manufacturing a drug. The first manufacturer representative said that in addition to price, they will also account for factors such as the medical necessity and importance of the drug when making decisions about what drugs to manufacture. 
The second manufacturer representative said that they do not discontinue products if they know that doing so would cause a shortage or exacerbate an existing one, although the same representative also said that products with low sales or profitability may be de-emphasized in favor of producing drugs with greater demand. The third manufacturer representative also noted that it is likely that a lower-margin product would be discontinued rather than a higher-margin product. Two manufacturer representatives said that if a drug is already in shortage, they will try to continue to manufacture it, even at low or negative profit margins, to ensure that the drug remains in the market. Unavailability of raw materials or components: The unavailability of raw materials, such as an active pharmaceutical ingredient (API), and non-API components, such as vials, also contributes to shortages. The majority of the studies we reviewed cited the unavailability of raw materials or non-API components as a cause of shortages and two reported that there is often only one API source for a given drug. This dependency on a sole API source can lead to shortages if availability becomes a problem, regardless of the number of manufacturers of a particular product. Most of the manufacturers’ representatives agreed that the unavailability of API has caused some shortages, although some representatives said that it was a relatively small percentage of them. For example, a representative from one manufacturer said that the 2011 tsunami in Japan disrupted the API supply for one of its products and led to a shortage. Although the manufacturer ultimately identified another source for the needed material, FDA had to approve the manufacturer’s new source, which was time-consuming. In addition, two manufacturer representatives mentioned that issues with non-API components had contributed to shortages. 
Our analysis of FDA data shows that 9 percent of the shortages reported between January 1, 2011, and June 30, 2013, resulted from the unavailability of APIs or non-API components. Loss of a manufacturing site or site change: Our analysis of FDA data shows that 3 percent of the shortages reported between January 1, 2011, and June 30, 2013, were due to either the loss of a manufacturing site or site change. One manufacturer representative said that while manufacturers have experienced disruptions due to natural disasters, this has been rare. In the literature, half of the studies (10 of 20) mentioned natural disasters, such as floods or hurricanes, as a cause of shortages. Loss of, or damage to, a manufacturing site was typically given as an example of the supply disruption resulting from the disaster. Increased demand: In addition to events that result in changes in supply, we found that shortages can also be triggered by changes in demand. Our analysis of FDA data shows that 6 percent of the shortages reported between January 1, 2011, and June 30, 2013, were due to increased demand. An increase in demand, which may occur for a variety of reasons, such as the approval of an already marketed drug for a new indication or new therapeutic guidelines, can trigger a shortage. A shortage results because manufacturers cannot keep up with the increase in demand that exceeds their expectations or planned production. Increased demand was cited as a cause of shortages in 9 of the 20 studies we reviewed. Figure 6 summarizes information reported by manufacturers to FDA about the causes of drug shortages that the agency then analyzes and categorizes. The inability of other manufacturers to make up for supply disruptions experienced by their competitors due to constrained manufacturing capacity was another immediate cause of shortages. 
Several of the studies we reviewed generally concluded that the heavy concentration of the generic drug industry leaves few manufacturers available to respond to supply disruptions, leading to market-wide shortages. One study found that seven manufacturers dominate the generic sterile injectable market overall and also found that this market is even further concentrated for specific therapeutic classes. Specifically, this analysis indicated that in 2008, three manufacturers produced 71 percent of all generic sterile injectable oncology drugs and that three manufacturers held 91 percent of the market share of generic sterile injectable nutrients and supplements. Illustrating the interplay between supply disruptions and constrained manufacturing capacity, one 2010 study reported that a shortage of the generic sterile injectable anesthesia drug propofol resulted after one of the three manufacturers of the drug permanently discontinued its manufacturing and another experienced quality problems leading to a temporary halt in production, leaving the remaining manufacturer unable to meet the demand of the entire market. Beyond the small number of manufacturers overall, we also found that the manufacturing capacity of the generic drug industry has been further strained in recent years as the industry has expanded the number of generic products it manufactures. This expansion has occurred as a large number of brand-name drugs have lost patent protection, clearing the way for generic manufacturers to produce generic equivalents of these drugs. Two of the studies we examined cited the decisions of manufacturers to begin producing the generic equivalents of brand-name drugs as contributing to shortages by stretching already limited capacity. 
For example, one study found that the generic sterile injectable market had expanded by 52 percent between 2006 and 2010 without a commensurate increase in manufacturing capacity, leading to high utilization of available manufacturing capacity. Representatives from two manufacturers said that, faced with limited capacity, when new generics are available for production a manufacturer may decide to stop producing some drugs to make room for the new products. As a result, such discontinuations could lead to shortages, but representatives from both manufacturers characterized this as a small factor in causing shortages. In addition to the challenge presented by having few manufacturers and the increase in the number of generic drugs, pressure to produce this large number of drugs on only a few manufacturing lines leaves the manufacturers that do participate in the generic sterile injectable market with little flexibility. Since multiple drugs are often manufactured on the same line, increasing production of one drug reduces the supply of other drugs and can lead to shortages. For example, one manufacturer representative said that there are usually anywhere from 30 to 50 different drugs manufactured on a given line. Further, according to one study, manufacturers of generic sterile injectable drugs do not typically have redundant manufacturing facilities. Specifically, in a review of almost 900 generic sterile injectable applications that were submitted to FDA and approved between 2000 and 2011, the authors found that only 11 applications (about 1 percent) referenced a backup facility. This becomes problematic when production in a given facility must stop for any reason, as manufacturers cannot immediately move production of a drug to another facility. To do so, they must first obtain approval from FDA, which can further delay production. 
A related constraint cited in the literature is that some generic sterile injectable drugs need to be manufactured on lines or in facilities dedicated solely to those drugs. One study noted that certain sterile injectable products, such as anti-infective and oncology drugs, require lines, and sometimes whole facilities, that are limited to the production of such drugs. For example, some anti-infective drugs, such as penicillin, are highly sensitizing and can trigger serious allergic reactions at very low levels; as a result, they may be limited to specific manufacturing lines. Further, another study noted that a supply disruption on a dedicated line can result in shortages of multiple products of a similar type, such as oncology drugs, because other manufacturers are not able to step in due to limited capacity on their own lines. For example, shortages of oncology drugs in 2011 were linked to just three dedicated oncology lines that were operated by two manufacturers. A final capacity-related constraint cited in the literature (9 of 20 studies) was how the widespread use of “just-in-time” inventory practices can increase the vulnerability of the supply chain to shortages. One of these studies said that most manufacturers only produce enough of a drug to satisfy current demand, so there is little, if any, excess inventory, while another study asserted that when a manufacturer has to stop production, a supply disruption can result because of “just-in-time” inventory. One manufacturer representative said that any manufacturer typically has only a limited amount of inventory available. This representative said that manufacturers typically have about 2 to 3 months of inventory on hand, wholesale distributors usually have about 1 month, and providers have only a few weeks of inventory. Consequently, if an issue arises, a shortage can quickly result. 
The majority of manufacturer representatives we interviewed generally concurred with our finding from the literature that a supply disruption, for whatever reason, affecting one manufacturer can quickly lead to market-wide shortages because other manufacturers often cannot increase production enough to meet demand. For example, one manufacturer representative said that it had recently encountered capacity constraints when two other manufacturers experienced supply disruptions and one manufacturer exited the market. The representative from the manufacturer that remained said that the company did not have the capacity to ramp up production to meet the demand for all of the drugs at risk of shortage and thus had to prioritize which drugs to produce, based on market need and the severity of the shortages. Further complicating a situation like this, representatives of two manufacturers said that even if the remaining companies are able to increase their production of a drug whose supply has been disrupted, it can take time—as much as 3 months—to increase production, particularly for sterile injectables due to the complexity of manufacturing these products. We identified multiple potential underlying causes of drug shortages in the literature. Half of the studies (10 of 20) we reviewed suggested that the immediate causes of drug shortages, such as quality problems, are driven by an underlying cause that stems from the economics of the generic sterile injectable drug market. The studies that cited underlying causes did not all focus on the same underlying cause, and manufacturer representatives had mixed views on the potential underlying causes we identified in the literature. One underlying factor we identified in the literature is that when choosing between different manufacturers of the same drug, purchasers may focus primarily on price. 
Six of the 20 studies mentioned either low prices or low profit margins as features of the generic drug market that may make it vulnerable to shortages. Two of the studies suggested that low profit margins in the generic market may affect manufacturers’ decisions to invest in their facilities. Another of the six studies said that purchasers expect all generic drugs to be of equivalent quality and may be unable to discern differences in the quality of drugs, particularly sterile injectables. As a consequence, purchasers of sterile injectables focus on price when choosing among seemingly identical generic manufacturers at the expense of any potential differences in quality and the ability to reliably meet customer demand. According to this study, a manufacturer that strives to exceed minimum manufacturing standards is not rewarded with a willingness among buyers to pay more for the manufacturer’s products. Therefore, using economic theory as a rationale, the authors suggested that this reduces the incentive for the manufacturer to sufficiently invest in maintenance or quality improvements at its manufacturing establishments. The study suggests that the lack of reward for quality is an underlying cause that may have led to manufacturers’ minimizing investment in establishments, which has ultimately resulted in many of the recent quality problems at generic sterile injectable manufacturing establishments. Five of the six drug manufacturer representatives that responded to this claim reported that manufacturers continue to invest in upgrading existing establishments and building new ones. One manufacturer representative stated that some generic products in manufacturers’ portfolios are highly profitable and prompt investments in manufacturing facilities. Another manufacturer representative said that they continue to invest in making improvements to their sterile injectable facilities. 
For example, they said that they have invested in spare capacity on some of their lines, and as a result, are now better equipped to ramp up production in response to a shortage. Group purchasing organizations (GPOs), which negotiate purchasing contracts with drug manufacturers on behalf of hospitals and other health care providers, have been cited in the literature as potentially having an underlying role in causing drug shortages. Four of the 20 studies suggested that the operating structure of GPOs results in fewer manufacturers producing generic drugs and that this, in turn, contributes to a more fragile supply chain for these drugs. For example, one of the four studies asserted that GPOs reduce profits in the generic drug market, where margins are already low. This study states that because of these low manufacturer profit margins, when production problems arise, manufacturers may stop producing certain products in lieu of making investments in improvements at their establishments. Another of the four studies theorized that when generic drug manufacturers fail to win GPO contracts, manufacturers will either exit or decide not to enter the market for those drugs, contributing to the immediate cause of constrained manufacturing capacity. All of the representatives of the three GPOs that we contacted disagreed with the claim that GPOs are a cause of shortages. They emphasized that they have an incentive to avoid drug shortages and ensure that the drug manufacturers with which GPOs contract can meet GPOs’ members’ needs. Further, they said that while price is an important consideration in determining the manufacturers with whom they contract, the ability of manufacturers to ensure an adequate supply of products is critical. According to one GPO representative, generic drug manufacturers are generally profitable, which the GPO representative said demonstrates that manufacturers are not being driven out of the market. 
All of the GPO representatives also noted that in recent years GPOs have instituted strategies to avoid shortages. For example, one GPO representative told us that it typically tries to contract with two or more manufacturers for drugs that have a recent history of being in shortage. Of the five manufacturer representatives who commented on the claim, three stated that GPOs may contribute to shortages by exerting downward price pressure. However, one manufacturer representative disagreed that GPOs were a cause and a second manufacturer representative said that GPOs had no more of a role in causing shortages than any other supply chain participant. While the second representative said that GPOs contribute to the pressure to lower prices, the representative also noted that every participant in the supply chain contributes to the price competition. A third manufacturer representative noted that, because manufacturers have already made investments in production that they are unwilling to abandon, failing to obtain a GPO contract does not cause them to exit the market for a given drug. Further, representatives from the second and third manufacturers also told us that, in the event that a major manufacturer does not obtain a GPO contract, the manufacturer may send its sales force to hospitals directly and offer a price that is lower than the GPO contract price. Hospitals may either accept this lower priced offer or seek additional price concessions from the contracted manufacturer through the GPO. A change in Medicare Part B drug reimbursement policy was also cited in the literature (5 of 20 studies) as an underlying cause of drug shortages. In 2005, a change was implemented in how providers are reimbursed for most Medicare Part B drugs administered in an outpatient setting. 
Three studies we reviewed suggested that this change resulted in a sharp decrease in reimbursement to providers. One of the studies, which focused on oncology drugs, suggested that this decrease in reimbursement caused providers to switch to higher-cost drugs for which they would receive increased reimbursement, reducing demand for generics. The other two studies suggested that this decrease in reimbursement to providers also resulted in lower prices for manufacturers. Two of the three studies suggested that manufacturers responded by exiting the market for these products entirely, while the third suggested that manufacturers reduced their investments in manufacturing facilities, both of which left the generic sterile injectable market vulnerable to shortages. Four of the five manufacturer representatives who responded to this claim did not view the change in Part B reimbursement policy as a main cause of shortages, though they said it could have complicated the generic market. For example, one manufacturer representative contended that it had a negligible impact at most as the reimbursement is paid to physicians, not manufacturers. The payments that manufacturers collect are several steps removed from the physician’s reimbursement. In addition, the representative stated that even if some providers switch to more expensive alternative drugs in response to the reimbursement change, the impact would be minimal as the vast majority of generic sterile injectables are administered in hospital inpatient departments, which means that they are not reimbursed through Medicare Part B. Finally, though drugs in at least one of the therapeutic classes that have most frequently been in shortage in recent years may be reimbursed through Part B based on the Average Sales Price methodology, the extent to which all of the therapeutic classes driving recent shortages are reimbursed in this manner is unclear. 
Figure 7 summarizes the key immediate and potential underlying causes of drug shortages that we found in our review of the literature. Through a variety of efforts, FDA has prevented more potential shortages and improved its ability to respond to shortages since we issued our report in 2011. Among other things, FDA is working to improve its response to drug shortages by implementing FDASIA’s requirements and the recommendations we made in 2011. However, FDA lacks policies and procedures for managing and using information from its drug shortage database. FDA has taken steps that have prevented more potential shortages and improved its ability to resolve existing drug shortages since 2011, including expediting review of ANDAs and supplements, working with manufacturers to increase production, and using its regulatory discretion to allow certain products to remain on the market or bring new products to market. Based on our analysis of FDA data from January 2011 through June 2013, FDA was able to prevent 89 potential shortages in 2011, 154 potential shortages in 2012, and 50 potential shortages through June 2013. This is more than the 35 potential shortages we found that FDA prevented in 2010 and the 50 prevented through June 2011. FDA officials told us that although they relied on many of the same steps to prevent and resolve shortages prior to the enactment of FDASIA, FDASIA’s requirement that manufacturers notify FDA in advance of a potential shortage allowed FDA to employ those steps sooner. FDA officials said the notification requirement has helped the agency become more proactive and successful in its efforts. FDA officials noted that there has been a sizeable increase in notifications: a six-fold increase after issuance of the drug shortages Executive Order in October 2011, a subsequent doubling of that rate after the enactment of FDASIA in July 2012, and a return to the post-Executive Order notification rate in 2013. 
FDA has expedited a number of agency actions to prevent or resolve shortages, in accordance with relevant FDASIA provisions. For example, FDA may expedite the review of ANDAs, or supplements to NDAs and ANDAs, to help bring an alternative drug to market or authorize an additional API supplier or manufacturing site. FDA officials said the agency has also expedited inspections to facilitate improvements at manufacturing establishments. Expediting inspections that are required before an ANDA or supplement is approved also facilitates the availability of a needed drug. Manufacturer representatives we spoke with noted that in some cases expedited reviews or inspections have happened quickly and have helped prevent shortages. However, others told us that some application reviews or inspections have taken a long time, limiting the manufacturers’ ability to help prevent or resolve a shortage. For example, one manufacturer representative said waiting for FDA’s approval of ANDA supplements related to new raw material suppliers has been a key hindrance to the manufacturer’s ability to respond to drug shortages. In addition, FDA routinely contacts manufacturers regarding their ability to increase production in response to a potential or actual drug shortage. Although a number of manufacturer representatives said that ramping up production takes time and may not always be possible, given production capacity constraints, FDA has reported some successes. For example, when FDA determined that an impending product discontinuation might result in a shortage of a drug that treats shingles and chickenpox, it encouraged another manufacturer to increase production, thus avoiding a shortage. Similarly, when quality problems were identified in a drug used to treat eye infections in patients with acquired immune deficiency syndrome, FDA reached out to another manufacturer that was able to increase production to avert a potential shortage. 
Manufacturer representatives said manufacturers will generally increase production, if possible, when FDA advises them of a shortage. FDA reported to us that, from January 1, 2011, to June 30, 2013, its encouragement to manufacturers to increase production helped prevent or resolve 41 shortages. FDA officials said that in appropriate cases, the agency may attempt to use its regulatory discretion to keep products from going into short supply or from making an active shortage worse. FDA may use its discretion in deciding whether to take a certain action. FDASIA requires FDA to consider whether an enforcement action or issuance of a warning letter could reasonably cause or exacerbate a shortage of a life-saving drug. If FDA reaches such a determination, the agency must evaluate the risks associated with the impact of such a shortage upon patients and the risks associated with the violation before taking action, unless there is an imminent risk of serious health consequences or death. FDA officials said that they had used their regulatory discretion prior to the enactment of FDASIA and were continuing to do so, through communication across various FDA offices and with manufacturers. Officials noted that they try to balance the risk to patients when making their decisions. That is, they consider the risk of allowing the continued distribution of the product—despite the problems related to the possible enforcement action or warning letter—against the public health risks of the product not being available. FDA officials said they also continue to use their regulatory discretion to temporarily allow the importation of “unapproved drugs” into the United States to help prevent or resolve shortages of FDA-approved drugs that are critical to patients, in rare cases where the shortages cannot be resolved by manufacturers willing and able to supply the FDA-approved drugs in the immediate future. 
We previously reported that FDA had allowed for the importation of seven unapproved drugs from January 2011 through September 2011. FDA officials told us that, through June 30, 2013, they have subsequently allowed for the importation of nine additional unapproved drugs. For example, when the manufacturer of a drug used to treat patients who require total parenteral nutrition lost the use of a manufacturing site, FDA allowed importation of a comparable version of the drug not approved by FDA to prevent a potential shortage from occurring. Several stakeholders commented that FDA’s efforts to allow the importation of unapproved drugs to address a shortage have improved, which has helped to resolve some critical shortages. However, some stakeholders noted that certain shortages could not be resolved quickly because it took a long time for FDA to respond to providers’ requests to allow importation. For example, some stakeholders noted that delays in the importation of total parenteral nutrition products created significant challenges for treating patients who depend upon them. To help speed up the process of temporary importation, FDA officials said that since January 2012 they have proactively identified foreign manufacturers that have expressed a willingness to import their drugs to help with a shortage. Officials said this has allowed them to reach out to companies more quickly and has already helped the agency address one shortage. FDA has also reported using its regulatory discretion in other ways. For example, the manufacturer of a drug that may slow the progress of the human immunodeficiency virus and acquired immune deficiency syndrome lost its component supplier and was forced to find a new one. However, this new supplier was experiencing a quality problem. 
FDA used its regulatory discretion to allow the manufacturer to use the new component supplier while quality problems were being addressed after it determined those issues posed no significant risks to public health. In another instance, FDA used its regulatory discretion to allow the continued marketing of a drug, despite a manufacturing deviation, after determining the benefits of having the drug available outweighed the risk associated with the manufacturing error. FDA is taking steps to further enhance its ability to respond to shortages. Some of the agency’s actions are required by FDASIA, some are in response to recommendations we made in 2011, and others were initiated by the agency. For example, FDA has established the Drug Shortages Task Force as required by FDASIA. FDA officials said the Drug Shortages Task Force has helped FDA revise internal policies and procedures, track the development of the proposed regulations for implementing the manufacturer notification requirements, and generally coordinate across the agency on issues related to drug shortages. FDA officials also noted that, as required by FDASIA, they have continued to work with DEA on shortages related to controlled substances. FDA’s drug shortages website describes how providers and others can report a potential drug shortage. The e-mail address and toll-free number listed on the website are the main mechanisms through which FDA receives drug shortage-related reports from health care providers or other third-party groups. FDASIA requires FDA to maintain an up-to-date list of drugs that it determines to be in shortage and—subject to public health, trade secret, and confidentiality concerns—make the list publicly available. FDA officials told us that, upon receiving notification of a potential drug shortage, the agency works to verify whether a shortage exists by contacting the drug’s manufacturer to determine supply levels and comparing those levels with industry sales data on historical demand for the product. 
Once FDA determines that the amount of the drug—or pharmaceutical equivalents—appears to be insufficient to meet demand, the agency posts information about the shortage on its drug shortages website. Though FDA takes steps to respond to all shortages about which the agency is informed, officials said the agency places the highest priority on responding to shortages of drugs that it considers medically necessary. Nevertheless, officials said that FDA’s website includes all verified shortages, regardless of whether the drug is determined to be medically necessary. However, FDA officials noted that the agency may not be notified of all potential shortages, because FDASIA only requires manufacturers to report disruptions in the production of drugs that are life supporting, life sustaining, or used to treat debilitating health issues. Though health care professionals can also notify FDA about potential shortages, FDA officials told us that the agency is less likely to be notified about shortages of drugs for which there are easy substitutes or little patient impact. In July 2012, as required by FDASIA, FDA began classifying the reasons for shortages using standardized terminology specified in the law and posting this information on its website along with information on estimated shortage duration. Also, though not required by FDASIA, in July 2013 FDA added information on the therapeutic categories of drugs in shortage to its drug shortages website. This change allows users, for example, to view all oncology shortages in one place, rather than having to review the entire list of drugs in shortage and identify individual drugs used in oncology themselves. FDA plans to improve the functionality of the website further by allowing users to sort shortages by other types of information as well. A number of stakeholders noted that the improvements to FDA’s drug shortage website help them keep informed about drug shortages. 
However, some expressed disappointment with the passive nature of the website, as stakeholders must proactively visit it rather than receive automated alerts. A number of stakeholders noted that notifications of shortages by therapeutic class would be particularly helpful for communicating the potential for a shortage to targeted groups earlier. FDA officials said that they plan to add active alerts by therapeutic category to the website. Although FDA continues to refine the information on its website, it is nonetheless dependent on what manufacturers report to the agency: the reported reasons for a shortage and the estimated length of the shortage. The information FDA receives from manufacturers and other sources may be incomplete and may change over time. As a result, the information on the website may not always be current and accurate. Stakeholders noted that the reasons given for the shortages are often categorized as “other,” which can make it difficult to understand why a drug is in shortage. FDA officials said they use the category “other” when none of the classifying terminology required by FDASIA directly applies, although FDA officials said they try to include available details to help explain the cause of the shortage. In addition, FDA may not be able to publicly post all of the information the agency receives from manufacturers because some of the information provided to FDA is proprietary. Although some stakeholders reported that information on the duration of a shortage is one of the most useful pieces of information that can be provided, others noted that the estimated shortage resolution dates on FDA’s website are not always reliable. Two stakeholders said that the estimated duration for a shortage is often listed as “unavailable” or “to be determined,” which is not particularly helpful. Stakeholders noted that such inaccuracies may limit their ability to plan ahead. 
For example, a representative of one provider group noted that in order to plan for the multiple rounds of a patient’s chemotherapy regimen, the provider would need to be sure that there will be a sufficient supply of the drug for the second round of chemotherapy before starting the first. Manufacturer representatives said the complexity of a manufacturing disruption often makes it difficult to provide FDA accurate estimates of the time it will take to resolve the disruption. FDA has also taken steps to respond to the recommendations we made in our 2011 report. In response to our recommendation that FDA develop an information system that would allow drug shortage data to be tracked in a systematic manner—to be consistent with the internal control standards for the federal government—the agency developed a drug shortage database that is used on a daily basis to track shortages, document the actions FDA takes to prevent and resolve shortages, and monitor the workload of DSS personnel. All FDA offices can access the database; however, the officials we spoke with said they request information from DSS instead of accessing the database directly. In September 2013, FDA informed us that it is now planning to transition from its existing database to an information system with additional capabilities and functionality. For example, FDA officials said they are planning to automate some of the data fields by extracting information from other sources that provide NDCs, market share, and other relevant product information. This may reduce the likelihood of manual entry errors and speed up the entry of some shortage information. The officials said the establishment of an information system could also help facilitate analysis related to drug shortages. However, they were unable to provide us with a description of the types of analyses they would conduct. FDA has also taken steps to respond to our recommendation related to the resources allocated to the drug shortage program. 
FDA has since increased the number of DSS personnel from 4 in 2011 to 11 in 2013, and officials said this has improved FDA’s ability to respond to drug shortages in a number of ways. First, it allowed FDA to assign each manufacturer experiencing a shortage a specific contact person, which FDA officials said has allowed the agency and the manufacturers to develop better working relationships and has improved information sharing. Representatives from one manufacturer we spoke with agreed that this effort has improved their relationship with FDA. In addition, a number of stakeholders, including other manufacturers’ representatives, noted it is now easier to contact DSS officials and that discussions have become more regular. Second, FDA officials said having additional staff has allowed them to respond more quickly to manufacturer notifications and to identify possible approaches to preventing or resolving a shortage. Some stakeholders also noted that FDA reached out to them for additional information on specific drug shortages or the availability of certain drugs. Third, officials said it has allowed DSS to play a bigger role in revising drug shortage policies and procedures. FDA also improved the staffing resources available for responding to drug shortages by assigning drug shortage coordinators in each of its 20 district offices. In addition, it developed written procedures to enhance coordination between headquarters staff in DSS, the CDER Office of Compliance, and staff in the district offices on issues related to drug shortages. FDA officials told us that the drug shortage coordinators have helped bring drug shortage-related concerns to light earlier, such as violative inspections at establishments that manufacture a large volume of drugs. Officials said this has improved FDA’s ability to work with such manufacturers early in order to prevent drug shortages. 
FDA held a retreat in July 2012 to educate the drug shortage coordinators and other staff on FDA’s processes for responding to drug shortages. The retreat included a number of FDA offices, including CDER Office of Compliance, DSS, Office of Generic Drugs, Office of New Drug Quality Assessment, and Office of Regulatory Affairs, and officials said the retreat helped attendees understand drug shortage responsibilities of the various FDA offices. As required by FDASIA, FDA’s Drug Shortages Task Force developed a strategic plan that identifies its goals and priorities for mitigating and resolving ongoing shortages and for preventing future shortages. This is also in line with our 2011 recommendation that FDA ensure that the agency’s strategic plan articulates goals and priorities for maintaining the availability of all medically necessary drugs. Though FDA officials said the agency has not made this change in the agency-wide strategic plan, FDA’s drug shortages strategic plan includes two goals related to maintaining drug availability, each with a number of tasks for achieving the goal. The first goal—to improve and streamline FDA’s current mitigation activities once the agency is notified of a supply disruption or shortage—includes four tasks: streamline internal FDA processes; improve data and response tracking; clarify roles and responsibilities of manufacturers; and enhance public communication about drug shortages. The second goal—to develop prevention strategies to address the underlying causes of production disruptions to prevent drug shortages— contains three tasks: develop methods to incentivize and prioritize manufacturing quality; use regulatory science to identify early warning signals of shortages; and increase knowledge to develop new strategies to address shortages. 
As part of this second goal, the strategic plan describes efforts FDA is considering to help address manufacturing and quality issues, including broader use of manufacturing metrics to assist in the evaluation of manufacturing quality and the development of incentives for high-quality manufacturing. Finally, FDA officials said that their annual report on drug shortages, which was due December 31, 2013, will contain information on performance measures to assess and quantify the implementation of the agency’s goals and response to drug shortages, as we recommended in 2011. As of January 31, 2014, the annual report had not been released. While FDA is planning to establish a new information system to track drug shortage data, it lacks policies, procedures, and specific training materials related to management and use of its existing drug shortage database. While FDA did create a database glossary, which briefly defines a number of the data fields, an official told us that no other documents or training materials have been created because staff use the existing database every day and are therefore familiar with its operation. Further, while FDA officials said they plan to create policies for entering data in the planned new drug shortage information system and create a tutorial for users, they have not yet done so. This lack of documentation may limit the agency’s ability to communicate proper use of the existing and new databases to staff and could also ultimately lead to inconsistencies in the use of the database. The lack of policies and procedures is also inconsistent with internal control standards for the federal government, which state that agencies should have controls over information processes, including procedures and standards to ensure the completeness and accuracy of processed data. For example, internal controls require the appropriate documentation of system controls and that such documents be readily available for review. 
Such documentation may include management directives, administrative policies, and operating manuals, none of which have been prepared for the existing database. Related to FDA’s lack of policies and procedures for its existing drug shortage database, we also found that FDA lacks sufficient controls to ensure the quality of the data in the existing database. For example, FDA officials said there are no automated data checks to ensure the accuracy of the data in the database. Instead, officials review the data for accuracy at the end of each year by relying on their memories of events, emails, and meeting notes. The first such data check was completed in 2012. Officials said they plan to perform another such review at the end of 2013, in preparation for the annual report to Congress. This practice is inconsistent with the internal control standards for the federal government, which require agencies to design controls, such as data checks, that help ensure the completeness, accuracy, and validity of database entries. Without such data checks, FDA’s existing database may be more likely to have errors, incomplete data, and inconsistent data. We asked officials to provide us with any documentation of their 2012 review of the existing database for accuracy, and they were unable to do so. FDA officials said they plan to incorporate automated data checks in their new information system, which may eliminate the need for subsequent manual quality checking. FDA officials told us that, as of January 2014, any new drug shortages would be entered into their new information system. In addition, FDA has not conducted routine analyses of its existing drug shortage database to identify, evaluate, and respond to the risks of drug shortages proactively. Again, according to the internal control standards for the federal government, agencies should comprehensively identify risk through qualitative and quantitative methods, including data collected in the course of their work. 
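As a rough illustration of the kind of automated data checks the internal control standards contemplate, the sketch below validates a single shortage record for completeness, validity, and internal consistency. The field names, status values, and NDC pattern are hypothetical assumptions for illustration, not FDA's actual database schema.

```python
# Minimal sketch of automated data checks for a shortage record.
# All field names and allowed values are hypothetical, not FDA's schema.
import re
from datetime import date

REQUIRED_FIELDS = ("drug_name", "ndc", "start_date", "status")
VALID_STATUSES = {"potential", "active", "resolved"}
NDC_PATTERN = re.compile(r"^\d{4,5}-\d{3,4}-\d{1,2}$")  # common NDC segment shapes

def check_record(record):
    """Return a list of problems found in one shortage record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing {field}")          # completeness check
    if record.get("ndc") and not NDC_PATTERN.match(record["ndc"]):
        problems.append("malformed NDC")                 # validity check
    if record.get("status") and record["status"] not in VALID_STATUSES:
        problems.append("unknown status")                # accuracy check
    start, end = record.get("start_date"), record.get("end_date")
    if start and end and end < start:
        problems.append("end_date precedes start_date")  # consistency check
    return problems

record = {"drug_name": "propofol", "ndc": "0409-4699-30",
          "start_date": date(2010, 1, 1), "status": "resolvedX"}
print(check_record(record))  # → ['unknown status']
```

Checks like these run on every insert or update, rather than in a once-a-year manual review, which is the distinction the internal control standards draw.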
FDA’s drug shortages strategic plan states that the agency will explore using risk-based approaches to identify early warning signs of problems that could lead to production disruptions. However, FDA currently uses data on an ad hoc basis to respond to specific shortages as opposed to using the data to identify trends or patterns that may help it predict and possibly prevent shortages. According to FDA officials, other than producing the annual report required by FDASIA, the agency has not established regular schedules for generating reports in the database and is not currently using the database to conduct regular trend analyses. By only using the database to respond to individual shortages as they occur, FDA is missing opportunities to use the data proactively to enhance the agency’s ability to prevent and mitigate drug shortages. FDA has made progress in preventing potential drug shortages and responding to actual shortages since we issued our last report in 2011. In part, this progress can be attributed to the new FDASIA requirement that manufacturers provide FDA with information about potential or current shortages of drugs that are life supporting, life sustaining, or used to treat debilitating health issues. This additional information has improved the agency’s ability to act more quickly when it learns of a potential shortage. Yet, the number of shortages remains high, despite the fact that FDA has taken steps to prevent and mitigate shortages, such as expediting application reviews and inspections, exercising enforcement discretion in appropriate cases, and helping manufacturers respond to quality problems. Many shortages are prolonged, with some spanning multiple years. As a result, patients and providers continue to struggle as essential and life-saving medications—such as anti-infective, nutritive, and cardiovascular drugs—remain in short supply. These shortages complicate patient care and may lead to adverse outcomes with serious consequences. 
Although drug shortages have a range of potential underlying causes, FDA has made important strides in responding to some immediate causes. However, some of the causes identified in our literature review and conversations with manufacturers are beyond the agency’s authority, as it does not have control over private companies’ business decisions. For example, FDA is unable to require manufacturers to start producing or continue producing drugs, or to build redundant manufacturing capacity, regardless of the severity of a shortage. Nonetheless, FDA can take steps to maximize the agency’s ability to use the information at its disposal to address drug shortages. We continue to believe in the importance of our prior recommendation that FDA should develop an information system that would facilitate the agency’s response to shortages. FDA took the first step in implementing this recommendation by creating a database on drug shortages. However, a key component of any system is assuring the reliability of the data. Our current work shows that the agency lacks adequate policies and procedures governing the use of its database, as well as sufficient checks to ensure the data’s reliability; both gaps are inconsistent with internal control standards for the federal government. These shortcomings could hinder FDA’s efforts to understand the causes of shortages as well as undermine its efforts to prevent them from occurring. Additionally, FDA’s ability to manage risk-based decisions, including when to use regulatory discretion, and proactively help prevent and resolve shortages may be hindered by its lack of routine analysis of the data it collects. FDA may be missing an opportunity to identify causes of shortages, risks for shortages, and patterns in events that may be early indicators of shortages for certain types of manufacturers, drugs, or therapeutic classes. 
Though FDA has taken important steps to better prevent and address shortages, the large number of potential shortages itself suggests a market still at risk of continuing supply disruptions. To enhance its oversight of drug shortages, particularly as the agency fine-tunes the manner in which it gathers data on shortages and transitions from its database to a more robust system, we recommend that the Commissioner of FDA take the following two actions: develop policies and procedures for the use of the existing drug shortages database (and, ultimately, the new drug shortages information system) to ensure staff enter information into the database in a consistent manner and to ensure the accuracy of the information in the database; and conduct periodic analyses using the existing drug shortages database (and, eventually, the new drug shortages information system) to routinely and systematically assess drug shortage information, and use this information proactively to identify risk factors for potential drug shortages early, thereby potentially helping FDA to recognize trends, clarify causes, and resolve problems before drugs go into short supply. We provided a draft of this report for comment to HHS, the Department of Justice, and the Federal Trade Commission. We also provided excerpts of this report for comment to the Department of Defense, the Department of Homeland Security (for review of the U.S. Coast Guard), the Department of Veterans Affairs, and UUDIS. We received written comments from HHS, which are reproduced in appendix V. We also received technical comments from HHS, the Department of Defense, the Federal Trade Commission, and UUDIS, which we incorporated as appropriate. The Department of Homeland Security, the Department of Justice, and the Department of Veterans Affairs did not have any comments based on their review. 
In its comments, HHS stated that drug shortages remain a significant public health issue and emphasized its commitment to preventing new shortages and resolving those that are already ongoing. HHS agreed with our recommendations to enhance its oversight by developing policies and procedures for its drug shortages database and by conducting periodic analyses of these data to identify drug shortage risk factors. Regarding our first recommendation, HHS said it agrees that policies and procedures for data entry are important to help assure the timely, accurate, and consistent inputting of data into its drug shortage database. Regarding our second recommendation, HHS agreed that it could make better use of its drug shortage data to identify trends, clarify causes of shortages, and resolve problems before drugs go into short supply. However, HHS noted that there are many factors that can trigger or exacerbate a shortage and that it lacks some relevant data, such as detailed information on manufacturing capability, to create a comprehensive forecasting system for drug shortages. We acknowledge that the agency’s access to certain information is limited, but believe that routine analysis of available data could nonetheless reveal some early indicators of shortages. Although HHS agreed with our recommendations, it took issue with our use of UUDIS data concerning the persistence of recent shortages. HHS said that these data may overstate the number of shortages that persist because UUDIS considers a shortage to be ongoing unless all NDCs for a given product are available, even if some manufacturers that currently produce the drug have increased production enough to meet all demand. We recognize that there are differences in the way UUDIS and FDA define, and therefore count, shortages. Our report notes that FDA considers a shortage to be resolved when the total supply of the drug and any pharmaceutical equivalents is sufficient to meet demand in the market overall. 
UUDIS defines shortages more broadly, focusing on supply issues that affect how pharmacies prepare and dispense a product or that influence patient care when prescribers must choose an alternative therapy because of supply issues. According to a UUDIS official, tracking all NDCs for all manufacturers is important for providers because substituting one package size for another may create a safety issue. To enhance clarity, we have provided additional detail in our report to describe UUDIS’s methods for defining and tracking shortages. Moreover, it is important to note that we used UUDIS data because FDA was unable to provide data on shortages that would allow for an analysis of trends. As we have previously reported, until FDA established a database containing shortage information in 2011, the agency did not systematically maintain data on shortages. In the absence of FDA data, the data from UUDIS were the only data that we could identify that would allow for a meaningful analysis of drug shortages over time. We are sending copies of this report to the Secretary of the Department of Health and Human Services, the Attorney General, the Chairman of the Federal Trade Commission, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. As part of our report objectives, we reviewed the trends in recent drug shortages and examined the causes of drug shortages. This appendix provides further detail on our methods. 
To review the trends in recent drug shortages, we identified the number of drugs that were in short supply from January 1, 2007, through June 30, 2013, and examined the characteristics of drugs that were reported to be in shortage from June 1, 2011, through June 30, 2013. Specifically, to review trends in recent drug shortages that occurred from January 1, 2007, through June 30, 2013, we analyzed data from the University of Utah Drug Information Service (UUDIS), which were the most recent data available at the time we did our work. These data are generally regarded as the most comprehensive and reliable source of drug shortage information for the time period we reviewed and are what we used in preparing our 2011 report. We focused our analysis on shortages of prescription drugs. We reviewed UUDIS’s drug shortage data to identify (1) the total number of new shortages reported each year and (2) the total number of active shortages each year. To calculate the total number of new shortages reported each year, we counted shortages only for the year in which UUDIS was first notified and not in any subsequent years during which the shortage may have been active. To calculate the number of active shortages in each year, we included both shortages reported that year and any shortages that had started in a prior year, but were still ongoing during the year. We also identified the duration of shortages reported from January 1, 2007, through June 30, 2013, and the number of drugs that had been in short supply on more than one occasion. To identify drugs that had been in short supply more than once, we grouped together shortages of clinically interchangeable versions of a drug that were administered through the same route, such as injection. We confirmed our grouping of these shortages with a knowledgeable pharmacist from UUDIS. 
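The two counting rules described above can be sketched in a few lines of Python: a shortage counts as "new" only in the year UUDIS was first notified, while it counts as "active" in every year from first report through resolution. The records below are invented for illustration, not actual UUDIS data.

```python
# Sketch of the two counts: new shortages per year vs. active shortages
# per year. The sample records are illustrative, not UUDIS data.
from collections import Counter

shortages = [
    {"drug": "A", "reported": 2007, "resolved": 2007},
    {"drug": "B", "reported": 2007, "resolved": 2009},
    {"drug": "C", "reported": 2008, "resolved": None},  # still ongoing
]

# New shortages: attributed only to the year of first notification.
new_per_year = Counter(s["reported"] for s in shortages)

def active_in(year):
    """A shortage is active in `year` if it was reported by then and
    had not yet been resolved before that year."""
    return sum(
        1 for s in shortages
        if s["reported"] <= year
        and (s["resolved"] is None or s["resolved"] >= year)
    )

print(dict(new_per_year))                          # {2007: 2, 2008: 1}
print({y: active_in(y) for y in (2007, 2008, 2009)})
```

Note how shortage B is new only in 2007 but active in 2007 through 2009, which is why the count of active shortages in a year can exceed the count of new shortages.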
To analyze the characteristics of shortages, we reviewed 219 drug shortages that were newly reported between June 1, 2011, and June 30, 2013, and that UUDIS identified as critical. These critical shortages were a subset of the total number of shortages reported during this time. Specifically, these 219 shortages represented 57 percent of the 382 shortages reported between June 1, 2011, and June 30, 2013. UUDIS identified these shortages as critical because alternative medications were unavailable, the shortages affected multiple manufacturers, or it received multiple reports from different institutions. For these critical shortages, we obtained drug shortage bulletins created by UUDIS, which contain the national drug codes (NDC) associated with each shortage. Using these NDCs, we analyzed Red Book data to determine the product types, routes of administration, and therapeutic classes of the critical shortages. We reviewed all UUDIS data for reasonableness, outliers, and consistency, and determined that the data were sufficiently reliable for our purposes. To examine the causes of recent drug shortages, we conducted a structured search of research databases using various combinations of relevant search terms including, “drug”, “shortage”, “supply”, “medication”, and “generic” to identify any literature published from January 1, 2003, through June 30, 2013, that reported on the causes of drug shortages. We then reviewed the abstracts for 714 articles and the full-text of 176 of those articles to determine whether they addressed the causes of drug shortages and met our inclusion criteria. Our inclusion criteria included journal articles and government publications, as well as policy briefs or papers, in which the causes of drug shortages were examined through the presentation of original research. 
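The NDC-keyed characterization described above amounts to a simple join of the bulletin NDCs against a product reference table. In the sketch below, the field names mimic the kinds of attributes described (product type, route of administration, therapeutic class) but are not Red Book's actual schema, and the NDCs are made up.

```python
# Illustrative join of shortage-bulletin NDCs against product reference
# data. Field names and NDCs are hypothetical, not Red Book's schema.
from collections import Counter

product_data = {
    "00409-4699-30": {"type": "generic", "route": "injection",
                      "ther_class": "anesthetic"},
    "00703-4505-01": {"type": "generic", "route": "injection",
                      "ther_class": "anti-infective"},
}

bulletin_ndcs = ["00409-4699-30", "00703-4505-01", "99999-0000-01"]

# Split the bulletin NDCs into those we can characterize and those
# absent from the reference data.
matched = [product_data[n] for n in bulletin_ndcs if n in product_data]
unmatched = [n for n in bulletin_ndcs if n not in product_data]

routes = Counter(p["route"] for p in matched)
print(routes)     # Counter({'injection': 2})
print(unmatched)  # ['99999-0000-01']
```

Tallying the matched records by route, therapeutic class, or product type yields the kind of characteristic breakdowns reported for the 219 critical shortages.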
Because there is not a large volume of peer-reviewed literature that incorporates original research, we also included articles that provided an in-depth discussion of the causes of drug shortages. However, we excluded editorials and news wire articles from our review. Finally, we included directly relevant studies to which we were referred by stakeholders, but which did not appear in our initial search. Based on these steps, we identified 20 articles that were published between March 1, 2005, and March 31, 2013, and then summarized the causes of shortages on which these articles reported. While our search criteria were for shortages of all drug types, the majority of the articles we identified were focused on generic sterile injectables, which have frequently been in shortage in recent years. For the purposes of reporting on our literature review, we identified and summarized the causes frequently discussed in the literature and did not list all topics mentioned in each article we reviewed. Some causes that were mentioned only sparingly in the literature were not included in our review. American Society of Health-System Pharmacists. “ASHP Guidelines on Managing Drug Product Shortages in Hospitals and Health Systems.” American Journal of Health-System Pharmacy, vol. 66, no. 15 (2009): 1399-1406. Balkhi, B., L. Araujo-Lama, E. Seoane-Vazquez, R. Rodriguez-Monguio, S. L. Szeinbach, and E. R. Fox. “Shortages of Systemic Antibiotics in the U.S.A.: How Long Can We Wait?” Journal of Pharmaceutical Health Services Research, vol. 4, no. 1 (2013): 13-17. Born, K. “Time and Money: An Analysis of the Legislative Efforts to Address the Prescription Drug Shortage Crisis in America.” The Journal of Legal Medicine, vol. 33, no. 2 (2012): 235-251. Department of Health and Human Services. Office of the Assistant Secretary for Planning and Evaluation, Economic Analysis of the Causes of Drug Shortages. Washington, D.C.: October 2011. Department of Health and Human Services. 
Food and Drug Administration. A Review of FDA’s Approach to Medical Product Shortages. Silver Spring, Md.: October 2011. Dorsey, E. R., J. P. Thompson, E. J. Dayoub, B. George, L. A. Saubermann, and R. G. Holloway. “Selegiline Shortage: Causes and Costs of a Generic Drug Shortage.” Neurology, vol. 73, no. 3 (2009): 213-217. Gatesman, M. L., and T. J. Smith. “The Shortage of Essential Chemotherapy Drugs in the United States.” The New England Journal of Medicine, vol. 365, no. 18 (2011): 1653-1655. Gehrett, B. K. “A Prescription for Drug Shortages.” JAMA: The Journal of the American Medical Association, vol. 307, no. 2 (2012): 153-154. Graham, John R. The Shortage of Generic Sterile Injectable Drugs: Diagnosis and Solutions. Midland, Mich.: Mackinac Center for Public Policy, June 2012. Griffith, M. M., A. E. Gross, S. H. Sutton, M. K. Bolon, J. S. Esterly, J. A. Patel, M. J. Postelnick, T. R. Zembower, and M. H. Scheetz. “The Impact of Anti-infective Drug Shortages on Hospitals in the United States: Trends and Causes.” Clinical Infectious Diseases, vol. 54, no. 5 (2012): 684-691. Hoffman, S. “The Drugs Stop Here: A Public Framework to Address the Drug Shortage Crisis.” Food and Drug Law Journal, vol. 67, no. 1 (2012): 1-22. Jensen, V., R. Kimzey, and J. Saliba. “An Overview of the FDA’s Drug Shortage Program.” Pharmacy and Therapeutics, vol. 30, no. 3 (2005): 174-175 & 177. Jensen, V., and B. A. Rappaport. “The Reality of Drug Shortages—The Case of the Injectable Agent Propofol.” The New England Journal of Medicine, vol. 363, no. 9 (2010): 806-807. Johnson, P. J. “The Ongoing Drug Shortage Problem Affecting the NICU.” Neonatal Network, vol. 31, no. 5 (2012): 323-327. Kweder, S. L., and S. Dill. “Drug Shortages: The Cycle of Quantity and Quality.” Clinical Pharmacology & Therapeutics, vol. 93, no. 3 (2013): 245-251. Schweitzer, S. O. “How the U.S. Food and Drug Administration Can Solve the Prescription Drug Shortage Problem.” American Journal of Public Health, vol. 
103, no. 5 (2013): e10-e14. U.S. House of Representatives. Committee on Oversight and Government Reform. Staff Report. “FDA’s Contribution to the Drug Shortage Crisis.” (Washington, D.C.: June 2012). Ventola, C. L. “The Drug Shortage Crisis in the United States: Causes, Impact, and Management Strategies.” Pharmacy and Therapeutics, vol. 36, no. 11 (2011): 740-742 & 749-757. Woodcock, J., and M. Wosinska. “Economic and Technological Drivers of Generic Sterile Injectable Drug Shortages.” Clinical Pharmacology & Therapeutics, vol. 93, no. 2 (2013): 170-176. Yurukoglu, A. “Medicare Reimbursements and Shortages of Sterile Injectable Pharmaceuticals.” National Bureau of Economic Research Working Paper No. 17987 (2012). We also interviewed, and in some cases obtained written responses from, manufacturers and group purchasing organizations (GPOs) regarding the causes of drug shortages identified through our literature review because the reported causes we identified were related to the role that these stakeholders have in the drug supply chain. We interviewed three leading national associations representing drug manufacturers, both brand-name and generic, and five generic sterile injectable manufacturers. Specifically, we selected the top three manufacturers of generic sterile injectables between 2010 and 2012. We also selected two additional manufacturers, which were among the manufacturers associated with the highest number of shortages, according to a 2011 report by the IMS Institute for Healthcare Informatics. We provided manufacturers and manufacturer associations with a list of potential causes based on our review of the literature and asked them to comment on each cause either through interviews or in writing. Finally, we selected the three largest GPOs based on their self-reported purchasing volume in fiscal year 2011 and asked each to comment on causes in writing. 
We also analyzed FDA data on the reported causes of shortages for all shortages that it was notified about from January 1, 2011, through June 30, 2013. All data came from the database that FDA has developed to track shortages and reflect information reported by manufacturers to FDA that is subsequently analyzed and categorized by the agency. FDA defines a shortage as a situation in which the total supply of a drug and any pharmaceutical equivalents is inadequate to meet demand. FDA’s definition of a shortage differs from UUDIS’s, and UUDIS also tracks shortages that do not meet FDA’s definition of a shortage. For example, according to FDA officials, UUDIS will track shortages that only affect one manufacturer, even if other manufacturers of the same drug have supply available. FDA, however, will not consider such a situation to be a shortage if the other manufacturers that can supply the drug can meet national demand. We interviewed FDA Drug Shortages Staff about the data and reviewed the data for reasonableness, outliers, and consistency, and for our purposes, we determined that the data were sufficiently reliable.
2. Novation, LLC
3. Premier, Inc.
In the event of a drug shortage, providers who are unable to obtain drugs from their regular wholesale distributors may resort to purchasing drugs through distribution channels that were not authorized by the manufacturer, referred to as the gray market. Gray market suppliers typically obtain small quantities of a drug that is in short supply and offer it for purchase at an inflated price. Because the origin of gray market drugs may be unknown, there is no guarantee of the drug’s pedigree or assurance that it was stored and transported appropriately, potentially putting patients at risk. This appendix describes steps federal agencies have taken in response to activities associated with the gray market for shortage drugs. 
To identify steps that federal agencies have taken, we interviewed officials from FDA, the Department of Justice (DOJ), and the Federal Trade Commission (FTC); reviewed federal laws and regulations, including an Executive Order on reducing prescription drug shortages issued on October 31, 2011; and examined agency documents. Among other things, the October 31, 2011, drug shortages Executive Order directed FDA to communicate to DOJ any findings by FDA that shortages have led market participants to stockpile shortage drugs or sell them at exorbitant prices. The Executive Order also directed DOJ to determine whether these activities violate federal law, and if so, to take appropriate enforcement actions. 21 U.S.C. § 355e. Counterfeit drugs, which are defined in law at 21 U.S.C. § 321(g)(2), include, for example, those sold under a product name without proper authorization—where the drug is mislabeled in a way that suggests that it is the authentic and approved product—as well as unauthorized generic versions of FDA-approved drugs that mimic trademarked elements of such drugs. Diverted drugs are legitimate drugs that are illegally bought, sold, or otherwise circulated outside of the legal distribution system that has been established to ensure safety and quality. Diversion can involve such activities as illegal sales of prescription drugs by physicians, patients, or pharmacists; prescription forgery; or pharmacy theft. FDA may take action against counterfeit and diverted drugs that violate FDCA requirements. DOJ, often in consultation with FDA, may bring civil and criminal actions for such violations. FTC and DOJ’s Antitrust Division are responsible for enforcing federal antitrust laws, which are designed to preserve and protect market competition. These laws include the Sherman Act, the Federal Trade Commission Act, and the Clayton Act. The Sherman Act, enforced by DOJ, prohibits monopolization and restraints of trade, and civil and criminal penalties may be imposed for violations of the act. 
The Federal Trade Commission Act, enforced by FTC, bans unfair methods of competition and unfair or deceptive acts or practices. For example, collusion by drug manufacturers to set prices may violate both the Sherman Act and the Federal Trade Commission Act. The Clayton Act, jointly enforced by DOJ and FTC, regulates mergers and acquisitions and prohibits those that may substantially lessen competition or create a monopoly and are, therefore, likely to increase prices for consumers. The Federal Trade Commission Act and Clayton Act are civil statutes that do not carry criminal penalties. FTC only has the authority to investigate civil antitrust cases. If the case is criminal in nature, FTC refers it to DOJ. DOJ also enforces other federal criminal laws that may apply to gray market activities. This includes, for example, the mail fraud statute, which makes it a crime to use the U.S. mail to commit a fraud, such as facilitating the sale of a shortage drug with a fake pedigree through the U.S. mail. Consistent with the October 31, 2011, Executive Order on drug shortages, three federal agencies—FDA, DOJ, and FTC—review information concerning possible gray market sales of shortage drugs from a number of sources and have taken other steps to respond to relevant directives contained in the order. Yet officials from all three federal agencies told us that their authorities in relation to the gray market are limited. They explained that the selling of shortage drugs by suppliers not authorized by the manufacturer alone, even at exorbitant prices, does not itself violate federal law. Though gray market sales may violate agreements between manufacturers and wholesale distributors, such sales may not violate federal law unless they are made outside the legal distribution system. As a result, there have been no prosecutions or enforcement actions taken by federal agencies solely on the basis of gray market activities. 
FDA has compiled gray market solicitations into quarterly reports that it shares with DOJ as part of its response to the Executive Order’s directive to communicate findings that shortages have led to the stockpiling or sale of shortage drugs at exorbitant prices. From January 2012—when FDA first began providing this information to DOJ—through October 2013, FDA shared information on solicitations from 26 different wholesale distributors. According to FDA officials, these solicitations typically originate as e-mails to providers containing advertisements that list the drugs for sale and, in some cases, the prices, which the providers then forward to FDA. FDA officials said that some of the gray market solicitations were for sterile injectables in shortage, including drugs related to cancer treatment, emergency medicine, antibiotics, and nutritive products. For example, one solicitation stated that a wholesale distributor was offering an intravenous multi-vitamin for $785, when the average wholesale price for that same vitamin was $8.61. FDA officials told us that they review the solicitations to determine whether they violate the FDCA, such as a wholesale distributor making false claims about a drug or diverting a drug outside the legal distribution system. FDA has opened a number of investigations in relation to the solicitations to examine whether counterfeiting or diversion had occurred, but did not identify any illegal activity. For example, when the anesthetic propofol was in shortage, FDA did not object to the temporary importation of an unapproved version of the drug, but limited distribution to the manufacturer of the drug. However, in January 2010, FDA initiated two investigations related to complaints that the imported drug was being distributed by wholesale distributors, rather than the manufacturer. In both cases, FDA could not substantiate a criminal violation, so it closed the investigations. 
FDA officials told us that the FDCA does not prohibit hoarding or stockpiling of shortage drugs or regulate drug pricing. As a result, as of December 2013, FDA had not taken any enforcement action related to the gray market solicitations it reviewed, but had provided information from the solicitations to DOJ. Officials from DOJ told us that, as required by the Executive Order, they review FDA’s quarterly reports for information that could indicate the drug listed was diverted for illegal purposes. For example, DOJ considers whether there is evidence of use of fake pedigrees in violation of the FDCA. DOJ officials noted that the solicitations listed in the quarterly reports sometimes indicate that a drug is being sold for a higher-than-normal price; however, selling drugs at elevated prices alone is not illegal. Such sales may be illegal, for example, if the drugs are bought and sold through diversion from the legal distribution system. DOJ officials told us that as of November 2013, DOJ had not launched any investigations or taken any enforcement actions based on the solicitations listed in the quarterly reports, because, according to DOJ officials, the reports have not indicated that any solicitation was unlawful. According to DOJ officials, based on information obtained separately from the quarterly reports, the agency has launched at least one investigation into activity in which there are indications of the illegal sales of shortage drugs through diversion. FTC staff told us they investigate complaints about the gray market received from the public, as well as complaints referred to them by FDA and Congress, to determine whether an antitrust investigation is warranted. FTC receives complaints from the public through a toll-free telephone number, an e-mail address, and through the U.S.
mail. FTC staff told us that they review these complaints to determine whether there is enough information, such as evidence that the company is engaging in any coordinated antitrust behavior in violation of the Federal Trade Commission Act or the Clayton Act, to warrant an investigation. According to FTC staff, as of November 2013 they had not launched any full-phase investigations or taken any enforcement actions related to the pharmaceutical gray market. Though not a full-phase investigation, FTC staff told us that in the fall of 2011 they conducted an initial investigation to determine whether wholesale distributors or other parties were engaged in any conduct that violated federal antitrust laws, such as colluding to hoard drugs in shortage and then selling the drugs at higher prices. However, they were unable to find any evidence that widespread hoarding was occurring. Instead, they found cases where a single wholesale distributor acting alone would buy a few vials of a shortage drug and then sell them at a higher price—a practice that is not illegal. In addition, in response to the Executive Order, federal agencies have worked together in an attempt to respond to gray market activities. In 2012, FDA, DOJ, FTC, and the National Association of Attorneys General convened three meetings to discuss the legal authorities that might apply to the gray market and the activities that each was undertaking related to this issue. Officials told us that in the future, they will meet on an “as needed” basis. FDA officials told us that the agency is considering whether additional legal authorities to help address the pharmaceutical gray market and secure the drug supply chain would be beneficial. Such authorities may include registration and reporting requirements for wholesale distributors, potential prohibitions on wholesale distributors purchasing products from pharmacies, and pedigree and track-and-trace options.
FDA officials also noted that gray markets do not cause shortages, but are a symptom of such shortages. To the extent that FDA and other stakeholders address drug shortages, opportunities for gray markets to develop will become more limited. DOJ officials told us that DOJ does not have the authority to address drug pricing and stockpiling of drugs per se, but noted that the agency does have the authority to prosecute suppliers operating outside of the legal distribution system, regardless of the drug’s shortage status or price. Officials did not take a position as to whether additional authority over drug stockpiling and exorbitant pricing is necessary. FTC staff told us that they do not believe additional FTC authority in relation to the pharmaceutical gray market is necessary. They stated that the FTC’s existing enforcement authority would be adequate to take action in relation to the inflated pricing of a shortage drug if such pricing was a consequence of anticompetitive conduct. If, however, the inflated pricing resulted from factors other than anticompetitive conduct, assessment of such issues would be outside the scope of the FTC’s competition expertise. Some have suggested that incentivizing drug manufacturers to address the purported causes of drug shortages could alleviate or prevent such shortages. Proposed incentives include those related to regulatory activities undertaken by FDA or financial incentives that the federal government could provide to manufacturers. Some incentives target immediate causes of drug shortages, such as by rewarding manufacturers for a strong quality record, thereby reducing the likelihood of quality-related supply disruptions, or by increasing redundancy in drug supply chains. Other proposed incentives target underlying causes, such as by increasing manufacturer revenue in order to encourage manufacturers to remain in the market and continue investments in production facilities. 
In February 2013, FDA published a notice in the Federal Register with a request for comment about its drug shortages task force and strategic plan. Two of FDA’s questions related to incentives: 1c. Are there incentives that FDA can provide to encourage manufacturers to establish and maintain high-quality manufacturing practices, to develop redundancy in manufacturing operations, to expand capacity, and/or to create other conditions to prevent or mitigate shortages? and 2. In our work to prevent shortages of drugs and biological products, FDA regularly engages with other U.S. Government Agencies. Are there incentives these Agencies can provide, separately or in partnership with FDA, to prevent shortages? 78 Fed. Reg. 9928 (Feb. 12, 2013). We presented the proposed incentives discussed below to manufacturer and association representatives and to FDA for comment. We obtained comments from three leading national associations representing drug manufacturers, both brand and generic, and five generic sterile injectable manufacturers. For one incentive related to exempting certain products from Medicaid rebates and 340B discounts, we also obtained comments from relevant stakeholder groups whose members would be affected by this exemption. Expedited and streamlined reviews: Most of the comments submitted by manufacturers in response to FDA’s request for comment proposed expediting or streamlining FDA review of regulatory submissions. Submissions that could be expedited included application supplements related to the approval of redundant manufacturing sites, or new drug applications (NDA) or abbreviated new drug applications (ANDA) from manufacturers with a record of quality manufacturing and an adequate risk management plan to prevent shortages. According to FDA, as of June 30, 2013, there were more than 900 manufacturing supplements to NDAs, more than 5,700 manufacturing and chemistry supplements to ANDAs, and more than 2,700 ANDAs pending review.
Therefore, proponents of expediting FDA review of regulatory submissions—including applications and supplements—note that increasing the speed of such reviews could provide an incentive to manufacturers to establish redundant manufacturing capacity to which production could be shifted in the event of manufacturing problems at a primary production facility, thereby avoiding a shortage. Further, by rewarding manufacturers with a history of quality manufacturing, expediting reviews could provide an incentive to ensure quality-related production problems—and ensuing shortages—do not occur in the first place. While representatives of the stakeholders we interviewed were generally supportive of this potential incentive, they also identified some limitations. One stakeholder cautioned that the resource-intensive nature of building in redundancy means it is a long-term solution, the implementation of which could hinder efforts to address current shortages. Another stakeholder noted that maintaining redundant manufacturing capacity is expensive and that expedited review alone may not provide enough of an incentive to establish such capacity. FDA officials noted that expediting reviews of regulatory submissions is a tool the agency already uses to address shortages. However, FDA officials cautioned that expanding the pool of submissions eligible for expedited review, without regard to the risk of shortage, could slow down review of all submissions and make expediting reviews meaningless. Representatives from one stakeholder echoed this concern, noting that, though faster than standard review times, in their experience there is already a backlog for review of supplements that have been expedited to address current shortages. Representatives from this stakeholder noted that without additional FDA resources devoted to the review of applications and supplements, making additional regulatory submissions eligible for expedited review would be problematic.
FDA officials also noted that, although redundancy can help prevent a shortage if production stops, many shortages are the result of production disruptions driven by failures in manufacturing quality systems. Therefore, FDA officials told us that it is more important to prioritize incentives to improve manufacturing quality systems over those that expand capacity. To that end, as part of its drug shortages strategic plan goal to develop long-term prevention strategies in order to prevent shortages, FDA states that it will continue to expedite reviews to mitigate shortages, including the review of submissions for facility upgrades to improve quality. Flexibility in meeting regulatory requirements: A few manufacturers proposed that FDA could allow for flexibility in meeting regulatory requirements for manufacturers with a strong history of compliance with current good manufacturing practice regulations or robust risk management plans to prevent shortages. For example, they suggested that FDA could reduce the level of agency review for change notifications, such as manufacturing site transfers, if the manufacturer had a history of production without quality issues. Supporters commented that, because it would allow manufacturers to implement manufacturing changes more quickly, a reduced level of agency review could provide an incentive for quality production. Incentivizing quality production could thus reduce the likelihood of a quality-related supply disruption and shortage. One stakeholder generally supported this approach, as long as all manufacturers were still held to the same standards and any change in requirements was accompanied by FDA guidance on the new approach. FDA officials told us that the agency has issued guidance documents to help identify types of changes after an application is approved that represent a lower risk. They added that the agency is currently exploring new approaches to the review of application products.
To ensure that drugs are produced in conformance with federal statutes and regulations, including good manufacturing practice regulations, FDA may inspect the establishments where drugs are manufactured. We previously reported that FDA inspected domestic drug manufacturing establishments about once every 2.5 years and generally inspected foreign manufacturing establishments much less frequently. In part, this difference in frequency of inspection was due to the fact that, at the time, FDA was required to inspect every 2 years those domestic establishments that manufacture drugs in the United States, but there was no comparable requirement for inspecting foreign establishments. GAO, Drug Safety: FDA Has Conducted More Foreign Inspections and Begun to Improve Its Information on Foreign Establishments, but More Progress is Needed, GAO-10-961 (Washington, D.C.: Sept. 30, 2010). Decreased inspection frequency for manufacturers with strong compliance records was also proposed as an incentive. Inspections can be burdensome for manufacturers, both in terms of the resources needed to respond to issues raised by the FDA investigator conducting the inspection and in terms of production disruptions caused by the inspection itself. As it could reduce costs and disruptions for the manufacturer, if carefully designed so that manufacturers would still be inspected with some frequency, increasing the interval between inspections may provide an additional incentive for compliance with good manufacturing practices, which could reduce the likelihood of manufacturing quality issues and resultant shortages. One stakeholder commented that decreasing inspection frequency could be an effective incentive in the long term, but at present, frequent inspections set a high bar for manufacturers in this industry. FDA officials noted that the agency already considers compliance history as a major factor when determining the frequency of inspection of a manufacturing site. Further, in response to new Food and Drug Administration Safety and Innovation Act (FDASIA) authority, the agency is in the process of establishing a risk-based inspection schedule for all establishments.
FDA officials told us that they are considering incorporating additional factors, such as process performance metrics and shortage performance, into their selection model. Transparency regarding compliance status of manufacturing sites: In documents submitted in response to FDA’s request for comment, a few manufacturers proposed increasing the transparency of manufacturing establishment compliance status, such as by assigning site scores or an FDA stamp of approval. Such information could help those engaged in drug purchasing and drug pricing negotiations—including providers, group purchasing organizations, insurers, and consumers—make informed purchasing and pricing decisions. Proponents of this approach suggest that FDA’s provision of such quality metrics could make additional information publicly available for consideration in making purchasing and pricing decisions, thereby giving manufacturers an additional incentive for the highest quality production and making quality-related supply disruptions less likely to occur. Representatives from the stakeholders we spoke with were generally skeptical of this approach. One noted that providers and group purchasing organizations—which are the primary decision makers for sterile injectable purchases, where shortages have recently been concentrated—assume that quality is built into any FDA-approved drug and may not be able to readily interpret quality metrics. Representatives from one stakeholder told us that FDA has spent extensive time and effort educating prescribers and the public that there is one quality standard for all FDA-approved drugs and that, from this stakeholder’s perspective, further differentiating quality with ratings would diminish confidence in the nation’s drug supply and lead to confusion and mistrust. Representatives from another stakeholder expressed skepticism that the market would respond to such information by allowing for higher prices.
Likewise, representatives from multiple stakeholders noted that information about FDA inspections of manufacturing establishments and warning letters is already available online and is presumably already used when making purchasing and pricing decisions. FDA officials confirmed that they currently provide information on the compliance status of manufacturing sites on the agency’s website and added that they are looking for new ways to provide transparency in this area. They cautioned that there are significant questions and issues regarding how to provide more transparent compliance information to the public, such as the fact that FDA cannot disclose either confidential commercial information or trade secret information. Nevertheless, as part of its drug shortages strategic plan goal to develop long-term strategies in order to prevent shortages, FDA states that it is examining the broader use of quality metrics to assist in the evaluation of manufacturing quality. However, the plan also notes that although FDA can make quality information available to the public, including inspection outcomes, recalls, and shortages, buyers ultimately decide whether they will use these data when making purchasing decisions. Guaranteed purchase: A few manufacturers proposed that the federal government guarantee the purchase of a given volume of certain drugs. This would allow manufacturers to ensure capacity for a given production volume regardless of whether there is sufficient market demand. Representatives from one stakeholder that supported this proposal told us that such an incentive might bring more predictability to both the volume of product made and product margins. In turn, this guarantee could create some predictability in a manufacturer’s ability to invest in its facilities, resulting in continued high quality and compliant production.
One stakeholder noted that such an incentive may be useful in terms of ensuring the availability of future capacity, but at present would not be an effective tool to address shortages, as there is simply no excess capacity available even if the government could guarantee purchase volume. FDA officials noted that establishing such a program would be challenging. For example, identifying a list of drugs eligible for guaranteed purchase would be difficult, because it is hard to predict which drugs are vulnerable to shortage in advance and the particular drugs at risk of shortage may change rapidly. Reduction in fees: Both recently introduced federal legislation and some manufacturers proposed reducing manufacturer fees to help alleviate or prevent drug shortages. As introduced in the 112th Congress, H.R. 6611 proposed exempting certain drugs from the annual branded prescription drug fee established by the Patient Protection and Affordable Care Act in order to provide an incentive for brand-name drug manufacturers to enter the market to produce a drug in short supply. Proponents of this approach state that a reduction in the annual branded prescription drug fee could induce brand-name companies to re-enter the market. One stakeholder also noted that brand-name manufacturers may have more idle capacity than generic manufacturers, so encouraging them to re-enter a market could be effective in addressing shortages. A few manufacturers proposed a reduction in or waiver of various user fees if the manufacturer demonstrates that it has built redundant capacity into its manufacturing plan. Proponents of such fee reductions noted that building redundancy into a manufacturing plan is resource intensive and that a fee reduction to help offset these costs could incentivize manufacturers to build redundancy, which could help prevent supply disruptions.
At the same time, one stakeholder that supported this approach noted that, though user fees add up, reducing such fees would not make a large enough economic difference to affect a manufacturer’s decision to enter or exit a market. FDA officials first noted that any changes to the user fee structure would have to be negotiated with industry and then enacted by Congress. They stated that, although the agency is open to using user fees as a way to prevent shortages and encourage manufacturers to help address a shortage that does arise, there are some uncertainties about doing this. For example, definitions of redundancy are unclear, and mechanisms to ensure that redundant capacity is not repurposed would need to be developed and enforced. Finally, FDA officials also cautioned that reducing or waiving fees for certain manufacturers could increase fees on other manufacturers. This is because the total amount of user fees FDA collects is fixed in statute and the annual fees assessed against individual manufacturers are determined by dividing the fixed statutory amount by the forecasted number of fee-paying entities. As a result, elimination of, or a reduction in, fees for some parties would effectively transfer these costs to the remaining fee-paying entities. Tax incentives: Some manufacturers proposed tax credits targeted to manufacturers that invest in redundant manufacturing capacity, in order to offset the costs of such investment. Multiple stakeholders noted that, given the significant costs associated with new manufacturing establishments, such an incentive would only be effective for manufacturers that already operated such establishments. Representatives from one stakeholder noted that the time, resources, and approvals to create a new manufacturing site are likely to take more than 3 years, with associated costs totaling tens of millions of dollars.
Therefore, tax credits are a strong incentive for companies to re-invest in existing infrastructure, not necessarily to create new infrastructure. FDA officials told us that modernizing existing facilities to prevent quality and safety issues that lead to shortages can go a long way to prevent shortages even in a system with little redundancy. They noted that such incentives would need to encourage manufacturers to purchase new equipment, renovate facilities, and implement new manufacturing processes and technologies. Changes in drug pricing or reimbursement: As introduced in the 112th Congress, H.R. 6611 proposed changing the reimbursement rate or pricing system for generic sterile injectable products for which there are three or fewer active manufacturers. Such changes are intended to prevent shortages by providing an incentive to manufacturers to continue production in a concentrated market. Specifically, the bill proposed changing the calculation of the Medicare reimbursement rate for generic sterile injectable products for which there are three or fewer active manufacturers from average sales price plus 6 percent to wholesale acquisition cost. It would also exempt such products from Medicaid rebates and 340B discounts. The premise of the Medicare reimbursement change proposal is that basing reimbursement on wholesale acquisition cost will enable manufacturers to adjust their prices to meet supply and demand, which some claim the current reimbursement structure prevents. By enabling manufacturers to more readily adjust their prices and achieve a profit, this proposal aims to provide an incentive for manufacturers to remain in the market, thereby preventing further erosion of manufacturing capacity, which could make the generic sterile injectable market even more vulnerable to shortages. Proponents note that this incentive could positively affect manufacturer profit and influence a manufacturer’s decision about participating in the market for a particular drug.
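The difference between the two reimbursement bases discussed above can be illustrated with simple arithmetic. The sketch below is illustrative only: the prices and the function names are hypothetical, not drawn from the bill or from Medicare payment data.

```python
# Hypothetical comparison of the two Medicare payment bases discussed
# for generic sterile injectables: average sales price (ASP) plus
# 6 percent (the current calculation) versus wholesale acquisition
# cost (WAC), as proposed in H.R. 6611. All prices are illustrative.

def payment_asp_plus_6(asp: float) -> float:
    """Payment under the ASP-based formula: ASP + 6%."""
    return round(asp * 1.06, 2)

def payment_wac(wac: float) -> float:
    """Payment under the proposed WAC-based formula."""
    return round(wac, 2)

asp = 10.00  # hypothetical average sales price per unit ($)
wac = 14.00  # hypothetical wholesale acquisition cost per unit ($)

print(f"ASP + 6%: ${payment_asp_plus_6(asp):.2f}")  # ASP + 6%: $10.60
print(f"WAC:      ${payment_wac(wac):.2f}")         # WAC:      $14.00
```

Because ASP is computed from a manufacturer's past sales, payment under the ASP-based formula responds to a price change only with a lag, whereas a WAC-based rate tracks the manufacturer's list price directly; that difference is the pricing flexibility proponents cite.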
One stakeholder cautioned that, in their opinion, using reimbursement as an incentive increases the risk of fraud and abuse. The premise of the Medicaid rebate and 340B discount exemptions proposal is that such rebates exert additional downward pressure on already extremely low prices, thereby limiting manufacturers’ ability to sustain production and upgrade facilities. Removing such rebates and discounts would provide additional revenue to manufacturers, thereby potentially providing them an incentive to remain in the market and maintain manufacturing capacity or to re-enter the market. Proponents state that this incentive may help influence manufacturer margins, thereby providing revenue to invest in production capacity to ensure demand is met. However, some stakeholders caution that these exemptions would increase costs to patients and the government (including increasing drug costs and administrative costs to the government for tracking such an exemption). Representatives from one stakeholder group we interviewed noted that, according to its inquiries, the majority of generic sterile injectable drugs are manufactured by three or fewer manufacturers, in which case nearly all such drugs would be subject to this exemption, whether the drug had ever been in shortage or not. Further, stakeholders noted that generic sterile injectable drugs are often administered in hospital inpatient departments and are therefore not subject to Medicaid rebates, which only apply to outpatient drugs. One stakeholder stated that for the few drugs in this group that are subject to Medicaid rebates, the cost of these drugs is already low, which would result in a minimal financial impact of such an exemption. Finally, one stakeholder stated that 340B discount exemptions would have a minimal influence on drug shortages. In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Katherine L. 
Amoroso; Zhi Boon; Leia Dickerson; Sandra George; Alison Goetsch; Cathleen Hamann; Rebecca Hendrickson; Eagan Kemp; Sarah-Lynn McGrath; Yesook Merrill; and Leslie Powell made key contributions to this report.
Drug Compounding: Clear Authority and More Reliable Data Needed to Strengthen FDA Oversight. GAO-13-702. Washington, D.C.: July 31, 2013.
High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013.
Drug Shortages: FDA’s Ability to Respond Should Be Strengthened. GAO-12-315T. Washington, D.C.: December 15, 2011.
Drug Shortages: FDA’s Ability to Respond Should Be Strengthened. GAO-12-116. Washington, D.C.: November 21, 2011.
Drug Safety: FDA Faces Challenges Overseeing the Foreign Drug Manufacturing Supply Chain. GAO-11-936T. Washington, D.C.: September 14, 2011.
Food and Drug Administration: Response to Heparin Contamination Helped Protect Public Health; Controls That Were Needed for Working With External Entities Were Recently Added. GAO-11-95. Washington, D.C.: October 29, 2010.
Food and Drug Administration: Opportunities Exist to Better Address Management Challenges. GAO-10-279. Washington, D.C.: February 19, 2010.
Food and Drug Administration: FDA Faces Challenges Meeting Its Growing Medical Product Responsibilities and Should Develop Complete Estimates of Its Resource Needs. GAO-09-581. Washington, D.C.: June 19, 2009.
Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.
Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
Drug shortages have led to harmful patient outcomes, ranging from prolonged duration of a disease, to permanent injury, to death. FDA—an agency within the Department of Health and Human Services (HHS)—is responsible for protecting public health and works to prevent, alleviate, and resolve shortages. In 2011, GAO recommended that FDA enhance its ability to respond to shortages. In 2012, FDASIA gave FDA new authorities to improve its responsiveness and mandated GAO to study drug shortages. In this report, GAO (1) reviews the trends in recent drug shortages and describes what is known about their effect on patients and providers; (2) examines the causes of drug shortages; and (3) evaluates the progress FDA has made in addressing drug shortages. GAO analyzed data from FDA and the University of Utah Drug Information Service, which is generally regarded as the most comprehensive source of drug shortage information for the time period GAO reviewed. GAO interviewed officials from FDA and other federal agencies, organizations representing patients and providers, and drug manufacturers. GAO also reviewed the literature, relevant statutes, regulations, and documents. The number of drug shortages remains high. Although reports of new drug shortages declined in 2012, the total number of shortages active during a given year—including both new shortages reported and ongoing shortages that began in a prior year—has increased since 2007. Many shortages are of generic sterile injectable drugs. Provider association representatives reported that drug shortages may force providers to ration care or rely on less effective drugs. The immediate cause of drug shortages can generally be traced to a manufacturer halting or slowing production to address quality problems, triggering a supply disruption. Other manufacturers have a limited ability to respond to supply disruptions due to constrained manufacturing capacity.
GAO's analysis of data from the Food and Drug Administration (FDA) also showed that quality problems were a frequent cause. GAO also identified potential underlying causes specific to the economics of the generic sterile injectable drug market, such as that low profit margins have limited infrastructure investments or led some manufacturers to exit the market. While shortages have persisted, FDA has prevented more potential shortages in the last 2 years by improving its responsiveness. Among other things, FDA implemented Food and Drug Administration Safety and Innovation Act (FDASIA) requirements and recommendations GAO made in 2011. FDA has also initiated other steps to improve its response to shortages, such as developing procedures to enhance coordination between headquarters and field staff. However, there are shortcomings in its management of drug shortage data that are inconsistent with internal control standards. For example, FDA has not created policies or procedures governing the management of the data and does not perform routine quality checks on its data. Such shortcomings could ultimately hinder FDA's efforts to understand the causes of specific shortages as well as undermine its efforts to prevent them from occurring. In addition, FDA has not conducted routine analyses of the data to proactively identify and evaluate the risks of drug shortages. FDA should strengthen its internal controls over its drug shortage data and conduct periodic analyses to routinely and systematically assess drug shortage information, using this information to proactively identify drug shortage risk factors. HHS agreed with GAO's recommendations.
IRS is responsible for administering our nation’s voluntary tax system in a fair and efficient manner. To do so, IRS has a staff of about 115,000 employees who work at hundreds of locations in the United States and in several foreign countries. These employees (1) process over 200 million tax returns each year, (2) examine returns to determine whether additional taxes are owed, (3) collect delinquent taxes, and (4) investigate civil and criminal violations of the tax laws. To aid in carrying out these responsibilities, Congress has provided IRS with a broad set of discretionary enforcement powers. These enforcement powers include (1) examining taxpayers’ returns and assessing additional tax, interest, and penalties for underreported income or failure to file a return, (2) enforcing the collection of unpaid taxes by such actions as seizing taxpayers’ property, and (3) conducting criminal investigations of taxpayers and recommending prosecution for violations of the tax laws. In fiscal year 1992, IRS examined over 1 million individual taxpayers’ returns, took about 4.7 million enforced collection actions for delinquent taxes, and initiated over 6,000 criminal investigations. Each of these actions had the potential to create an adversarial relationship between the affected taxpayers and IRS staff. In 1988, concerned about allegations of taxpayer abuse, Congress enacted the Taxpayer Bill of Rights, a law containing numerous provisions to strengthen and clarify taxpayers’ rights in their dealings with IRS. In 1992, additional taxpayers’ rights legislation, identified as “Taxpayer Bill of Rights 2,” was passed by Congress as part of broader tax legislation but was not signed into law by the President. Very similar legislation, still identified as Taxpayer Bill of Rights 2, was introduced in the 103rd Congress as S. 542 and H.R. 22. In addition, some provisions of H.R. 22 were included in H.R. 3419, introduced in November 1993. 
As of September 1994, Congress had not passed these bills. At the outset, we learned that IRS has a wide range of controls and procedures to govern its relationships with taxpayers. But IRS has neither a specific definition of nor management information on the nature and extent of taxpayer abuse. Thus, it was not possible to select a representative sample of IRS actions to determine if taxpayer abuse had occurred and, if so, to estimate how frequently or attempt to determine if there were patterns of abuse in the many IRS divisions and offices throughout the country. Given the lack of an IRS definition of taxpayer abuse, we found it necessary to develop our own. On the basis of interviews with IRS officials and representatives of tax practitioners and taxpayer advocate organizations, we developed a definition of abuse that encompassed a broad range of situations potentially harmful to taxpayers. We attempted to define abuse from the taxpayer’s point of view, not from IRS’ viewpoint. Therefore, we defined it to include situations in which taxpayers were, or perceived they were, harmed when (1) an IRS employee violated a law, regulation, or IRS’ Rules of Conduct; (2) an IRS employee was unnecessarily aggressive in applying discretionary enforcement power; or (3) IRS’ information systems broke down. By “harmed” we meant primarily financial harm. But, we also recognized and incorporated into our definition the fact that frustration and the resulting burden arising from lengthy delays in resolving problems, time spent in dealing with IRS, and fear of the IRS can be factors in taxpayers’ situations that may contribute to their perception of abuse even though—from IRS’ perspective—the taxpayer may not have been abused. Next, we identified the controls and related measures IRS uses to prevent instances that would meet our definition of taxpayer abuse and to respond to allegations of such instances occurring. 
We also researched various IRS data sources and focused on Problem Resolution Program files, congressional correspondence files, and internal audit and internal security reports and files to find possible examples of abuse that would fall within our definition. We judgmentally selected 26 such examples and used them to analyze the effectiveness of IRS’ controls and processes to prevent such abuse. While we did not follow up on all 26 examples to determine whether taxpayers were actually harmed by IRS, we cited the circumstances of these examples in our discussions with IRS managers to learn the range of controls in place that should have prevented these circumstances from occurring. We selected these examples without regard to when the incidents occurred, resulting in examples spanning the period 1987 through 1993. However, we evaluated the controls that were in place during the period of our review, from April 1992 to January 1994. To illustrate our approach, we found an example in which an IRS employee, after accepting a cash payment from a taxpayer, stole the cash payment and falsified the document used to credit the taxpayer’s account. This led us to review the adequacy of IRS’ controls over taxpayers’ cash payments. Our review of the controls then led us to a conclusion that they could be strengthened and a recommendation about what should be done. During our review, an allegation of potential taxpayer abuse received considerable media attention because it involved reports of possible improper contacts with IRS by staff of the White House and the Federal Bureau of Investigation (FBI). We included an analysis of both the allegation and the adequacy of IRS’ controls to deal with such contacts in our report. The details of our objectives, scope, and methodology are discussed in appendix I. 
Appendix II provides a detailed description of IRS’ controls, processes, and oversight offices, as well as recent congressional and IRS initiatives that govern IRS’ interaction with taxpayers. Appendix III provides a summary of the provisions in the 1988 Taxpayer Bill of Rights. Appendix IV is a summary of GAO products that cover issues related to those discussed in this report. The Acting Commissioner of Internal Revenue provided written comments on a draft of this report. Those comments are presented and evaluated on pages 21 to 26 and are reprinted in appendix V. IRS has a wide range of controls, processes, and oversight offices designed to govern how its employees interact with taxpayers. Specifically, IRS has operational controls governing examination, collection, and criminal investigation activities to prevent taxpayer abuse. IRS also has a Problem Resolution Office to handle taxpayer complaints, if a taxpayer feels that these operational controls have broken down. In addition, IRS’ Internal Security Division investigates taxpayer complaints involving potential criminal misconduct by IRS employees. In recent years, legislation and IRS initiatives have aided taxpayers in dealing with IRS. In 1988, Congress passed the Taxpayer Bill of Rights (P.L. 100-647) containing numerous provisions that expanded taxpayer rights. IRS has begun quality management, ethics and integrity, and tax systems modernization initiatives, as well as a limited collection appeals project. And, a key element of IRS’ current strategy is emphasis on treating taxpayers as “customers.” All of these initiatives should help IRS to better serve taxpayers and to prevent their mistreatment. Despite IRS’ efforts to prevent violations of taxpayers’ rights, we found various instances of what we consider to be taxpayer abuse by IRS. Some instances involved situations in which IRS employees violated either the law or IRS’ Rules of Conduct and the taxpayer abuse may have been intentional. 
Other instances involved situations in which IRS employees violated neither the law nor a regulation, but used discretionary enforcement power in a way that appeared to unnecessarily create a financial or other hardship for the taxpayers. Still others involved IRS computer system problems that engaged taxpayers in lengthy efforts to resolve their tax problems, leaving them with the perception that they were abused by IRS. The following sections of this report discuss (1) the need for better information to aid in protecting taxpayers’ rights and (2) the specific areas where we believe IRS’ controls can be strengthened. Although IRS collects data on taxpayer complaints, it has neither a definition of nor management information for tracking and measuring taxpayer abuse. As a result, IRS is unable to determine the nature and extent of abuse by its employees or systems, and whether existing controls need to be strengthened. A specific definition of taxpayer abuse is essential to provide a basis for collecting consistent information about it and to assist IRS staff in identifying abuse when it occurs and preventing its reoccurrence. IRS has several management information systems that collect data on taxpayer complaints. Complaints handled by IRS’ Problem Resolution Program or investigated by its Internal Security Division are entered into their respective management information systems. IRS’ Labor Relations Division also has a management information system that includes the results of investigations of IRS employees and indicates any disciplinary actions taken against them, including those investigations that may have originated from taxpayer complaints. Each of these management information systems uses codes to track and measure various issues considered important to the respective offices, but none of them has a specific code for taxpayer abuse. For example, the Labor Relations system tracks such issues as criminal misconduct and misuse of authority by IRS employees. 
In some instances these particular issues may involve taxpayer abuse, but in other instances they do not. We found similar situations with both the Problem Resolution Program and Internal Security management information systems. Without a definition of taxpayer abuse and specific codes related to that definition, these systems are not currently able to record incidents of abuse to track their nature and extent. To better ensure that violations of taxpayers’ rights are minimized, we believe that IRS should establish a service-wide definition of taxpayer abuse and then identify and gather management information to systematically track its nature and extent. Although this may require IRS to modify some of its existing data bases, we believe that this can be accomplished without requiring additional appropriations. IRS is currently involved in an effort to develop broad-based performance indicators to allow top IRS, Treasury, and other administration officials, as well as Congress and the public, to better assess its performance in key areas. Developing the information needed to assess performance in controlling taxpayer abuse would seem to fit well into that effort. Taxpayer surveys IRS has conducted in recent years are another potential source of information about taxpayer abuse. As discussed in appendix II, these surveys have collected information from taxpayers about their views on how they were treated by IRS representatives. These surveys have not, however, included questions designed to identify possible abusive incidents for further analysis. Once IRS has defined and is systematically tracking abuse, these types of surveys could be used as another indicator of IRS’ progress. Public Law, Treasury Directives, and Internal Revenue Manual guidelines require that IRS protect the integrity, availability, and privacy of taxpayer information in its computer systems. Consequently, IRS employees are prohibited from obtaining access to taxpayer accounts without authorization. 
The Integrated Data Retrieval System (IDRS) is IRS’ primary computer system for accessing and adjusting taxpayer accounts. Authorized IRS staff obtain access to taxpayer information through IDRS terminals located at the service centers and the regional and district offices. There are approximately 56,000 staff nationwide authorized to use IDRS. Eventually, IRS plans to replace IDRS as part of its Tax Systems Modernization (TSM) initiative. According to IRS, under the new system, users will be able to obtain more taxpayer information than they can through IDRS. IRS has procedures and controls in place to aid in preventing and detecting unauthorized access and use of taxpayer information contained in IDRS. Specifically, each IDRS user is given a unique password that allows access to the system. Users are also assigned a profile of command codes—codes that, among other things, enable users to make changes in taxpayers’ accounts—based on the user’s job requirements. The profile limits the user to only those command codes needed to do his or her job effectively. IDRS also provides a means to identify all employees who access taxpayer accounts, as IDRS records each employee access of taxpayer information in a daily audit trail. IRS can search these audit trails to investigate specific allegations of unauthorized access, as well as to look for patterns of use that could indicate unauthorized access. In addition, IDRS automatically generates security reports when employees access their own accounts, their spouses’ accounts, or the accounts of other employees. Each IRS office has security personnel who are responsible for monitoring all IDRS activities, including monitoring security reports, adding and removing IDRS users, and assigning profiles for IDRS users. 
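The kind of audit-trail screening described above—flagging accesses to an employee’s own account or to another employee’s account—can be sketched in a few lines. The record fields, identifiers, and data shapes below are illustrative assumptions for this report, not actual IDRS formats or IRS specifications.

```python
# Hypothetical sketch of IDRS-style security screening: scan a daily audit
# trail and flag any access where the accessed account belongs to the
# accessing employee or to another employee. All names are illustrative.
from dataclasses import dataclass

@dataclass
class AccessRecord:
    employee_id: str    # who performed the access
    employee_ssn: str   # that employee's own taxpayer account
    accessed_ssn: str   # the taxpayer account that was viewed or changed
    command_code: str   # IDRS command code used

def flag_suspect_accesses(audit_trail, employee_ssns):
    """Return (record, reason) pairs for accesses that would trigger a
    security report: self-access or access to another employee's account."""
    flagged = []
    for rec in audit_trail:
        if rec.accessed_ssn == rec.employee_ssn:
            flagged.append((rec, "self-access"))
        elif rec.accessed_ssn in employee_ssns:
            flagged.append((rec, "employee-account access"))
    return flagged

# Illustrative daily audit trail (fabricated data).
trail = [
    AccessRecord("E1", "111-11-1111", "222-22-2222", "TXMOD"),  # ordinary taxpayer
    AccessRecord("E1", "111-11-1111", "111-11-1111", "TXMOD"),  # own account
    AccessRecord("E2", "333-33-3333", "444-44-4444", "TXMOD"),  # coworker's account
]
employees = {"111-11-1111", "333-33-3333", "444-44-4444"}
for rec, reason in flag_suspect_accesses(trail, employees):
    print(rec.employee_id, reason)
```

Note that a screen like this catches only the patterns it is told to look for; the browsing of friends’, relatives’, and celebrities’ accounts discussed below falls outside these rules, which is one reason the audit trails alone proved inadequate.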
We learned through discussions with IRS Internal Audit staff and a review of an October 1992 Internal Audit report that these controls and procedures provide IRS with limited capability to (1) prevent employees from unauthorized access to taxpayers’ accounts and (2) detect an unauthorized access once it occurs. Even though IRS employees can access IDRS only with a password, once in the system, they cannot be prevented from accessing the account of any taxpayer living within their service center area. Furthermore, even though IDRS records every employee access of IDRS in its daily audit trail, these audit trails are so voluminous and detailed that they cannot be used efficiently to identify inappropriate access and misuse of IDRS information. In addition to these weaknesses, the security reports monitored by security personnel are not adequate to help them identify potential browsing, disclosure, or other integrity problems. Finally, according to the Internal Audit report, “. . . the IDRS Security Handbook and related training materials do not provide proper guidance to security personnel on how to detect potential employee misuse of IDRS.” In one of our examples of alleged abuse, an IRS employee, after a personal dispute with a contractor, gained access to the contractor’s account without authorization. The employee then allegedly used this information to threaten the contractor with enforcement action in an effort to favorably resolve the dispute. Because of the weaknesses in IDRS security as described above, the unauthorized access to the contractor’s account described in this example would not automatically have been detected by security personnel. Rather, it was only because the taxpayer complained that IRS management was made aware of this specific instance of taxpayer abuse. IRS management is aware of its overall problems with IDRS security because of the Internal Audit report mentioned above. 
According to the report, 368 IRS employees in one region had used IDRS to gain access to nonwork-related taxpayer accounts, including those of friends, relatives, neighbors, and celebrities. In most instances, the access did not result in changes to taxpayers’ accounts, but rather enabled the IRS employees merely to view the taxpayers’ account information. Ultimately, information on 79 employees was referred to Internal Security for investigation of potential criminal violations. Internal Security determined that six employees prepared fraudulent returns for taxpayers and then monitored the accounts on IDRS. The actions of some of these employees are being reviewed by the appropriate U.S. Attorney for potential criminal prosecution. On the basis of these findings, Internal Audit recommended that IRS management take actions to strengthen existing IDRS security controls. Internal Audit recommended seven steps to enhance security controls over IDRS, one of which was to ensure that the security system for TSM will have controls similar to those recommended for the current IDRS security system. We also discussed these problems in a September 1993 report that recommended several actions IRS needs to take to strengthen its general controls over computerized information systems. We and IRS are continuing to study ways to solve these problems. IRS is currently working on a program to help detect unauthorized access to IDRS. Specifically, the goal is to implement standardized IDRS reviews periodically in each service center. To prevent unauthorized access to taxpayer accounts, IRS wants to limit some employees’ access to only specified accounts authorized by a manager for official purposes. IRS has also indicated that it plans to build security controls into the system that will eventually replace IDRS to minimize unauthorized access to taxpayer information. 
Although IRS has yet to develop a cost/benefit analysis for these security controls, IRS officials said that the cost of these controls will be included in future requests for TSM appropriations. When selecting taxpayers’ returns for examination, IRS often uses computer-generated lists to identify returns with examination potential. However, because computer-aided selection techniques rely solely on information in filed returns, IRS collects information from outside sources to identify other areas of potential taxpayer noncompliance. Information Gathering Projects (IGPs) are one technique that IRS uses to collect outside information and to identify returns with examination potential. In fiscal years 1990 and 1991, district office examinations of individual taxpayers resulting from IGPs accounted for about 4.5 percent of all such examinations. An IGP is a study or survey undertaken to identify noncompliance with the tax laws. It usually involves a limited number of taxpayers within such categories as an occupation, an industry, a geographic area, or a specific economic activity. IRS requires that an IGP be authorized by a district director or higher level management official for a specified length of time during which specific tax-related information is to be collected from third party sources. Once authorized, IGPs normally include an information gathering phase and an examination phase. During the information gathering phase, a project team—revenue agents and a project coordinator—collect and analyze information on a particular group of taxpayers. On the basis of this analysis, the project team will identify tax returns that have potential for tax changes and therefore should be examined during the project. Examination staff then review the returns to identify those with the greatest potential for tax changes. The returns selected will then be sent to an examination group designated to conduct the examinations. 
Although IRS procedures provide general guidelines for identifying, approving, initiating, and coordinating IGPs, the controls and procedures are not adequate to prevent examination staff from selectively targeting individual taxpayers for examination. For example, although IRS requires project coordinators to develop general work plans for each IGP, there is no requirement in IRS’ procedures that specific criteria be established for selecting tax returns to be examined during the project. Furthermore, IRS’ procedures do not require a separation of duties—a key examination control against potential abuse—between project staff responsible for identifying potential returns to be included in the project and staff responsible for selecting the tax returns to be examined. As a result, an examination employee working on the project could be involved in (1) the project’s information gathering phase, which results in the selection of a group of tax returns that have potential for tax changes, and (2) selecting for examination those returns from that group believed to have the greatest potential for tax changes. This makes it possible for such an employee to selectively target an individual taxpayer for examination during the project. In one of our examples, a revenue agent working on an IGP included for examination the returns of two taxpayers against whom the revenue agent had initiated legal action stemming from a personal business dispute. IRS is currently implementing Compliance 2000, an initiative designed to increase taxpayer compliance by (1) identifying market segments believed to be in noncompliance, (2) determining the reasons for such noncompliance, and (3) improving taxpayer compliance using assistance and education methods before initiating more traditional enforcement methods. 
According to IRS officials, as IRS implements Compliance 2000, it will likely increase the use of special enforcement projects and, therefore, increase the number of returns selected for examination using locally-derived and possibly subjective criteria, such as those used during IGPs. To help ensure that taxpayers are not improperly targeted for examination by IRS employees during IGPs, we believe that IRS should revise its guidelines to require that specific criteria be established for selecting taxpayers’ returns to be examined during these projects. We also believe there should be a separation of duties between project staff who identify returns with potential for tax changes, and staff who select the returns to be examined. Since these are basically procedural changes, we do not believe that IRS would incur substantial costs in implementing them. IRS officials told us that IRS prefers that taxpayers settle their tax bills with a check or money order. However, IRS is required by law to accept cash if a taxpayer insists on this method of payment. When a taxpayer pays with cash, an IRS collection employee is required to provide the taxpayer with a cash receipt—IRS Form 809. At the end of each day, collection support staff are to process the payments and reconcile all Form 809 receipts they receive with daily collection activity reports submitted to them by collection staff. In addition to the daily reconciliation, collection managers are to do an annual reconciliation of all Form 809 receipts issued to collection staff to ensure that all receipts are accounted for. Any discrepancies noted during either the daily or annual reconciliations are to be discussed by the appropriate collection employee and his or her supervisor. We found that IRS did not consistently mention its preference for tax payments by check or money order in its forms, notices, and publications. 
For example, IRS Publication 594 “Understanding the Collection Process” says that taxpayers must receive an IRS Form 809 receipt for cash payments to the IRS, but does not say that IRS prefers either a check or money order. We also found that the controls to prevent IRS employees from embezzling taxpayers’ cash payments relied to a great extent on employee integrity and taxpayer complaints. Although Form 809 receipts provided to taxpayers are to be reconciled with daily collection reports, there are no management reviews of all Form 809 receipts other than the annual reconciliation. As a result, if a collection employee embezzled a taxpayer’s cash payment and the embezzlement was not detected through the daily reconciliation, IRS might not detect this until the next annual reconciliation. In the interim, IRS relies on taxpayer complaints to identify when employees embezzle taxpayers’ cash remittances. In one of our examples, we found that a taxpayer complained to IRS that her bank account was levied after she fully paid her tax liability with cash. Internal Security investigated her complaint and determined that the IRS collection employee whom she paid had embezzled most of her cash payment by altering the amount on the cash receipt he submitted to the collection support staff. This employee also embezzled other taxpayers’ cash payments for which he had not submitted any cash receipts. Unfortunately for the taxpayer in this example, the situation was not detected until the taxpayer complained about the erroneous bank account levy made by IRS. Reconciling outstanding cash receipts more often may have detected this problem before the taxpayer was subjected to the additional IRS collection action. To better protect against possible embezzlement of cash payments, we believe that IRS should reconcile all outstanding Form 809 cash receipts more often than once a year. 
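The daily reconciliation described above amounts to matching each Form 809 receipt against the amount the employee reported on the daily collection activity report. The following sketch illustrates that matching logic only; the field names, receipt numbers, and amounts are fabricated for illustration and do not reflect actual IRS record formats.

```python
# Illustrative sketch of a Form 809 daily reconciliation: compare the
# amount on each receipt with the amount the employee reported, and flag
# mismatches and missing receipts. All data shapes are assumptions.

def reconcile(receipts, daily_report):
    """receipts: {receipt_no: amount on the receipt}
    daily_report: {receipt_no: amount the employee reported}
    Returns {receipt_no: (problem, receipt_amount, reported_amount)}."""
    discrepancies = {}
    for no, amount in receipts.items():
        reported = daily_report.get(no)
        if reported is None:
            discrepancies[no] = ("missing from report", amount, None)
        elif reported != amount:
            discrepancies[no] = ("amount altered", amount, reported)
    return discrepancies

receipts = {"809-001": 500.00, "809-002": 1200.00, "809-003": 75.00}
report   = {"809-001": 500.00, "809-002": 200.00}  # one understated, one omitted
print(reconcile(receipts, report))
```

A check like this can only catch what reaches the reconciler: in the embezzlement example above, the employee controlled both the altered receipt copy and the daily report, which is why the taxpayer’s complaint, and the annual reconciliation of all outstanding receipts, were the only backstops.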
We also believe that IRS should consistently stress in its forms, notices, and publications that taxpayers should use checks or money orders whenever possible, rather than cash to pay their tax bills. In our view, IRS could implement these changes at minimal cost, as they are basically procedural changes and modifications to existing forms and publications. When businesses fail to collect or pay withheld income, employment, or excise taxes, IRS may assess a trust fund recovery penalty against the responsible officers and employees. This penalty amounts to 100 percent of the unpaid taxes. IRS may also charge interest from the date the penalty was assessed. In determining who should be assessed the penalty, IRS is required to show that the employee being assessed was responsible for and willfully failed to collect or pay the taxes to IRS. Although IRS may assess the penalty against all responsible officers and employees, it is to collect only the amount of tax owed. That is, if taxes owed amount to $100, IRS may hold various company officials responsible, but it is to collect no more than $100 (plus interest) in total from these officials. We reported on IRS’ process for collecting 100-percent penalties in August 1989. Relatively large trust fund recovery penalties have caused financial hardships for the individuals involved. Some individuals have complained that they were wrongfully assessed the penalty and then required by IRS to show why they were not liable for the penalty. In one of the cases we reviewed, a bookkeeper for a company that had declared bankruptcy was assessed penalties and interest on the business’s unpaid taxes. After long and exhaustive proceedings, the state tax agency determined that the bookkeeper was not an operating officer and did not owe the state penalty. Nonetheless, IRS continued to pursue the bookkeeper for payment of the federal penalty. 
Six months later, with the help of his Congressman, the bookkeeper convinced IRS that he was not responsible for paying the trust fund taxes. Some responsible employees may not be aware that they could be assessed the penalty if they fail to ensure that the taxes are paid to IRS. Moreover, under current law—Internal Revenue Code Section 6103—IRS is prohibited from disclosing to a responsible person the names of other responsible persons held liable for the penalty and the general nature of collection actions taken against them. IRS has recognized weaknesses in its controls and procedures for identifying the responsible person for this type of penalty. As a result, IRS instituted policy changes aimed at ensuring that responsibility for paying the penalty remained with the responsible person. The revised policy requires IRS managers to ensure that their staffs conduct quality investigations to identify responsible persons and prove willful intent. Taxpayer rights legislation introduced in Congress in 1992 and 1993 contained provisions that, if enacted, would assist individuals in getting information about the trust fund recovery penalty. The bills would require IRS to increase awareness of the penalty through special information packets and printed warnings on tax documents. The bills would also allow each individual assessed the penalty to find out from IRS the names of others against whom IRS had assessed the penalty. Also, the bills would allow these assessed individuals to find out the nature of any collection actions being taken against the other assessed individuals so that all involved parties would have complete information with which to deal with IRS and each other. We support the intent of this provision of the proposed legislation. 
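The collection cap on the trust fund recovery penalty—each responsible person may be assessed the full unpaid amount, but IRS is to collect that amount only once across all of them—can be illustrated with a small sketch. The function, figures, and payment order are hypothetical, and interest is ignored for simplicity.

```python
# Minimal sketch of the trust fund recovery penalty collection cap:
# credit each responsible person's payment, but never collect more in
# total than the unpaid tax. Figures are illustrative only.

def apply_payment(tax_owed, collected_so_far, payment):
    """Apply a payment toward the unpaid tax; any excess over the
    remaining balance is returned rather than collected."""
    remaining = tax_owed - collected_so_far
    applied = min(payment, remaining)
    excess = payment - applied
    return collected_so_far + applied, excess

tax_owed = 100.0           # unpaid trust fund taxes
collected = 0.0
collected, excess = apply_payment(tax_owed, collected, 60.0)  # officer A pays 60
collected, excess = apply_payment(tax_owed, collected, 60.0)  # officer B pays 60; only 40 applied
print(collected, excess)
```

The sketch shows why the disclosure provisions in the proposed legislation matter: under current section 6103, officer B in this hypothetical could not learn from IRS that officer A had already paid part of the balance.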
To help responsible officials and employees become more aware of their responsibilities to collect and forward trust fund taxes to IRS, we believe that IRS should provide special information packets describing those responsibilities and the penalty for failing to meet them. IRS is already implementing changes to its trust fund recovery penalty assessment process, which will remedy some of these problems. As a result, we do not believe that IRS would incur significant costs to implement the additional changes. We found examples of situations in which taxpayers repeatedly received tax deficiency notices and payment demands despite continual contacts with IRS over a period of months and even years in an attempt to resolve problems with their accounts. IRS’ inability to correct the underlying problems in such situations resulted in taxpayers feeling frustrated. In these instances, although no IRS employee appeared to have intentionally abused them, the taxpayers’ correspondence with IRS indicated they felt they were abused by the “tax system.” In one instance, a taxpayer required intervention from her Senator to prevent IRS from taking more than $50,000 to pay for taxes on a sale of property that the taxpayer had not owned or sold. The problem arose because two taxpayers had the same social security number and the same name. Initially, IRS released the levy it had placed on the taxpayer’s salary to allow her time to prove that she was not the seller of the property. Although the taxpayer tried to resolve the problem by obtaining a letter from the Social Security Administration explaining the problem with the duplicate social security number and same name, IRS would not accept the letter as proof of who sold the property. The taxpayer’s efforts to resolve the problem by working with the bank that had handled the property sale also failed. Finally, the taxpayer contacted her Senator and eventually was able to get the levy released. 
In another instance, a taxpayer who promptly paid an additional tax assessment in early 1991 got help from his Senator to get IRS to acknowledge that he had paid his assessment in a timely manner. Soon after the taxpayer sent his payment, IRS sent him a check in an amount very close to the amount he had originally paid. Later, IRS wrote the taxpayer, asking for payment of the original tax assessment and adding a penalty for late payment. Correspondence continued for months back and forth between the taxpayer and IRS. Finally, in early 1992, nearly a year after the taxpayer had made his payment, the matter was resolved with IRS noting that the problem occurred because the taxpayer’s payment was posted to his account before the additional tax assessment had been recorded. A more general type of problem affects divorced or separated spouses. Divorced or separated taxpayers who had previously filed joint returns may subsequently be assessed a tax deficiency. In these instances, IRS’ procedure is to send notices of deficiency to the last known address of the spouse whose name and social security number appeared first on the joint return. Once enforcement action begins, the other spouse may be subjected to such actions as a levy on his or her salary without having been informed by IRS of the tax delinquency. IRS’ procedures require that duplicate notices of deficiency be sent by certified or registered mail to each spouse, if the spouses notify IRS that separate residences have been established. However, IRS’ computer system is not capable of searching taxpayer files each time a notice of deficiency is issued for a joint return to determine whether spouses have subsequently filed separate returns with new addresses or otherwise provided separate addresses. 
IRS Problem Resolution Program officials in IRS’ Southeast Region told us they frequently became involved in situations where a separated or divorced taxpayer, typically a woman, says that the first notice she received for a joint return deficiency was a notice of lien or levy on her property. In a February 1992 congressional hearing on S. 2239, Taxpayer Bill of Rights 2, Treasury’s Assistant Secretary for Tax Policy said that IRS would begin sending a notice of deficiency to both parties in such situations “. . . as soon as modernization of its computer system makes it feasible to do so.” More recently, IRS Problem Resolution staff told us that IRS’ TSM program will improve existing computer capabilities and make it possible for IRS to begin providing notices to both parties. The three examples discussed above, and others we have reviewed, have the common thread of occurring and continuing primarily because of information handling problems. We believe that IRS’ implementation of the various elements of TSM, together with IRS’ emphasis on improving operations and providing better service to taxpayers, should go a long way toward eliminating these types of problems. With adequate controls to guard against misuse, TSM should make taxpayer information more accurate and more readily available to IRS employees and, consequently, should increase IRS’ ability to help taxpayers resolve their problems. However, TSM is a massive, long-term effort, extending into the next century, so it may be some time before the technological capability to resolve these problems is in place. Given that, we believe IRS needs to do as much as it can to identify possible interim solutions and to assure that TSM deals with these problems. First, IRS can systematically identify, inventory, and categorize the various kinds of information handling problems that lead to taxpayer frustration and perceptions of abuse. 
Analysis of these data in connection with IRS’ operational improvement efforts may help identify some short-term remedies. Second, IRS can use the data in its current operational improvement effort to define TSM business requirements and thereby make sure that TSM has the capabilities needed to deal with these types of problems. We recently testified about the need for IRS to define its business requirements for TSM in detail. Carrying out these steps would require some analytical resources but, since the steps are consistent with TSM and operational improvement efforts already underway, we do not believe substantial incremental costs would be incurred.

IRS controls for dealing with third-party contacts that provide information on possible tax violations call for the information to be referred to the appropriate IRS unit for evaluation as to what action, if any, to take. For example, if someone contacts IRS with information that a taxpayer has not reported a substantial amount of his or her income and suggests that an audit could be warranted, that information would be referred to the Examination Division in the IRS field office that has jurisdiction. Examination staff would then evaluate the information for credibility and specificity, including reviewing the taxpayer’s return (assuming one was filed) to see if there were indications of underreporting, as part of the decision on whether to examine the taxpayer’s return. Since IRS’ National Office is prohibited from initiating an examination, field office managers make final decisions in such cases. IRS has specific procedures to handle requests from the White House for matters such as preparing tax check reports on prospective appointees, but there are no specific procedures to handle a White House contact offering information about potential tax violations.
According to IRS officials, such information would be handled in the same manner as any other third-party communication in that it would be evaluated for potential tax examination and/or criminal investigation purposes by Examination Division or Criminal Investigation Division staff.

In May 1993, the White House announced that seven employees of the White House Travel Office had been fired because of concerns about the office’s management and financial integrity. (These and related issues are discussed in detail in our report entitled White House Travel Office Operations (GAO/GGD-94-132, May 2, 1994).) Soon after, related allegations arose that the White House and/or the FBI made improper contacts with IRS, resulting in improper IRS contacts with a taxpayer. These allegations have been reviewed by three organizations. A White House team, led by the former Chief of Staff to the President, reported that there was no evidence of White House contact with IRS in connection with the Travel Office issue. The IRS Inspection Service investigated the allegations involving IRS and concluded that no White House contact had been made with IRS concerning this matter and that IRS employees had carried out their duties properly. Although IRS released a heavily edited copy of its report, most of the report cannot be made public because it contains tax return information protected from disclosure by section 6103 of the Internal Revenue Code, and the taxpayer declined to grant a waiver from this provision of the law so that IRS could comment publicly on this matter. At the request of a Member of Congress, the Office of Inspector General (OIG), Department of the Treasury, also investigated the allegations involving IRS. In its report, issued on March 31, 1994, the OIG also concluded that the White House had not contacted IRS about the Travel Office matter and that it found no evidence of taxpayer abuse by IRS employees.
Disclosure of tax return information in the OIG’s report also was limited by section 6103. We reviewed the three reports and supporting documentation and discussed their findings with representatives of the three organizations. We also interviewed key White House, IRS, and FBI personnel involved in the events leading up to the allegations of abuse by IRS. Finally, we interviewed representatives of the taxpayer involved. On the basis of our review, we believe that (1) neither the White House nor the FBI made improper contact with IRS, (2) IRS employees carried out their duties properly and in accordance with IRS guidelines and procedures, and (3) abuse did not occur. Section 6103 provides us with access to tax return information to enable us to carry out our work, but it also limits the information we may disclose. Thus, we are not able to provide the details of our review in this report.

In July 1993, the White House Counsel issued guidance to White House staff on contacts with the FBI and the IRS, which supplemented guidelines issued earlier in the year. The July guidelines stated that “It is never appropriate for White House personnel to initiate an investigation or audit by directly contacting the Internal Revenue Service.” The guidelines further provided that any information about possible violations of law or wrongful activities was to be communicated by White House staff to the Counsel to the President, who would decide whether the information should be provided to senior Justice or Treasury Department officials. As noted above, IRS has specific procedures for handling White House contacts about tax checks for appointees and for other administrative matters, and general procedures for handling third-party contacts from any source offering information that may lead to examinations or investigations. IRS does not, however, have specific procedures to deal with a White House contact offering information about possible tax violations.
We emphasize that we found no evidence of taxpayer abuse in this situation. However, we believe IRS can expand its procedures by adding guidance to its employees on how to handle White House contacts other than those involving tax checks and routine administrative matters. Developing and issuing such guidance should not impose any significant incremental costs on IRS. IRS has a wide range of controls, processes, and oversight offices designed to govern how its employees interact with taxpayers. While this “system” of controls has many elements designed to protect taxpayers from abuse, including IRS’ initiatives and numerous protections provided by law, it lacks the key element of timely and accurate information about when, where, how often, and under what circumstances taxpayer abuse occurs. This information would greatly enhance IRS’ ability to pull together its various efforts to deal with abuse into a more effective system for minimizing it. The information would also be valuable to Congress and taxpayers in general in assessing IRS’ progress in treating taxpayers as customers—an often cited IRS goal. Therefore, we believe IRS should define taxpayer abuse and develop the management information needed to identify its nature and extent. In addition, we believe IRS can strengthen its controls in several specific areas and provide additional information to taxpayers that will increase their ability to protect their rights. 
Specifically, we believe IRS can (1) ensure that the information systems now being developed under its TSM initiative include the capability to minimize unauthorized access to taxpayer information, (2) clarify its guidelines for selecting tax returns during IGPs, (3) reconcile its cash receipts more often and encourage taxpayers to avoid using cash whenever possible in making payments to IRS, (4) provide individuals who may be subject to trust fund recovery penalties with more information about their responsibilities, (5) attempt to identify short-term remedies to minimize the problems caused taxpayers by IRS’ information handling weaknesses and ensure that the TSM program includes requirements designed to solve those problems as the new information systems are implemented over the next several years, and (6) develop specific guidance for IRS employees on how they are to handle White House contacts. Finally, we believe that legislation is needed to provide IRS with authority to disclose information to all responsible officers involved in IRS efforts to collect a trust fund recovery penalty. This authority was included in legislation titled Taxpayer Bill of Rights 2 (S. 542 and H.R. 22) introduced in the 103rd Congress. We do not believe that Congress needs to provide additional appropriations to enable IRS to implement these recommendations, with one possible exception. Although additional funding may be needed so that IRS can deal with the information management problems discussed in this report as it proceeds with the TSM program, IRS does not know the amount of funds that will be needed because it has yet to decide on specific requirements and develop a cost/benefit analysis for these requirements. Any funding needed should be included in budget requests for IRS’ TSM program. We believe that the steps we are recommending to correct the remaining problems will not require additional appropriations.
To improve IRS’ ability to manage its interactions with taxpayers, we recommend that the Commissioner of Internal Revenue establish a service-wide definition of taxpayer abuse or mistreatment and identify and gather the management information needed to systematically track its nature and extent. To strengthen controls for preventing taxpayer abuse within certain areas of IRS operations, we recommend that the Commissioner of Internal Revenue

ensure that TSM provides the capability to minimize unauthorized employee access to taxpayer information in the computer system that eventually replaces IDRS;

revise the guidelines for IGPs to require that specific criteria be established for selecting taxpayers’ returns to be examined during each project and that there be a separation of duties between staff who identify returns with potential for tax changes and staff who select the returns to be examined;

reconcile all outstanding Form 809 cash receipts more often than once a year, and stress in forms, notices, and publications that taxpayers should use checks or money orders whenever possible, rather than cash, to pay their tax bills;

better inform taxpayers about their responsibility and potential liability for the trust fund recovery penalty by providing taxpayers with special information packets;

seek ways to alleviate taxpayers’ frustration in the short term by analyzing the most prevalent kinds of information-handling problems and ensuring that requirements now being developed for TSM information systems provide for long-term solutions to those problems; and

provide specific guidance for IRS employees on how they should handle White House contacts other than those involving tax checks of potential appointees or routine administrative matters.
To better enable taxpayers and IRS to resolve trust fund liabilities, we recommend that Congress amend the Internal Revenue Code to allow IRS to provide information to all responsible officers regarding its efforts to collect the trust fund recovery penalty from other responsible officers. The Acting Commissioner of Internal Revenue commented on a draft of this report by letter dated August 26, 1994. (See app. V.) We also discussed the draft report several times with IRS officials. Our evaluation of IRS’ written comments on our proposed recommendations in the draft report follows. IRS disagreed with our recommendation that it establish a definition of taxpayer abuse and identify and gather the information needed to systematically track the nature and extent of such incidents. IRS said use of the term “taxpayer abuse” was misleading, inaccurate, and inflammatory; disagreed with parts of the definition of abuse used in our study; challenged the assumption that there was any need to collect additional information about abuse because its existing systems already identify and gather sufficient information to track and manage cases of improper treatment of taxpayers; suggested that our methodology was flawed because it did not show a statistically significant frequency of abuse; and asserted that the problem, to the extent it exists, was well under control. In summary, IRS said that the problem of taxpayer abuse, to the extent that it exists, is best defined, monitored, and corrected within the context of its definitions and current management information systems. Consequently, IRS planned no action on our recommendation. IRS’ disagreement with our definition of taxpayer abuse centered on two of the three components we used to define this issue in the absence of an IRS definition. 
While agreeing that taxpayers can be abused when IRS employees violate laws, regulations, or rules of conduct, IRS did not agree that harm resulting from employees aggressively applying discretionary enforcement power or information system breakdowns constituted taxpayer abuse. We believe that it is commendable when IRS employees aggressively respond to taxpayers who do not comply with the tax laws, particularly if the noncompliance appears to be intentional. However, we noted instances when taxpayers who may not have complied because they did not understand the tax laws also received aggressive—perhaps overly aggressive—treatment by IRS employees. Throughout our study, it was our intent to focus on these latter instances. We have clarified our definition to explicitly specify unnecessarily aggressive application of discretionary enforcement power. We also noted instances when taxpayers were thoroughly frustrated due to the time and cost they had to expend in order to resolve misunderstandings resulting from IRS information handling problems. In both types of situations, we can understand why taxpayers would feel abused by IRS even though there was no violation of laws, regulations, or rules of conduct. Another area in which we and IRS disagree is whether mistreatment of taxpayers, whatever its frequency and whether intentional or not, is an issue of sufficient significance to merit specific management attention based on systematic information gathering, reporting, and tracking over time. IRS clearly believes it is not unless it can be shown that the problem is statistically significant relative to the total number of IRS contacts with the public. IRS argues in its comments that (1) our study did not show that abuse, as we defined it, occurred with statistically verifiable frequency; and (2) other IRS information gathering activities give IRS management sufficient information to track these situations. 
In other words, IRS said that we have not shown that there is a significant problem, but if there is, IRS believes it has all the information needed to deal with it. We believe the issue of taxpayer mistreatment deserves attention, not because we found it to occur frequently, but because we could not determine how frequently it occurs, and neither can IRS without modifying its existing management information systems. More fundamentally, we believe the issue inherently deserves attention. Congress has provided IRS with broad powers to carry out demanding and difficult responsibilities, but Congress also continues to be concerned about protecting taxpayers from arbitrary or overzealous IRS employees and from administrative systems that sometimes go awry. It does not seem unreasonable to us that IRS should have information available about such incidents for its own use in working to strengthen preventative measures and to be able to report periodically on the issue. It is true that our study does not present a statistical analysis of the incidence of abuse. That is the point. We say early in our report that IRS does not have the information readily available to estimate the frequency of such incidents. Our concern is not that we found a high—or low—frequency of abuse. Our concern is that the information needed to allow either us or IRS to determine the frequency of such incidents and to assess the effectiveness of IRS’ controls to prevent such incidents over time is not presently available. We agree, and our draft report recognized, that IRS has numerous information gathering efforts that collect a great deal of information related to the mistreatment of taxpayers. These include an attempt to measure taxpayer burden, defined as time, cost, and dissatisfaction, through such means as an annual report to the tax committees and periodic customer surveys. 
We do not agree, however, that these efforts and the management information derived from them, as presently structured, allow IRS to adequately measure and track incidents of taxpayer mistreatment. IRS says, for example, that it has in place definitions and an information system to track and manage cases where IRS employees have violated a law, regulation, or the Office of Government Ethics’ Standards of Ethical Conduct for Employees of the Executive Branch. This system contains information on all cases investigated by IRS’ Internal Security Division, ranging from allegations of violating travel regulations to accepting bribes. While we were able to select some cases out of the system that met our study definition of taxpayer abuse, we found doing so extremely time-consuming and cumbersome because the system is structured to identify employee violations of policies and procedures rather than to identify cases of abuse or taxpayer mistreatment from the taxpayer’s perspective. In any event, IRS has no definition of taxpayer perception of mistreatment or abuse, and the system has no code or category to identify such cases. As a result, although the cases that are entered in this system may involve taxpayer mistreatment, at present no reporting or tracking of such cases can occur. In summary, IRS believes it has adequate information to deal with what it believes are rare instances of taxpayer mistreatment. We do not agree that IRS has adequate information, for the reasons noted above. We believe, however, that IRS could readily develop adequate information from its existing management information systems by developing a definition of “taxpayer mistreatment,” or such other term as IRS chooses, and modifying one or more of its present systems to identify incidents with the characteristics called for by the definition.
Similarly, IRS could develop questions for use in its customer surveys to serve as indicators of the frequency of taxpayer mistreatment and of progress in preventing it. We believe IRS should reconsider its decision not to implement this recommendation. IRS disagreed with a recommendation we made in a draft of this report that it revise its Rules of Conduct to deal with situations that can arise when IRS employees have dealings with taxpayers with whom the employees have recently completed an examination, investigation, or collection enforcement action. IRS said that it believed the Office of Government Ethics’ Standards of Ethical Conduct for Employees of the Executive Branch, which superseded IRS’ and other agencies’ Rules of Conduct, are sufficient to address the issues involved. On the basis of our discussions with IRS ethics officials and Office of Government Ethics officials, we agree and have dropped this recommendation and related material from our final report. IRS’ comments on our other recommendations and our recommendation to Congress, along with our evaluation, are briefly summarized below. IRS agreed with our recommendation to provide the capability to minimize unauthorized employee access to taxpayer information in the new computer systems now being developed. IRS summarized several of the security and privacy capabilities these systems are to provide. In response to our recommendation to revise the guidelines for IGPs, IRS said it would issue a memorandum to the field updating a similar memorandum issued on September 21, 1989. IRS said the guidance would, among other things, address the need for (1) establishing criteria for selecting returns to be examined and (2) separating the duties of employees who identify returns to be included in the project from those who select the specific returns to be examined.
While this may serve to temporarily heighten field staff awareness of the importance of this issue, we believe that including such guidance in the Internal Revenue Manual would result in a more permanent emphasis on this issue in light of the potential for greater use of IGPs under Compliance 2000. IRS agreed with our recommendation to reconcile cash receipts more often than once a year and said it would consider doing random and unannounced reconciliations in addition to the annual reconciliations. We believe this is an excellent approach. IRS said that it supported the other part of this recommendation calling for it to emphasize in forms, notices, and publications that taxpayers should, whenever possible, pay their tax bills with checks or money orders instead of cash. In response to our recommendation that IRS better inform taxpayers about their responsibility and potential liability for trust fund recovery penalties, IRS said that it had already done a great deal in this area, including placing warnings on tax deposit coupons, on almost 30 forms, and in publications used by business taxpayers, and does not plan future changes in the coupons because it is moving away from the paper coupons and encouraging electronic payments. IRS did say it would consider using special information packets or taxpayer education materials for small businesses to alert taxpayers to this problem. In response to our recommendation that IRS seek ways to alleviate information-handling problems that frustrate taxpayers, IRS said it continually does this as it gathers data through Quality Review Programs. IRS said that as it moves into TSM’s Document Processing System, the capture of images of returns and other tax documents will improve communications with taxpayers. IRS also said that the Taxpayer Ombudsman’s Problem Resolution Program provides recommendations to the Tax Systems Modernization Program for ways to alleviate systemic problems that cause problems for taxpayers. 
IRS disagreed with our recommendation that it provide guidance for IRS employees on how they should handle White House contacts, other than those involving tax checks of potential appointees or routine administrative matters. IRS said that its current procedures regarding third-party contacts who provide information that could lead to an audit or investigation are adequate to cover any contacts from the White House. Those procedures essentially call for IRS field office personnel to evaluate the information provided and decide if an audit or investigation is warranted. We continue to believe that IRS and taxpayers would be better served by specific, tailored guidance on this topic. Retaining only the current procedures for all third-party contacts will (1) allow IRS employees to accept any information from any White House staffer suggesting that an IRS audit or investigation be done, whether or not the information was received through the senior-level channels prescribed by the White House guidance to its employees, and (2) leave the evaluation of that information, and the decision on whether to conduct an audit or investigation, to a relatively low-level IRS employee. IRS supported our recommendation to Congress calling for amending the Internal Revenue Code to allow IRS to inform all of the responsible officers in a business about IRS’ efforts to collect a trust fund recovery penalty from other responsible officers.

As agreed with the Subcommittee, we will send copies of this report to other interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. Copies will be made available to others upon request. The major contributors to this report are listed in appendix VI. If you have any questions, please call me at (202) 512-5407.
The Subcommittee on Treasury, Postal Service and General Government, House Committee on Appropriations, asked us to determine whether IRS has adequate controls and procedures to prevent IRS from abusing taxpayers’ rights. To make this determination, we identified various examples of potential taxpayer abuse that were of concern to the public, Congress, and the media. From these examples, we developed a range of taxpayer abuse issues for which we examined IRS’ procedures, guidelines, and management oversight to determine whether these controls appeared adequate to protect taxpayers from abuse by IRS employees, procedures, or systems. At the outset of our review, we found that IRS had no definition of taxpayer abuse. We discussed the topic of taxpayer abuse with managers of various IRS offices, including the Collection, Examination, and Criminal Investigation Divisions; the Inspection Service; and the Problem Resolution Office. Although some managers offered their opinions as to what situations might be considered “abusive,” none was aware of any specific IRS definition of taxpayer abuse. To get other perspectives on the issue, we contacted a number of groups representing both tax practitioners and taxpayers. These groups included the American Bar Association, the American Institute of Certified Public Accountants, the Tax Executive Institute, the Federation of Tax Administrators, and the National Coalition of IRS Whistleblowers. As was the case with IRS managers, the officials from these groups did not have a standard definition of taxpayer abuse. However, they raised a number of concerns, centering not only on what they believed to be specific instances of IRS employees’ excessive use of discretionary enforcement power, but also on IRS’ systemic problems, which they felt caused harm to taxpayers in general and which we believe could be perceived by taxpayers as abusive.
To assist our data collection efforts regarding taxpayer abuse, we developed a working definition of abuse that encompassed a broad range of situations that were potentially harmful to taxpayers. We defined abuse from the taxpayers’ viewpoint, rather than from IRS’ viewpoint. We then listed various issues related to specific examples of potential abuse that we identified by reviewing recent congressional hearings and reports, newspaper and magazine articles, IRS Problem Resolution Office files, IRS district office and service center congressional correspondence files, and IRS Internal Audit and Internal Security files and reports. Our working definition of taxpayer abuse had three parts that described general categories of potential taxpayer abuse on the part of IRS and its employees. The three categories, as well as related issues of taxpayer abuse, were as follows:

An IRS employee is alleged to have violated a law, regulation, or the IRS rules of conduct, resulting in possible harm to a taxpayer; a related issue is the use of discretionary enforcement power for personal reasons.

An IRS employee aggressively uses discretionary enforcement power in such a way that a taxpayer perceives that he or she is harmed, as do the media, Congress, or the general public; related issues include the use of enforcement power against certain persons who, although not directly responsible for a failure to pay a tax liability, may be technically liable for the tax, such as when an innocent spouse is assessed a joint tax liability or when a company employee is assessed a trust fund recovery penalty.

An IRS computer system fails in such a way that a taxpayer perceives that he or she is abused, as do the media, Congress, or the general public; a related issue is the use of discretionary enforcement power against a taxpayer because IRS has mistakenly assessed the taxpayer for a debt that the taxpayer does not owe.
Within IRS, in addition to the lack of a service-wide definition of taxpayer abuse, we also learned that IRS does not have specific management information to enable the Service to track and measure abuse. Rather, there are files maintained by various IRS offices that may contain taxpayer complaint information, such as congressional correspondence files maintained at the IRS National Office, district offices, and service centers, and Problem Resolution Office files maintained in IRS’ district offices and service centers. After discussions with IRS officials concerning data sources within the Service that we might use to find examples of potential taxpayer abuse, we decided to review three sources in particular: (1) Problem Resolution Office files maintained at each district office and service center, (2) congressional correspondence files maintained at the National Office and at each district office and service center, and (3) Internal Security investigative case files maintained at the National Office. We judgmentally selected and reviewed 421 fiscal year 1992 Problem Resolution Office files and 201 fiscal year 1992 congressional correspondence files from the field locations shown in table I.1. (In table I.1, DO denotes a district office and SC a service center.) In addition, at the National Office we reviewed summaries of all 909 Internal Security investigations closed during fiscal year 1992. From these three sources, we subjectively selected examples of taxpayer complaints that appeared to illustrate various issues within our definition of taxpayer abuse. Initially, we selected 139 examples that we believed indicated potential taxpayer abuse. From those, we further selected 24 that we used as a basis for evaluating IRS’ specific procedures, guidelines, and management oversight to protect against taxpayer abuse.
We did the same for two additional potential examples of taxpayer abuse, one we identified in an IRS Internal Audit report, and a second we included because of extensive media coverage and its sensitivity. Although we did not follow up on each individual example to determine whether these taxpayers were actually abused by IRS, we cited them in our discussions with IRS managers to learn about the range of controls in place to prevent this type of taxpayer abuse. Further, our selection of these examples was intended for illustrative purposes only and did not indicate a frequency of occurrence. In our review, we made no attempt to statistically sample the files that we reviewed because they did not solely represent instances of potential taxpayer abuse. For example, we did not include taxpayer complaints concerning delays in receiving refund checks as an instance of taxpayer abuse. Therefore, we were unable to quantify the extent of potential taxpayer abuse by IRS employees. This was due to both the absence of information on the total universe of situations that may have involved taxpayer abuse and the difficulty of finding specific data concerning instances that could conclusively be defined as taxpayer abuse. As noted above, in our discussions with IRS managers, we used the examples we selected from IRS files to determine whether there were controls in place over IRS operations to prevent taxpayer abuse. Thus, we talked with officials knowledgeable about IRS operations, particularly those of the Collection, Examination, and Criminal Investigation Divisions, to determine the specific processes and procedures currently required in their respective enforcement efforts. In so doing, we attempted to get an understanding of the general controls applicable to these separate operations. The examples we selected, in some instances, enabled us to identify weaknesses in IRS’ current controls and procedures. 
In addition to discussions concerning specific issues and controls, we reviewed documentation related to IRS’ efforts to improve its treatment of taxpayers since we testified on this issue in 1982. We looked at initiatives mandated by Congress, such as the 1988 Taxpayer Bill of Rights, as well as initiatives set forth by IRS in its strategic business plan, such as the Compliance 2000 initiative, in which IRS plans to work closely with taxpayers to aid them in complying with the tax laws. We also reviewed a highly publicized allegation that a taxpayer was abused by IRS because of improper contacts from the White House and FBI. Due to the sensitivity of this allegation, we also looked into IRS’ controls related to contacts by the White House and FBI and determined whether taxpayer abuse actually occurred in this instance. To do this, we discussed the issue of controls with IRS officials and reviewed the related Internal Revenue Manual procedures. We also reviewed a White House Chief of Staff Management Review, an IRS Inspection report and supporting documents, and a Treasury OIG report and supporting workpapers, concerning their respective investigations of the abuse allegations. Finally, we discussed the allegations with officials of the White House, FBI, IRS Inspection Service, Treasury OIG, and representatives of the taxpayer. Because our review overlapped the OIG inquiry, both in terms of the time when the two reviews were being carried out and the issues they addressed, we established a joint working relationship, consistent with the cooperation expected between Inspectors General and GAO under the Inspector General Act of 1978. Through this relationship, we obtained access to the results of and workpapers supporting the OIG’s work, and we provided similar access to pertinent results and workpapers from our work. 
We relied heavily on OIG workpapers and interviews with OIG staff to corroborate information from IRS’ Inspection Service’s report concerning IRS employees’ actions. We did our work from April 1992 through January 1994 at IRS’ National Office; the North Atlantic and Southeast Regions; the Albany, Atlanta, Brooklyn, and Manhattan Districts; and the Atlanta and Brookhaven Service Centers. We also met with White House and FBI officials and with representatives of a taxpayer involved in one of the examples we reviewed. We did our work in accordance with generally accepted government auditing standards. The Acting Commissioner of Internal Revenue provided written comments on a draft of this report, and those comments are reprinted in appendix V. IRS has many operational controls in place to help govern its interactions with taxpayers that should aid in the prevention of taxpayer abuse. In recent years, IRS has also undertaken various initiatives to help improve how it deals with taxpayers. The key elements of IRS’ approach for preventing taxpayer abuse, such as (1) operational controls governing the actions of IRS’ enforcement functions, (2) processes for handling taxpayer complaints, and (3) offices for overseeing IRS’ operations, as well as recent IRS and congressional initiatives to better ensure that taxpayers are treated fairly in their dealings with IRS, are summarized below. IRS has a wide range of operational controls to govern its primary enforcement activities—examination, collection, and criminal investigation. Among these controls are some that IRS considers crucial in its overall efforts to safeguard taxpayers’ rights and prevent abuse. For example, a key control over examination activities is a separation of duties between IRS staff who identify tax returns with potential for a tax change and staff who conduct the actual tax examination. 
A key control over collection activities is a series of tax delinquency notices warning of pending enforcement actions that IRS sends to taxpayers before it actually initiates such actions. For criminal investigations, a key control is the required approval by a management official before IRS criminal investigators initiate such investigations. Specific operational controls and procedures are required when a taxpayer’s return is examined by IRS. Before an examination is done, IRS often has used a computer program to identify returns with potential for tax changes. Some of these computer-identified returns are to be automatically examined, such as those resulting in a refund of $200,000 or more. Others, such as those identified by IRS’ Discriminant Function formula, are to be screened by examination classifiers to further determine those with the greatest potential for tax changes. The returns selected through this screening process would be stored in inventory at the service center until requested by a district office examination manager, who would assign them to either a district office tax examiner or revenue agent to conduct the tax examination. Generally, noncomputer-identified returns, such as referrals from other IRS offices and state tax agencies, would also be (1) further screened by examination classifiers to identify those with the greatest potential for tax changes, (2) stored in inventory until requested by district office examination managers, and (3) assigned to be examined by a district office tax examiner or revenue agent. However, we identified some flaws in the controls for IGPs—a particular type of examination activity involving returns not selected by computer. Controls over IGPs are discussed in our report on page 10. When IRS notifies the taxpayer that his or her return will be examined, the taxpayer is to be provided with IRS Publication 1, “Your Rights as a Taxpayer,” describing the taxpayer’s rights related to the examination process. 
At the start of the examination, IRS examiners are to ask taxpayers if they received Publication 1. IRS Publication 1 informs taxpayers that they have the right to (1) representation, (2) record interviews with IRS personnel, (3) have their personal and financial information kept confidential, (4) receive an explanation of any changes to their taxes, and (5) appeal IRS’ findings through an IRS appeals office or through the court system. The appeals process provides an independent review of IRS examinations and protects against taxpayer abuse by helping to ensure that the taxpayer pays the correct tax. Similar controls and procedures are to be followed when IRS seeks to collect unpaid taxes from taxpayers. For example, IRS is to send taxpayers a series of computer-generated notices before taking any collection enforcement action, thereby enabling taxpayers to voluntarily settle their tax liabilities. IRS also is to send Publication 594, “Understanding the Collection Process,” with its first and last payment delinquency notices. This publication explains taxpayers’ payment alternatives and rights during the collection process, as well as the sequence of enforcement actions that IRS may use if the taxpayers fail to comply. When contacted by IRS collection staff, a taxpayer may seek an installment agreement or submit an offer-in-compromise as alternatives to full payment on demand. If the taxpayer believes that paying the tax would create a hardship, he or she can file an Application for Taxpayer Assistance Order, whereby IRS may agree to allow the taxpayer to defer payment until the taxpayer’s finances improve. If the taxpayer disagrees with the results of IRS’ collection action, he or she may seek an informal administrative review with an IRS manager. Taxpayers who disagree with certain collection actions, such as the assessment of a trust fund recovery penalty, may also pursue a formal appeal through an IRS Regional Director of Appeals or the court system. 
Various controls and procedures are also to be followed by the IRS when a taxpayer is the subject of an IRS criminal investigation. For example, the investigation is to be based on evidence of a possible criminal violation of the Internal Revenue law and it is to be approved by an IRS manager before it is started. At the first meeting between IRS agents and the taxpayer, IRS agents are required to explain the taxpayer’s rights, including the right to representation. If the taxpayer requests representation, the IRS agents are to terminate the meeting. Once the investigation is completed, IRS is required to notify the taxpayer. If IRS plans to recommend prosecution, the taxpayer may seek a conference with an IRS manager to determine the basis for such a recommendation. Prosecution recommendations are to be reviewed and approved by both the IRS District Counsel and the local U.S. Attorney before a case against the taxpayer is presented to a grand jury. Taxpayers have several ways to obtain help if they believe they have been abused by IRS staff. Taxpayers may seek help from supervisors, Problem Resolution Officers (PRO), or the directors of IRS’ local district offices and service centers. They may also complain directly to IRS’ National Office. IRS Publication 1 contains information on filing complaints with supervisors, PROs, and local office directors. Serious complaints involving potential integrity issues are to be referred to IRS’ Internal Security Division for investigation. Complaints of misconduct made against upper-level managers, senior executives, and IRS’ Inspection Service staff are to be referred to the OIG in the Department of the Treasury. IRS has a nationwide Problem Resolution Program, headed by the Taxpayer Ombudsman at the National Office and carried out by PROs in IRS’ 63 district offices and 10 service centers. PROs can help taxpayers who have been unable to resolve their problems after repeated attempts with other IRS staff. 
For example, PROs can help taxpayers who believe (1) their tax accounts are incorrect, (2) a significant item was overlooked, or (3) their rights were violated. PROs can ensure that action is taken when taxpayers’ rights were not protected, correct procedures were not followed, or incorrect decisions were made. PROs can also use authority provided by the Taxpayer Bill of Rights to order that an enforcement action be stopped or other action be taken when a taxpayer faces a significant hardship as a result of an IRS enforcement action. A significant hardship may occur when, as a result of the enforcement action, a taxpayer cannot maintain necessities such as food, clothing, shelter, transportation, or medical treatment. PROs do not resolve technical or legal questions. Such questions, as well as taxpayer complaints of harassment and discourteous treatment by IRS staff, are to be referred to IRS managers. PROs are to refer complaints involving potential employee integrity issues to Internal Security or, if a senior IRS official is involved, to the Treasury OIG. IRS’ Internal Security Division is required to investigate taxpayer complaints involving potential criminal misconduct, such as embezzlement by IRS staff and potential administrative misconduct, such as unauthorized access to a taxpayer’s account. Internal Security is to report its investigative results to IRS management for its use in determining appropriate personnel action. In addition, Internal Security can refer criminal violations to the local U.S. Attorney for prosecution. Internal Security is to refer other allegations of misconduct, such as discourteous treatment of taxpayers, to management officials. When handling these referrals and other less serious taxpayer complaints, supervisors are required to obtain a full explanation from both the taxpayer and employee before deciding how to resolve the problem. 
If they cannot determine how to resolve the problem, supervisors are to refer the unresolved complaints to the PRO. Although IRS’ Internal Audit Division usually neither receives nor investigates taxpayer complaints, in addition to performing its mission of reviewing IRS’ operations, it can review the results of Internal Security investigations. Both types of reviews could identify potential internal control weaknesses, some of which may identify possible taxpayer abuse. When such weaknesses are identified, Internal Audit can recommend that IRS management strengthen the controls in question. Internal Audit findings are to be disseminated to IRS’ district offices, so that similar potential control problems in other offices can be identified and acted upon. Thus, Internal Audit can serve as an important aid to management oversight. The OIG in the Department of the Treasury is to play an oversight role in protecting taxpayers from abuse. Soon after the OIG was established by Congress, allegations of misconduct by IRS officials led the Commissioner of Internal Revenue to transfer staff and funds to the OIG for investigating allegations involving IRS officials above grade 14 of the General Schedule. The OIG also conducts reviews of IRS’ Internal Security and Internal Audit Divisions, and it has the authority to review any IRS activity the Inspector General believes warrants such attention. In the 1980s, both new laws and new IRS initiatives improved taxpayers’ ability to resolve problems with IRS. This has been particularly noticeable since 1988, when Congress passed the Taxpayer Bill of Rights. We believe this legislation, coupled with various IRS initiatives, such as those involving quality management, ethics and integrity, a collection appeals process, and modernizing its computer systems, has improved the potential for fair and reasonable treatment of taxpayers in their dealings with IRS. 
These efforts should also lessen the potential for taxpayer abuse by IRS employees. In 1988, Congress passed the Taxpayer Bill of Rights, which caused IRS to take steps to improve its interaction with taxpayers. The act contained 21 provisions affecting a wide range of issues. For example, it clarified certain basic rights of taxpayers and required IRS to provide taxpayers with a statement of these rights. To fulfill this requirement, IRS developed Publication 1, “Your Rights as a Taxpayer,” which is to be given to all taxpayers who are subject to examination and collection actions. Among other provisions, the act clarifies a taxpayer’s right to representation in dealing with IRS and provides additional methods to resolve disputes over IRS’ interpretation and administration of the tax laws. A key provision of the act authorizes the Taxpayer Ombudsman or any designee of the Ombudsman—who reports only to the Commissioner of Internal Revenue—to issue Taxpayer Assistance Orders to rescind or change enforcement actions that caused or might cause a significant hardship for the taxpayer. Although few of these formal orders have been issued, the authority provided by the act and three key decisions IRS made to implement the act greatly strengthened the ability of the PROs to assist taxpayers. IRS decided to (1) expand the act’s definition of “hardship” to include not only hardships caused by its administration of the tax laws, but all hardships that it could reasonably relieve; (2) provide assistance, when reasonable, to hardship applicants who did not meet IRS’ hardship criteria, but who could be helped, either through the Problem Resolution Program or by another IRS unit; and (3) instruct its employees to initiate hardship applications on behalf of taxpayers when employees encountered situations that might warrant assistance. We discussed IRS’ implementation of this and other provisions of the act in a 1991 report. 
Our report confirmed that IRS had assisted taxpayers who applied for hardship whether or not they met the hardship criteria. IRS statistics showed that over 32,000 taxpayers—about 70 percent of all applicants—had received assistance. (See appendix III for a detailed description of the provisions of the act.) In 1985, IRS established a Commissioner’s Quality Council and began developing a service-wide quality improvement initiative designed to identify and satisfy customers’ needs. Since that time, Internal Revenue Commissioners have defined IRS’ objectives in terms of both increasing customer service and reducing taxpayer burden. As a result of the emphasis on meeting customers’ needs, IRS developed customer service training that focuses on improving staff interaction with taxpayers in an effort to attain greater customer satisfaction and confidence. In addition to customer service training, IRS has also recently conducted customer satisfaction surveys, including surveys of those taxpayers who had been subjected to IRS’ examination and collection actions. Overall, these surveys have shown that there were more respondents who believed that IRS had treated them fairly than respondents who believed that IRS had treated them unfairly. For example, in one survey of taxpayers in general, 32 percent of the respondents gave IRS a high rating for fairly applying the tax laws and 17 percent gave IRS a low rating. In another survey of taxpayers who had been audited by IRS, 50 percent gave IRS a high rating for fair treatment and 16 percent gave IRS a low rating. In a survey of taxpayers who had been subjected to IRS collection action, 42 percent of those who responded gave IRS a high rating for fairness and 28 percent gave IRS a low rating. 
As a continuation of its emphasis on treating taxpayers as customers, IRS has embarked on a service-wide initiative called Compliance 2000, in which IRS staff are to use assistance and education to aid taxpayers in complying with the tax laws. A goal of this initiative is to reduce the need for examination and collection actions against those taxpayers who would voluntarily comply with the tax laws if they fully understood how to do so, thus enabling IRS to concentrate its enforcement efforts against those who intentionally fail to comply with the tax laws. If this initiative has the intended effect, more taxpayers may avoid noncompliance with the tax laws, thus reducing their interaction with IRS and the potential for taxpayer abuse. Congressional hearings in 1989 and 1990 questioned IRS’ overall standards of ethics and integrity. To address these concerns, IRS began a long-term effort to enhance its ethics and integrity programs and to improve staff awareness of integrity issues throughout the Service. As part of this effort, IRS published an Ethics Plan that called for IRS to develop and deliver ethics training to all its employees. As of September 30, 1992, 14,000 IRS managers had completed an ethics training course developed for IRS by the Josephson Institute of Ethics. As of the end of Fiscal Year 1993, IRS had provided ethics training to the remainder of its employees. In addition to developing an Ethics Plan, IRS responded to congressional concerns about whether it could adequately and independently investigate ethical misconduct on the part of its senior employees by permanently transferring 21 staff years and $1.9 million to the OIG of the Department of the Treasury. The OIG planned to use these resources to oversee IRS’ Office of Inspection, investigate allegations of misconduct by IRS senior employees, and conduct special reviews of IRS operations. 
Over time, IRS’ emphasis on ethics and integrity should have a positive impact on how IRS employees conduct themselves when dealing with the public. When IRS collects unpaid taxes, it is to distinguish between those taxpayers who show a sincere effort to meet their tax obligations and those who do not. If full payment is not possible, IRS collection officials are required to consider each of the payment options available to taxpayers, and attempt to find the best way for them to voluntarily pay the taxes they owe. If a taxpayer does not make an attempt to pay a tax bill, IRS may take actions to enforce the notice and demand for payment, such as (1) file a notice of federal tax lien, (2) serve a notice of levy, and (3) seize and sell a taxpayer’s property. IRS collection officials can recommend enforcement actions on the basis of contact with the taxpayer and analysis of his or her income, expenses, and assets. They have discretionary power in carrying out these actions, and their decisions often result as much from their judgment as from the payment history of the taxpayer. In reaching their determinations, collection staff are to consider such issues as whether (1) the taxpayer has a history of unreasonably delaying the collection process, (2) the taxpayer is a tax protestor, and (3) collection of the tax is threatened or in jeopardy. If a taxpayer disagrees with a revenue officer’s collection decision, he or she may raise the issue with the revenue officer’s supervisor. Alternatively, the taxpayer may contact the Problem Resolution Office to complain about collection actions. Problem Resolution officials have the authority to overturn collection decisions when issues of hardship arise. Currently, there is no formal appeals procedure for taxpayers who disagree with IRS’ collection actions, with the exception of cases involving the trust fund recovery penalty, rejected offers-in-compromise, and specified penalty issues. 
One provision of the taxpayer rights legislation introduced in Congress in 1992 and again in 1993 called for a pilot program to study the merits of a formal appeal procedure for taxpayers who disagree with collection enforcement actions. IRS established such a pilot program in the Indianapolis District on March 30, 1992, later expanded it, and is currently evaluating its effectiveness. IRS is gathering data on how often taxpayers appealed IRS’ collection actions, how often its decisions were upheld or reversed, the costs of such a program and its benefits to IRS and taxpayers, and the effects such a program would have on the number of IRS’ collection actions. IRS recently expanded the program to other locations and plans to eventually determine the need for a formal collection appeals process. IRS is currently implementing TSM, which is a long-term strategy to modernize IRS’ computer and telecommunications systems. While some phases of TSM are already underway, it is expected to be fully implemented early next century and should greatly enhance IRS’ capability to serve taxpayers and reduce their burden when dealing with IRS. TSM has already benefited some taxpayers. For example, one aspect of TSM—Electronic Filing—allows taxpayers to file their returns more quickly and accurately and also to receive their refunds more quickly. In the future, TSM is expected to eliminate mailing unnecessary computer generated correspondence to taxpayers who have already responded to prior notices. In addition, with proper controls, by making more information readily available to IRS staff, TSM should reduce the time it takes to answer taxpayers’ questions and resolve taxpayers’ problems, both of which could be a source of frustration and may be perceived by some taxpayers to be a form of abuse. Tax Administration: IRS’ Implementation of the Taxpayer Bill of Rights (GAO/T-GGD-92-09, Dec. 10, 1991). 
Tax Administration: IRS’ Implementation of the 1988 Taxpayer Bill of Rights (GAO/GGD-92-23, Dec. 10, 1991). This testimony and report assessed IRS’ implementation of seven key provisions of the 1988 Taxpayer Bill of Rights and stated that while IRS had successfully implemented them in general, there were areas in which IRS could more consistently treat taxpayers, such as notifying them when IRS cancels installment agreements. IRS Policies and Procedures to Safeguard Taxpayer Rights and the Effects of Certain Provisions of the 1976 Tax Reform Act (Testimony - Apr. 26, 1982). This testimony concluded that while there may have been instances in which IRS violated a taxpayer’s rights, we found no evidence to indicate that such instances were widespread or systemic. IRS Information Systems: Weaknesses Increase Risk of Fraud and Impair Reliability of Management Information (GAO/AIMD-93-34, Sept. 22, 1993). This report identified weaknesses in IRS’ general controls over its computer systems which resulted in various problems, such as unauthorized access to taxpayers’ account information by IRS employees. Tax Systems Modernization: Concerns Over Security and Privacy Elements of the Systems Architecture (GAO/IMTEC-92-63, Sept. 21, 1992). This report raised concerns about the need for IRS to clearly delineate responsibility for protecting the privacy of taxpayer information. Tax Administration: New Delinquent Tax Collection Methods for IRS (GAO/GGD-93-67, May 11, 1993). This report highlighted improvements that IRS could make in its lengthy and rigid collection process for delinquent tax debts. Tax Administration: IRS’ Management of Seized Assets (GAO/T-GGD-92-65, Sept. 24, 1992). This testimony stated that IRS has inadequate controls to protect taxpayer property it seizes and that IRS’ practices for disposing of seized property do not always provide the best return for the taxpayer. Tax Administration: Extent and Causes of Erroneous Levies (GAO/GGD-91-9, Dec. 21, 1990). 
This report showed that IRS initiated over 16,000 erroneous levies against taxpayers in Fiscal Year 1986 and recommended that IRS institute a nationwide levy verification program to significantly reduce the number of erroneous levies. Tax Administration: IRS Can Improve the Process for Collecting 100-Percent Penalties (GAO/GGD-89-94, Aug. 21, 1989). This report analyzed IRS’ process for collecting the 100-percent penalty and recommended several actions IRS should take to make the process more efficient and effective. Tax Administration: IRS Should Expand Financial Disclosure Requirements (GAO/GGD-92-117, Aug. 17, 1992). This report recommended that IRS could better detect and prevent employee conflicts of interest by expanding its financial disclosure requirements. Tax Administration: IRS’ Progress on Integrity and Ethics Issues (GAO/T-GGD-92-62, July 22, 1992). Internal Revenue Service: Status of IRS’ Efforts to Deal With Integrity and Ethics Issues (GAO/GGD-92-16, Dec. 31, 1991). This testimony and report dealt with the progress IRS has made in addressing problems we had identified related to ethics and integrity issues and suggested that IRS make better use of its management information system to monitor disciplinary actions against its employees. IRS’ Efforts to Deal With Integrity and Ethics Issues (GAO/T-GGD-91-58, July 24, 1991). Internal Revenue Service: Employee Views on Integrity and Willingness to Report Misconduct (GAO/GGD-91-112FS, July 24, 1991). This testimony and fact sheet outlined IRS’ efforts, in conjunction with the Treasury Inspector General, to deal with concerns about integrity and ethics at IRS. IRS Data on Investigations of Alleged Employee Misconduct (GAO/T-GGD-89-38, July 27, 1989). Tax Administration: IRS’ Data on Its Investigations of Employee Misconduct (GAO/GGD-89-13, Nov. 18, 1988). 
This testimony and report pointed out various weaknesses with IRS’ Internal Security Management Information System related to the outcomes of employee misconduct investigations and also highlighted IRS’ plans to develop a new and improved management information system. Andrew Macyko, Regional Assignment Manager Robert McKay, Evaluator-in-Charge Richard Borst, Senior Evaluator Bryon Gordon, Evaluator
Pursuant to a congressional request, GAO reviewed whether the Internal Revenue Service (IRS) has adequate controls to prevent taxpayer abuse and whether additional appropriations are needed to strengthen IRS ability to prevent taxpayer mistreatment. GAO found that: (1) although IRS has undertaken several initiatives to prevent taxpayer abuse, evidence of abuse remains; (2) IRS has implemented a wide range of controls, processes, and oversight offices to govern staff behavior in their contacts with taxpayers; (3) IRS needs to better define taxpayer abuse and develop management information about its frequency and nature so that it can strengthen abuse prevention procedures, and identify and minimize the frequency of future abuses, and Congress can better evaluate IRS performance in protecting taxpayers' rights; (4) IRS needs to strengthen its controls and procedures to reduce unauthorized access to computerized tax information by IRS employees, inappropriate selection of tax returns during information gathering projects, embezzlement of taxpayers' cash payments, questionable trust fund recovery penalties, and information-handling problems that contribute to taxpayer frustration; (5) proposed taxpayer protection legislation would aid IRS in providing taxpayers with information needed to better deal with trust fund recovery penalties; (6) the allegation of potential abuse involving possible improper contacts with IRS by White House staff was unfounded; (7) the White House has provided explicit guidance for its staff regarding IRS contacts and IRS should improve its procedures for handling White House contacts; and (8) although Congress may not need to provide additional appropriations to IRS to prevent taxpayer abuse, additional appropriations may be needed to resolve IRS information-handling problems as part of its Tax Systems Modernization (TSM) program.
Power plant developers consider many factors when determining where to locate a power plant, including the availability of fuel, water, and land; access to electrical transmission lines; electricity demand; and potential environmental issues. Often, developers will consider several sites that meet their minimum requirements, but narrow their selection based on economic considerations such as the cost of accessing fuel, water, or transmission lines, or the costs of addressing environmental factors at each specific site. One key requirement for thermoelectric power plants is access to water. Thermoelectric power plants use a heat source to make steam, which is used to turn a turbine connected to a generator that makes electricity. As shown in figure 1, the water used to make steam (boiler water) circulates in a closed loop. This means the same water used to make steam is also converted back to liquid water—referred to as condensing—in a device called a condenser and, finally, moved back to the heat source to again make steam. In typical thermoelectric plants, water from a separate source, known as cooling water, flows through the condenser to cool and condense the steam in the closed loop after it has turned the turbine. Consideration of water availability during the power plant siting process can pose different challenges in different parts of the country because precipitation and, relatedly, water availability vary substantially across the United States. Figure 2 shows the total amount of freshwater withdrawn in the United States as a percentage of available precipitation. Areas where the percentage is greater than 100—where more water is withdrawn than locally renewed through precipitation—are indicative of basins using other water sources transported by natural rivers and manmade flow structures, or may indicate unsustainable groundwater use. 
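As a rough illustration of the indicator described above (freshwater withdrawals as a percentage of locally available precipitation), the following minimal Python sketch computes the percentage and applies the report's interpretation rule for values above 100. The figures, function names, and units below are hypothetical assumptions for illustration, not data from this report.

```python
# Illustrative sketch of the figure 2 indicator; all numbers are hypothetical.

def withdrawal_percentage(withdrawals, available_precipitation):
    """Freshwater withdrawals as a percentage of locally available
    precipitation. Both inputs must use the same volume units
    (e.g., million gallons per day)."""
    return 100.0 * withdrawals / available_precipitation

def basin_interpretation(pct):
    """Per the report, percentages above 100 indicate water transported
    from elsewhere or possibly unsustainable groundwater use."""
    if pct > 100:
        return "imported water or unsustainable groundwater use"
    return "locally renewed supply"

pct = withdrawal_percentage(withdrawals=130.0, available_precipitation=100.0)
print(pct, basin_interpretation(pct))
# 130.0 imported water or unsustainable groundwater use
```

The classification threshold (100 percent) comes directly from the report's description of figure 2; everything else in the sketch is an assumed convenience.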
Power plants can use various types of water for cooling—such as freshwater or saline water—and different water sources, including surface water, groundwater, and alternative water sources. An example of alternative water sources is reclaimed water such as treated effluent from sewage treatment plants. To make siting decisions, power plant developers typically consider the water sources that are available and least costly to use. Fresh surface water is the most common water source for power plants nationally, as shown in table 1. Power plant developers must also consider what cooling technologies they plan to use in the plant. There are four general types of cooling technologies. Traditional cooling technologies that have been used for decades include once-through and wet recirculating cooling systems. Advanced cooling technologies that have focused on reducing the amount of cooling water used are relatively newer in the United States and include dry cooling and hybrid cooling. Specifically: Once-through cooling systems. In once-through cooling systems, large amounts of cooling water are withdrawn from a water body such as a lake, river, or ocean, and used in the cooling loop. As shown in figure 3, the cooling water passes through the tubes of a condenser. As steam in the boiler water loop exits the turbine, it passes over the condenser tubes. This contact with the condenser tubes cools and condenses the steam back into boiler water for reuse. After the cooling water passes through the condenser tubes, it is discharged back into the water body warmer than it was when it was withdrawn. Once-through cooling systems withdraw a significant amount of water but directly consume almost no water. However, because the water discharged back into the water body is warmer, experts believe that once-through systems may increase evaporation from the receiving water body. 
Furthermore, because of concerns about the harm that withdrawals for once-through systems can have on aquatic life—when aquatic organisms are pulled into cooling systems, trapped against water intake screens, or their habitat is adversely affected by warm water discharges—these systems are rarely installed at new plants. Although a number of federal agencies collect data on water, two collect key data that are used to analyze the impacts of thermoelectric power plants and water availability: USGS and EIA. USGS’s mission is to provide reliable scientific information to manage water, energy, and other resources, among other things. USGS collects surface water and groundwater availability data through a national network of stream gauges and groundwater monitoring stations. USGS currently monitors surface and groundwater availability with approximately 7,500 streamflow gauges and 22,000 groundwater monitoring stations located throughout the United States. USGS compiles data and distributes a report every 5 years on national water use that describes how various sectors, such as irrigation, mining, and thermoelectric power plants, use water. USGS data related to thermoelectric power plants include (1) water withdrawal data at the state and county level organized by cooling technology—once-through and wet recirculating; (2) water source—surface or groundwater; and (3) whether water used was fresh or saline. USGS compiles water use data from multiple sources, including state water regulatory officials, power plant operators, and EIA. If data are not available for a particular state or use, USGS makes estimates. EIA’s mission is to provide policy-neutral data, forecasts, and analyses to promote sound policy making, efficient markets, and public understanding regarding energy and its interaction with the economy and the environment. 
In carrying out this mission, EIA collects a variety of energy and electricity data nationwide, about topics such as energy supply and demand. For certain plants producing 100 megawatts or more of electricity, EIA collects data on water withdrawals, consumption, and discharge, as well as some information on water source and cooling technology type. EIA annually collects water use data directly from power plants by using a survey. The variety of state water laws relating to the allocation and use of surface water can generally be traced to two basic doctrines: the riparian doctrine, often used in the eastern United States, and the prior appropriation doctrine, often used in the western United States. Under the riparian doctrine, water rights are linked to land ownership—owners of land bordering a waterway have a right to use the water that flows past the land for any reasonable purpose. In general, water rights in riparian states may not be bought or sold. Landowners may, at any time, use water flowing past the land, even if they have never done so before. All landowners have an equal right to use the water, and no one gains a greater right through prior use. In some riparian states, water use is closely tracked by requiring users to apply for permits to withdraw water. In other states, where water has traditionally not been scarce, water use is not closely tracked. When there is a water shortage, water users share the shortage in proportion to their rights, or the amount they are permitted to withdraw, to the extent that it is possible to determine. Under the prior appropriation doctrine, water rights are not linked with land ownership. Instead, water rights are property rights that can be owned independent of land and are linked to priority and beneficial water use. A water right establishes a property right claim to a specific amount of water—called an allotment.
Because water rights are not tied to land, water rights can be bought and sold without any ownership of land, although the rights to water may have specific geographic limitations. For example, a water right generally provides the ability to use water in a specific river basin taken from a specific area of the river. Water rights are also prioritized—water rights established first generally have seniority for the use of water over water rights established later—commonly described as “first in time, first in right.” As a result, once established, water rights retain their priority for as long as they remain valid. For example, a water right to 100 acre feet of Colorado River water established in 1885 would retain that 1885 priority and allotment, even if the right was sold by the original party who established it. Water rights also must be exercised in order to remain valid, meaning rights holders must put the water to beneficial use or their right can be deemed abandoned and terminated— commonly referred to as “use it or lose it.” When there is a water shortage in prior appropriation states, shortages fall on those who last obtained a legal right to use the water. As a result, a shortage can result in junior water rights holders losing all access to water, while senior rights holders have access to their entire allotment. For some states, the legal framework for groundwater is similar to that of surface water as they use variants of either the riparian or prior appropriation doctrine to allocate water rights. However, in other states, the allocation of groundwater rights follows other legal doctrines, including the rule of capture doctrine and the doctrine of reasonable use. Under the rule of capture doctrine, landowners have the right to all the water they can capture under their land for any use, regardless of the effect on other water users. 
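The two shortage-allocation rules described above—proportional sharing under the riparian doctrine and seniority-based allocation under prior appropriation—can be sketched in a few lines of code. The users, priority years, and allotments below are hypothetical illustrations, not figures from any state's records.

```python
# Hypothetical sketch contrasting the two shortage-allocation rules described
# above. All users, priority years, and allotments are illustrative.

def riparian_allocation(permitted, available):
    """Riparian doctrine: users share a shortage in proportion to the
    amounts they are permitted to withdraw."""
    total = sum(permitted.values())
    factor = min(1.0, available / total)
    return {user: amount * factor for user, amount in permitted.items()}

def prior_appropriation_allocation(rights, available):
    """Prior appropriation: 'first in time, first in right' -- rights with
    earlier priority years are filled in full before junior rights get any."""
    allocation = {}
    remaining = available
    # Sort by priority year (ascending): most senior right first.
    for user, (year, allotment) in sorted(rights.items(), key=lambda kv: kv[1][0]):
        allocation[user] = min(allotment, remaining)
        remaining -= allocation[user]
    return allocation

# 150 acre-feet available against 200 acre-feet of total claims.
permits = {"farm": 100, "city": 60, "plant": 40}
rights = {"farm": (1885, 100), "city": (1910, 60), "plant": (1975, 40)}

print(riparian_allocation(permits, 150))            # every user is cut by 25 percent
print(prior_appropriation_allocation(rights, 150))  # junior rights bear the whole shortage
```

Under the same 25 percent shortage, the riparian rule cuts every user equally, while the prior appropriation rule leaves the most senior right whole and the most junior right with nothing.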
The doctrine of reasonable use similarly affords landowners the right to water underneath their land, provided the use is restricted to an amount necessary for reasonable use. In some cases, permits may be required prior to use and additional regulation may occur if a groundwater source is interconnected with surface water. A number of state agencies may be involved in considering or approving applications to build power plants or to use water in power plants. In some states, a centralized agency considers applications to build new power plants. In other states, applications may be filed with multiple state agencies. State water regulators issue water permits for power plants and other sectors to regulate water use and ensure compliance with relevant state laws and regulations. Public Utility Commissions, or the equivalent, may also have a role in authorizing the development of a power plant. In many states where retail electricity rates are regulated, these commissions are primarily responsible for approving the rates (or prices) electric utilities charge their customers and ensuring they are reasonable. As part of approving rates, these commissions approve utility investments into such things as new power plants and, as a result, may consider whether specific power plant design and cooling technologies are reasonable. Based on figures from EIA’s 2009 Annual Energy Outlook, thermoelectric power plant generating capacity will increase by about 15 percent between 2006 and 2030. Depending on which cooling approaches are used, such an increase could further strain water resources. A variety of additional factors may also affect the availability of water for electricity generation and other uses, as well as the amount of water used to produce electricity. 
Some studies indicate that climate change will result in changes in local temperatures and more seasonal variations, both of which could cause increased levels of water consumption from thermoelectric power plant generation. Climate change may also result in changes in local precipitation and water availability, as well as more and longer droughts in some areas of the country. To the extent that this occurs, power plant operators may need to reduce the use of water for power plant cooling. In addition, some technologies aimed at reducing greenhouse gas emissions, such as carbon capture technologies, may require additional water. The combination of environmental laws, climate change, and the inclusion of new water-intensive air emission technologies may impact water availability and require power plant operators to reduce water use in the future. In addition, since the water inlet structures used at once-through cooling plants can either trap or draw in fish and other aquatic life—referred to as impingement and entrainment—there is increased pressure to reduce the use of once-through cooling at existing plants. Advanced cooling technologies and alternative water sources can reduce freshwater use by thermoelectric power plants, leading to a number of benefits for plant developers; however, incorporating each of these options for reducing freshwater use into thermoelectric power plants also poses certain drawbacks. Benefits of reducing freshwater use may include social and environmental benefits, minimizing water-related costs, as well as increasing a developer’s flexibility in determining where to locate a new plant. On the other hand, drawbacks to using advanced cooling technologies may include potentially lower net electricity output, higher costs, and other trade-offs.
Similarly, the use of alternative water sources, such as treated effluent or groundwater unsuitable for drinking or irrigation, may have adverse effects on cooling equipment, pose regulatory challenges, or be located too far from a proposed plant location to be a viable option. Power plant developers must weigh the trade-offs of these drawbacks with the benefits of reduced freshwater use when determining what approaches to pursue, and must consider both the economic costs over a plant’s lifetime and the regulatory climate. For example, in a water-scarce region of the country where water costs are high and there is significant regulatory scrutiny of water use, a power plant developer may opt for a water-saving technology despite its drawbacks. Advanced cooling technologies under development and in limited commercial use and alternative water sources can reduce the amount of freshwater needed by plants, resulting in a number of benefits to both the environment and plant developers. As shown in table 2, dry cooling can eliminate nearly all the water withdrawn and consumed for power plant cooling. Hybrid cooling systems, depending on design, can reduce water use—generally to a level between that of a wet recirculating system with cooling towers and a dry cooling system. According to the Electric Power Research Institute, hybrid systems are typically designed to use 20-80 percent of the water used for a wet recirculating system with cooling towers. In addition to using advanced cooling technologies, power plant operators can reduce freshwater use by utilizing water sources other than freshwater. Alternative water sources include treated effluent from sewage treatment plants; groundwater that is unsuitable for drinking or irrigation because it is high in salts or other impurities; sea water; industrial water; and water generated when extracting minerals such as oil, gas, and coal.
For example, the oil and gas production process can generate wastewater, which is the subject of research as a possible source of cooling water for power plants. Use of alternative water sources by power plants is increasing in some areas, and two power plant developers we spoke with said they routinely consider alternative water sources when planning new power plants, particularly in areas where water has become scarce, tightly regulated, or both. A 2007 report by the DOE’s Argonne National Laboratory identified at least 50 power plants in the United States that use reclaimed water for cooling and other purposes, with Florida and California having the largest number of plants using reclaimed water. According to the report, the use of reclaimed water at power plants has become more common, with 38 percent of the plants using reclaimed water doing so after 2000. One example of a power plant using an alternative to freshwater is Palo Verde, located near Phoenix, Arizona—the largest U.S. nuclear power plant, with a capacity of around 4,000 megawatts. Palo Verde uses approximately 20 billion gallons of treated effluent annually from treatment plants that serve several area municipalities, comprising over 1.5 million people. Reducing the amount of freshwater needed for cooling leads to a number of social and environmental benefits and may benefit developers by lowering water-related costs and providing more flexibility in choosing a location for a new plant, among other things. Reducing the amount of freshwater used by power plants through the use of advanced cooling technologies and alternative water sources has the potential to produce a number of social and environmental benefits. For example, limiting freshwater use may reduce the impact to the environment associated with withdrawals, consumption, and discharge. Freshwater is in high demand across the United States. 
Reducing freshwater withdrawals and consumption by the electricity sector makes this limited resource more available for additional electricity production or competing uses, such as public water supplies or wildlife habitat. Furthermore, eliminating water use for cooling entirely, such as by using dry cooling, could minimize or eliminate the water discharges from power plants, a possible source of heat and pollutants to receiving water bodies, although regulations limit the amount of heat and certain pollutants that may be discharged into water bodies. By eliminating or minimizing the use of freshwater for cooling, power plant developers may reduce some water-related costs, including the costs associated with acquiring, transporting, treating, and disposing of water. Depending on state water laws, a number of costs may be associated with acquiring water—purchasing a right to use water, buying land with a water source on or underneath it, or buying a quantity of freshwater from a municipal or other source. Eliminating the need to purchase water for cooling by using dry cooling could reduce these water-related expenses. Using an alternative water source, if less expensive than freshwater, could reduce the costs of acquiring water, although treatment costs may be higher. Power plant developers and an expert from a national laboratory told us the costs of acquiring an alternative water source are sometimes less than freshwater, but vary widely depending on its quality and location. In addition to lowering the costs associated with acquiring water, if water use for cooling is eliminated entirely, plant developers may eliminate the need for a pipeline to transport the water, as well as minimize costs associated with treating the water. Water-related costs are one of several costs that power plant developers will consider when evaluating alternatives to freshwater. 
Since the cost of freshwater may rise as demand for freshwater increases, a developer’s ability to minimize power plant freshwater use could become increasingly valuable over time. Minimizing or eliminating the use of freshwater may offer a plant developer increased flexibility in determining where to locate a power plant. According to power plant developers we spoke with, siting a power plant involves balancing factors such as access to fuel, including natural gas pipelines, and access to large transmission lines that carry the electricity produced to areas of customer demand. Some explained that finding a site that meets these factors and also has access to freshwater can be challenging. Power plant developers we spoke with said options such as dry cooling and alternative water sources have offered their companies the flexibility to choose sites without freshwater, but with good access to fuel and transmission. According to power plant developers and an expert from a national laboratory we spoke with, eliminating or lowering freshwater use can lead to other benefits, such as minimizing regulatory hurdles like the need to acquire certain water permits. Furthermore, using a nonfreshwater source may be advantageous in areas with more regulatory scrutiny of or public opposition to freshwater use. Despite the benefits associated with the lower freshwater requirements of advanced cooling technologies, these technologies have a number of drawbacks related to electricity production and costs that power plant developers will have to consider during their decisionmaking process. 
Despite the many benefits advanced cooling technologies offer, both dry cooling and hybrid cooling technologies may reduce a plant’s net energy production to a greater extent than traditional cooling systems—referred to as an “energy penalty.” Energy penalties result in less electricity available outside the plant, which can affect plant revenues, and making up for the loss of this electricity by generating it elsewhere can result in increases in water use, fuel consumption, and air emissions. Energy penalties result from (1) energy consumed to run cooling system equipment, such as fans and pumps, and (2) lower plant operating efficiency—measured as electricity production per unit of fuel—in hot weather due to lower cooling system performance. Specifically, energy penalties include: Energy needed for cooling system equipment. Cooling systems, like many systems in a power plant, use electricity produced at the plant to operate, which results in less electricity available for sale. According to experts we spoke with, because dry cooling systems and hybrid cooling systems rely on air flowing through a condenser, energy is needed to run fans that provide air flow, and the amount of energy needed to run cooling equipment will depend on such factors as system design, season, and region. A 2001 EPA study estimated that for a combined cycle plant, energy requirements to operate a once-through system (pumps) are 0.15 percent of plant output, 0.39 percent of plant output for a wet recirculating system with cooling towers (pumps and fans), and 0.81 percent of plant output for a dry cooled system (fans). Plant operating efficiency and cooling system performance. Plants using a dry cooling component, whether entirely dry cooled or in a hybrid cooled configuration, may face reduced operating efficiency under certain conditions. 
A power plant’s operating efficiency is affected by the performance of the cooling system, among other things, and power plants with systems that cool more effectively produce electricity more efficiently. A cooling system’s effectiveness is influenced both by the design of the cooling system and ambient conditions that determine the temperature of that system’s cooling medium—water in once-through and wet recirculating systems and air in dry cooling systems. In general, the effectiveness of a cooling system decreases as the temperature of the cooling medium increases, since a warmer medium can absorb less heat from the steam. Once-through systems cool steam using water being withdrawn from the river, lake, or ocean. Wet recirculating systems with cooling towers, on the other hand, use the process of evaporation to cool the steam to a temperature that approaches the “wet-bulb temperature”— an alternate measure of temperature that incorporates both the ambient air temperature and relative humidity. In contrast, dry cooled systems transfer heat only to the ambient air, without evaporation. As a result, dry cooled systems can cool steam only to a temperature that approaches the “dry-bulb temperature”—the measure of ambient air temperature measured by a standard thermometer and with which most people are familiar. In general, once-through systems tend to cool most effectively because the temperature of the body of water from which cooling water is drawn is, on average, lower than the wet- or dry-bulb temperature. Moreover, wet-bulb temperatures are generally lower than dry-bulb temperatures, often making recirculating systems more effective at cooling than dry cooled systems. Further, according to one report that we reviewed, greater fluctuations in dry-bulb temperatures seasonally and throughout the day can make dry cooled systems harder to design. 
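The wet-bulb versus dry-bulb distinction above can be made concrete numerically. The sketch below uses Stull's 2011 empirical approximation of wet-bulb temperature—a published formula, but not one drawn from this report—and the ambient conditions chosen are illustrative.

```python
import math

def wet_bulb_stull(temp_c, rh_percent):
    """Approximate wet-bulb temperature using Stull's 2011 empirical fit
    (valid for roughly 5-99 percent relative humidity)."""
    t, rh = temp_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# A hot, dry afternoon (illustrative conditions): a wet recirculating tower
# can cool toward the lower wet-bulb temperature, while a dry system can
# cool only toward the higher dry-bulb temperature.
dry_bulb = 40.0  # deg C, ambient air temperature
rh = 20.0        # percent relative humidity
print(f"dry-bulb: {dry_bulb:.1f} C, wet-bulb: {wet_bulb_stull(dry_bulb, rh):.1f} C")
```

In dry air the wet-bulb temperature sits well below the dry-bulb temperature, which is why evaporative (wet recirculating) systems retain a cooling advantage precisely in the hot, arid regions where dry systems struggle most.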
Dry-bulb temperatures can be especially high in hot, dry parts of the country, such as the Southwest, leading to significant plant efficiency losses during periods of high temperatures, particularly during the summer. According to experts and power plant developers we spoke with, plant efficiency losses may be smaller during other parts of the year when temperatures are lower or in cooler climates. Nevertheless, in practice, lower cooling system performance can result in reduced plant net electricity output or greater fuel use if more fuel is burned to produce electricity to offset efficiency losses. Plant developers can take steps to reduce efficiency losses, such as by installing a larger dry cooling system with additional cooling capability, but such a system will result in higher capital costs. A plant’s total energy penalty will be a combination of both effects described—energy needed for cooling system equipment and the impact of cooling system performance on plant operating efficiency. Energy penalties may result in lost revenue for the plant due to the net loss in electricity produced for a given unit of fuel, especially during the summer when electricity demand and prices are often the highest. Energy penalties may also affect the price consumers pay for electricity in a regulated market, if the cost of the additional fuel needed to produce lost electricity is passed on to consumers by regulators. Finally, energy penalties may affect emissions of pollutants and carbon dioxide if lost output is made up for by an emissions-producing power plant, such as a coal- or natural gas-fueled power plant. This is because additional fuel is burned to produce electricity that offsets what was lost as a result of the energy penalty, and, thus, additional carbon dioxide and other pollutants are released.
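The combination of the two penalty components—cooling-equipment energy use and hot-weather efficiency loss—can be shown with simple arithmetic. The equipment shares below are the 2001 EPA estimates quoted earlier for a combined cycle plant; the 500 megawatt plant size and the hot-day efficiency-loss figures are illustrative assumptions, not results from the studies cited.

```python
# Back-of-the-envelope sketch of the total energy penalty described above,
# combining (1) cooling-equipment energy use and (2) reduced operating
# efficiency. Plant size and efficiency-loss figures are assumptions.

def net_output_mw(gross_mw, equipment_share, efficiency_loss):
    """Electricity left for sale after both penalty components."""
    return gross_mw * (1 - equipment_share) * (1 - efficiency_loss)

GROSS_MW = 500  # hypothetical combined cycle plant

#                       EPA equipment share, assumed hot-day efficiency loss
systems = {
    "once-through":      (0.0015, 0.00),
    "wet recirculating": (0.0039, 0.01),
    "dry cooled":        (0.0081, 0.05),
}

for name, (equip, eff_loss) in systems.items():
    net = net_output_mw(GROSS_MW, equip, eff_loss)
    print(f"{name}: {net:.1f} MW net of {GROSS_MW} MW gross")
```

Even a few percent of lost output on a 500 megawatt plant amounts to tens of megawatts unavailable for sale during peak-price hours, which is why developers weigh this penalty against the water savings.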
Recent studies comparing total energy penalties between cooling systems have used differing methodologies to estimate energy penalties and have reached varying conclusions. For example, a 2001 EPA study estimates the national average, mean annual energy penalties—lower electricity output—for plants operating at two-thirds capacity with dry cooling to be larger than those with wet recirculating systems with cooling towers. In this study, EPA estimated penalties of 1.7 percent lower output for a combined cycle plant with a dry system compared to a wet recirculating system with a cooling tower, and 6.9 percent lower output for a fossil fueled plant run fully on steam, such as a coal plant. Similarly, a separate study conducted by two DOE national labs in 2002 estimated larger annual energy penalties for hypothetical 400 megawatt coal plants in multiple regions of the country retrofitted to dry cooling—these penalties ranged from 3 to 7 percent lower output on average for a plant retrofitted with a dry cooled system compared to a plant retrofitted with a wet recirculating system with a cooling tower. Under the hottest 1 percent of temperature conditions during the year, this energy penalty rose to between 6 and 10 percent lower output for plants retrofitted to dry cooling compared with those retrofitted to a wet recirculating system with cooling towers. However, some experts we spoke with told us energy penalties are higher in retrofitted plants than when a dry cooled system is designed according to the unique specifications of a newly built plant. A 2006 study conducted for the California Energy Commission estimated electricity output and other characteristics for new, theoretical combined cycle natural gas plants in four climatic zones of California using different cooling systems.
The study found that dry cooling systems result in significant water savings, but that plants using wet cooling systems generally experience higher annual net electricity output, as shown in table 3, and lower fuel consumption. Furthermore, while the study estimates that plant capacity to produce electricity is limited on hot days for both types of cooling systems, the hot day capacity of the dry cooled plant to produce electricity is up to 6 percent lower than the wet recirculating plant with cooling tower. Power plant developers can take steps to address the energy penalties associated with dry cooling technology by designing their plants with larger dry cooled systems capable of performing better during periods of high ambient temperatures. Alternatively, they can use a hybrid technology that supplements the dry system with a wet recirculating system with a cooling tower during the hottest times of the year. However, in making this decision, developers must weigh the trade-offs between the costs associated with building and operating a larger dry cooled system or a hybrid system and the benefits of lowering their energy penalties. According to some power plant developers and experts we spoke with, another drawback to using dry and hybrid cooling technologies is that these technologies typically have higher capital costs. Experts, power plant developers, and studies indicated that while capital costs for each system can vary significantly, as a general rule, capital costs are lowest for once-through systems, higher for wet recirculating systems, and highest for dry cooling. Some told us the capital costs of hybrid systems—as a combination of wet recirculating and dry cooling systems—generally fall in between these two systems. 
Furthermore, according to some of the experts we spoke with and studies we reviewed, the capital costs of a plant’s cooling system vary based on the specific characteristics of a given plant, such as the costs of the cooling towers, the circulating water lines to transport water to and around the plant, pumps, fans, as well as the extent to which a dry cooled system is sized larger to offset energy penalties. As with energy penalties, studies estimating capital costs for dry and hybrid systems have used differing methodologies and provide varying estimates of capital costs. One study by the Electric Power Research Institute estimated dry cooling system capital costs for theoretical 500 megawatt combined cycle plants in five climatic locations to be 3.6 to 4.0 times that of wet recirculating systems with cooling towers. Experts from an engineering firm we spoke with also explained that capital costs for dry and hybrid cooled systems can be many times that of a wet recirculating system with cooling towers. They estimated that, in general, installing a dry system on a 500 megawatt combined cycle plant instead of a wet recirculating system with a cooling tower could increase baseline capital costs by $9 to $24 million, depending on location—an increase in baseline capital costs that is 2.0 to 5.1 times higher than if a wet recirculating system with a cooling tower were used. They estimated dry cooling to be more costly on a 500 megawatt coal plant, with dry cooling resulting in an increase in baseline capital costs that was 2.6 to 7.0 times higher than if a wet recirculating system with a cooling tower were used.
With respect to annual costs, according to experts we spoke with and studies we reviewed, annual cost differences between alternative cooling technologies and traditional cooling technologies are variable and may depend on such factors as the costliness of obtaining and treating water, the extent to which cooling water is reused within the system, the need for maintenance, the extent to which energy penalties result in lost revenue, and the extent to which a cooling system is sized larger to offset energy penalties. Estimates from four reports we reviewed calculated varying cooling system annual costs for a range of plant types and locations using different methodologies, and found annual costs of dry systems to generally range from one and a half to four times those of wet recirculating systems with cooling towers. One of these studies, however, in examining the potential for higher water costs, found that dry cooling could be more economical on an annual basis in some areas of the country with expensive water or become more economical in the future if water costs were to rise. Furthermore, an expert from an engineering firm we spoke with explained that cooling system costs are only one component of total plant costs, and that while one cooling system may be expensive relative to another, its impact on total plant costs may not be as significant in a relative sense if the plant’s total costs are high. There may be other drawbacks to dry cooled technology, including space and noise considerations. Towers, pumps, and piping for both dry cooled and wet cooled systems with cooling towers require substantial space, but according to experts we spoke with, dry cooled systems tend to be larger. For example, according to one expert we spoke with, a dry cooled system for a natural gas combined cycle plant that derives one-third of its electricity from the steam cycle could be almost as large as two football fields. 
Moreover, according to others, the large size of dry cooling systems needed for plants that derive all of their electricity production from the steam cycle—for example, nuclear and coal plants—may make the use of dry cooling systems less suitable for these kinds of power plants. Experts we spoke with explained that because full steam plants produce all of their electricity by heating water to make steam, they require larger cooling systems to condense the steam back into usable liquid water. As a result, the size of a dry cooling system for a full steam plant could be three times that of a dry cooling system for a similarly-sized combined cycle plant that only produces one-third of its electricity from the steam cycle. Furthermore, according to one expert we spoke with, the most efficient type of dry cooled technology may not be approved for use with certain nuclear reactors, because of safety concerns. Finally, the motors, fans, and water of both dry cooled and wet recirculating systems with cooling towers may create noise that disturbs plant employees, nearby residents, and wildlife. Noise-reduction systems may be used to address this concern, although they introduce another cost trade-off that plant developers must consider. Despite the growth in plants using alternative water sources, there are a number of drawbacks to using this water source instead of freshwater. While some of these drawbacks are similar to those faced by power plants that use freshwater, they may be exacerbated by the lower quality of alternative water sources. These drawbacks include adverse effects to cooling equipment, regulatory compliance issues, and access to alternative water sources, as follows. Water used in power plants must meet certain quality standards in order to avoid adverse effects to cooling equipment, such as corrosion, scaling, and the accumulation of micro or macrobiological organisms. 
While freshwater can also cause adverse effects, the generally lower quality of alternative water sources make them more likely to result in these effects. For example, effluent from a sewage treatment plant may be higher in ammonia than freshwater, which can cause damage to copper alloys and other metals. High levels of ammonia and phosphates can also lead to excessive biological growth on certain cooling tower structures. Chemical treatment is used to mitigate such adverse effects of alternative water sources when they occur, but this treatment results in additional costs. According to one power plant operator we spoke with, alternative water sources often require more extensive and expensive treatment than freshwater sources, and it can be a challenging process to determine the precise makeup of chemicals needed to minimize the adverse effects. Power plant developers using alternative water sources may face additional regulatory challenges. Depending on their design, power plants may discharge water directly to a water source, such as a surface water body, or release water into the air through cooling towers. As a result, power plants must comply with a number of water quality and air regulations, and the presence of certain pollutants in alternative water sources can make compliance more challenging. For example, reclaimed water from sewage treatment plants is treated to eliminate bacteria and other contaminants that can be harmful to humans. Similarly, water associated with minerals extraction may contain higher total dissolved and suspended solids and other constituents, which could adversely affect the environment if discharged. 
Addressing these issues through the following actions entails additional costs to power plant operators: (1) chemical treatment prior to discharging water to another water source, (2) discharging water to a holding pond unconnected to another water source for evaporation, or (3) eliminating all liquid discharges by, for example, evaporating all the water used at the plant and disposing of the resulting solid waste into a facility such as a landfill. As with freshwater sources, the proximity of an alternative water source may be a drawback that power plant developers have to consider when pursuing this option. Power plant developers wishing to use an alternative water source must either build the plant near that source—which can be challenging if that water source is not also near fuel and transmission lines—or pay the costs of transporting the water to the power plant’s location, such as through a pipeline. Furthermore, two power plant developers we spoke with told us that certain alternative water sources, like treated effluent, are in increasing demand in some parts of the country, making them more challenging or costly to obtain than in the past. A power plant developer may want to reduce the use of freshwater for a number of reasons, such as when freshwater is unavailable or costly to obtain, to comply with regulatory requirements, or to address public concern. However, power plant developers we spoke with told us that when considering the viability of an advanced cooling technology or alternative water source, they must weigh the trade-offs between the water savings and other benefits these alternatives offer with the drawbacks to their use.
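That weighing can be sketched as a simple annualized-cost comparison. Every figure below—capital costs, water volumes, energy penalties, and prices—is a hypothetical assumption, not a number from the studies cited; the sketch is meant only to show how rising water costs can tip the balance toward a water-saving technology.

```python
# Illustrative annualized-cost comparison between wet recirculating and dry
# cooling for a hypothetical plant. All cost and volume figures are assumed.

def annual_cost(capital, water_af, water_price, penalty_mwh,
                fixed_charge_rate=0.10, power_price=50.0):
    return (capital * fixed_charge_rate   # annualized capital cost
            + water_af * water_price      # acquiring/treating cooling water
            + penalty_mwh * power_price)  # revenue lost to the energy penalty

# Hypothetical alternatives for one plant.
WET = dict(capital=12e6, water_af=4000, penalty_mwh=5000)   # wet recirculating
DRY = dict(capital=36e6, water_af=0, penalty_mwh=25000)     # dry cooled

def cost(plant, water_price):
    return annual_cost(plant["capital"], plant["water_af"], water_price,
                       plant["penalty_mwh"])

# As water prices ($/acre-foot) rise, the balance tips toward dry cooling.
for price in (0, 500, 1000, 1500):
    cheaper = "dry" if cost(DRY, price) < cost(WET, price) else "wet"
    print(f"water at ${price}/acre-foot: {cheaper} cooling is cheaper annually")
```

In this toy setup the wet system wins when water is cheap, but once the water price passes a break-even point the dry system's higher capital and penalty costs are outweighed by its avoided water purchases.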
For example, in a water-scarce region of the country where water costs are high and there is much regulatory scrutiny of water use, a power plant developer may determine that, despite the drawbacks associated with the use of advanced cooling technologies or alternative water sources, these alternatives still offer the best option for getting a potentially profitable plant built in a specific area. Furthermore, according to power plant developers we spoke with, these decisions have to be made on a project-by-project basis because the magnitude of benefits and drawbacks will vary depending on a plant’s type, location, and the related climate. For example, dry cooling has been installed in regions of the country where water is relatively plentiful, such as the Northeast, to help shorten regulatory approval times and avoid concerns about the adverse impacts that other cooling technologies might have on aquatic life.

In deciding what cooling technology to use, power plant developers evaluate the net economic costs of alternatives like dry cooling or an alternative water source—their savings compared with their costs—over the life of a proposed plant, as well as the regulatory climate. Experts we spoke with told us this involves consideration of both capital and annual costs, including how expected water savings compare with costs related to energy penalties and other factors. Anticipated future increases in water-related costs could prompt a developer to use a water-saving alternative. For example, a recent report by the Electric Power Research Institute estimates that a power plant’s economic trade-offs vary considerably depending on its location and that high water costs could make dry cooling less expensive annually than wet cooling.

The National Energy Technology Laboratory is funding research and development projects aimed at minimizing the drawbacks of advanced cooling technologies and alternative water sources.
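The life-cycle evaluation described above, comparing a cooling alternative's extra capital cost against its discounted annual savings, can be sketched as a simple net-present-value calculation. All dollar figures, the discount rate, and the plant life below are hypothetical illustrations, not figures from this report:

```python
def npv_of_dry_cooling(capital_premium, annual_water_savings,
                       annual_energy_penalty_cost, life_years, discount_rate):
    """Net present value of choosing dry cooling over wet cooling.
    A positive result favors dry cooling; all inputs are assumptions."""
    annual_net = annual_water_savings - annual_energy_penalty_cost
    # Present value of a level annual cash flow over the plant's life
    pv_annual = annual_net * (1 - (1 + discount_rate) ** -life_years) / discount_rate
    return pv_annual - capital_premium

# Hypothetical 30-year plant, 7% discount rate, $30M extra capital cost for dry cooling
low_water_cost = npv_of_dry_cooling(30e6, 2.5e6, 1.5e6, 30, 0.07)
high_water_cost = npv_of_dry_cooling(30e6, 7.5e6, 1.5e6, 30, 0.07)
print(f"Low water cost:  NPV = ${low_water_cost / 1e6:.1f}M")   # negative: wet cooling wins
print(f"High water cost: NPV = ${high_water_cost / 1e6:.1f}M")  # positive: dry cooling wins
```

Consistent with the Electric Power Research Institute finding cited above, the same plant design can favor either technology depending only on local water costs.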
In 2008, the laboratory awarded close to $9 million to support research and development of projects that, among other things, could improve the performance of dry cooled technologies, recover water used to reduce emissions at coal plants for reuse, and facilitate the use of alternative water sources in cooling towers. Such research endeavors, if successful and deemed economical, could alter the trade-off analysis that power plant developers conduct in favor of nontraditional alternatives to cooling.

The seven states that we contacted—Alabama, Arizona, California, Georgia, Illinois, Nevada, and Texas—vary in the extent to which they consider the impacts that power plants will have on water when they review power plant water use proposals. Specifically, these states have differences in water laws that may influence their oversight of power plant water use. Some also have other regulatory policies and requirements specific to power plants and water use. Still other states require additional levels of review that may affect their oversight of how power plants use water.

Differences in water laws in the seven states we contacted influence the steps that power plant developers need to take to obtain approval to use surface water or groundwater and provide for varying levels of regulatory oversight of power plant water use. Table 4 shows the differences in water laws and water permitting for the seven states we contacted. With regard to surface water—the source of water most often used for power plant cooling nationally—all of the seven states we contacted except Alabama required power plant developers to obtain water permits through the state agency that regulates the water supply. However, the states requiring permits varied in how the permits were obtained and under what circumstances.
For example, in general, under Illinois law, water supply permits are necessary only if the surface water is defined as a public water body, which covers most major navigable lakes, rivers, streams, and waterways as defined by the Illinois Office of Water Resources. For any other surface water body, such as smaller rivers and streams, no such permit is required. To obtain a permit to use water in a power plant in Illinois, developers must file an application with the Illinois Office of Water Resources. In determining whether to issue a permit, the Office of Water Resources requires the applicant to address public comments and evaluates USGS streamflow data to determine whether restrictions on water use are needed. In some instances, such as to support fish and other wildlife, the state may designate a minimum level of flow required for a river or stream and restrict the amount of water that can be used by a power plant or other water user when that minimum level is reached. The Director of the Office of Water Resources told us that the office has sometimes encouraged power plant operators to establish backup water sources, such as onsite reservoirs, for use when minimum streamflow levels are reached and water use is restricted.

In contrast, under riparian law in Georgia and Alabama, landowners have the right to the water on and adjacent to their land, and both states require users who have the capacity to withdraw (Alabama) or actually withdraw (Georgia) an average of more than 100,000 gallons per day to provide information to the state concerning their usage and legal rights to the water. However, this requirement is applied differently in the two states. Alabama requires that water users register their planned water use for informational purposes with the Alabama Office of Water Resources but does not require users to obtain a permit for the water withdrawal or to conduct an analysis of the impact of the proposed water use.
In contrast, Georgia requires water users to apply for and receive a water permit from the Georgia Environmental Protection Division. In determining whether to issue a permit for water use, this Georgia agency analyzes the potential effect of the water use on downstream users and others in the watershed. State water regulators in Georgia told us they have never denied an application for water use in a power plant due to water supply issues because there has historically been adequate water available in the state. For more details on Georgia’s process for approving water use in power plants, see appendix IV.

Groundwater laws in the selected states we reviewed also varied and affected the extent to which state regulators provided oversight of power plant water use. In four of the seven states—Alabama, California, Illinois, and Texas—groundwater is largely unregulated at the state level, and landowners may generally freely drill new wells and use groundwater as they wish unless restricted by local entities, such as groundwater conservation districts. However, in three of the seven states we contacted—Arizona, Georgia, and Nevada—state-issued water permits are required for water withdrawals in some or all regions of the state. For example, in Nevada, which has 256 separate groundwater basins and in which most of the in-state power generation uses groundwater for cooling, state water law follows the doctrine of prior appropriation. A power plant developer or other entity wanting to acquire a new water right for groundwater must apply for a water permit with the Nevada Division of Water Resources. In evaluating the application for a water permit, the Division determines whether water is available—referred to as unappropriated—whether the proposed use will conflict with existing water rights or domestic wells, and whether the use of the water is in the public interest.
In determining whether groundwater is available, the Division of Water Resources compares the amount of water that replenishes a groundwater basin annually with the existing committed groundwater rights in that basin; if replenishment exceeds the committed rights, unappropriated water may be available for appropriation. In two cases where groundwater was being considered for possible power plants, the State Engineer, the official in the Division of Water Resources who approves permits, either denied the application or expressed reservations over the use of groundwater for cooling. For example, in one case, the State Engineer noted that large amounts of water should not be used in a dry state like Nevada when an alternative, like dry cooling, that is less water intensive was available.

In contrast, in Texas, where 8 percent of in-state electricity capacity uses groundwater for cooling, state regulators do not issue groundwater use permits or routinely review a power plant’s or other user’s proposed use of the groundwater. Texas groundwater law is based on the “rule of capture,” meaning landowners, including developers of power plants that own land, have the right to the water beneath their property. Landowners can pump any amount of water from their land, subject to certain restrictions, regardless of the effect on other wells located on adjacent or other property. Although Texas state water regulators do not issue water permits for the use of groundwater, in more than half the counties in Texas, groundwater is managed locally through groundwater conservation districts, which are generally authorized by the Texas Legislature and ratified at the local level to protect groundwater. These districts can impose their own requirements on landowners to protect water resources, including requiring a water use permit and, in some districts, placing restrictions on the amount of water used or the location of groundwater wells.
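Nevada's availability determination described above reduces to a simple annual water balance. A minimal sketch, with hypothetical basin figures:

```python
def unappropriated_water(annual_recharge_afy, committed_rights_afy):
    """Nevada-style availability check: water is available for new
    appropriation only if annual basin recharge exceeds the groundwater
    rights already committed (figures in acre-feet per year)."""
    return max(annual_recharge_afy - committed_rights_afy, 0.0)

# Hypothetical basin: 50,000 acre-feet/year recharge, 46,500 already committed
available = unappropriated_water(50_000, 46_500)
print(f"Unappropriated water: {available:,.0f} acre-feet/year")
# A new power plant requesting more than this amount could be denied or curtailed
```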
Oversight of water use by proposed power plants in the selected states may be influenced by regulatory policies and requirements that formally emphasize minimizing freshwater use by power plants and other new industrial users. With respect to regulatory policies, of the seven states, California and Arizona have established formal policies or requirements to encourage power plant developers to consider alternative cooling methods and reduce the amount of freshwater used in a proposed power plant. Specifically:

California, a state that has faced constrained water supplies for many years, established a formal policy in 1975 that requires applicants seeking to use water in power plants to consider alternative water sources before proposing the use of freshwater. More recently, the California Energy Commission, the state agency that reviews and approves power plant developer applications, reiterated the 1975 policy in its 2003 Integrated Energy Policy Report, stating that the commission would approve power plants using freshwater for cooling only in limited circumstances. Furthermore, state regulators at the Commission told us that in discussing potential new power plant developer applications, commission staff encourage power plant developers to consider using advanced cooling technologies, such as dry cooling, or alternative water sources, such as effluent from sewage treatment plants. Between January 2004 and April 2009, California regulators approved 10 thermoelectric power plants—3 that will use dry cooling; 6 that will use an alternative water source, such as reclaimed water; and 2 that will use freshwater purchased from a water supplier, such as a municipal water district, for power plant cooling. Of 20 additional thermoelectric power plant applications pending California Energy Commission approval, developers have proposed 11 plants that plan to use dry cooling, 8 plants that plan to use an alternative water source, and 1 that plans to use freshwater for cooling.
For more details on California’s process for approving water use in power plants, see appendix III.

In Arizona, where there is limited available surface water and where groundwater is commonly used for power plant cooling, the state has requirements to minimize how much water may be used by power plants. Specifically, in Active Management Areas—areas the state has determined require regulatory oversight of groundwater use—the state requires that developers of new power plants of 25 megawatts or larger that use groundwater in a wet recirculating system with a cooling tower design the plants to reuse the cooling water to a greater extent than is common in the industry. Plants must cycle water through the cooling loop at least 15 times before discharging it, whereas, according to an Arizona public utility official, outside of Active Management Areas plants would generally cycle water 3 to 7 times. These additional cycles result in water savings, since less water must be withdrawn from ground or surface water sources to replace discharges, but they can require plant operators to undertake more costly and extensive treatment of the cooling water and to more carefully manage the plant cooling equipment to avoid mineral buildup. Arizona officials also told us they encourage the use of alternative water sources for cooling and have informally encouraged developers to consider dry cooling. According to Arizona state officials, no plants with dry cooling have been approved to date in the state, and, due mostly to climatic conditions, dry cooling is probably too inefficient and costly to be a viable option there at present. For details on Arizona’s process for approving water use in power plants, see appendix II.

In contrast to California and Arizona, water supply and public utility commission officials in the other five selected states told us their states had not developed official state policies regarding water use by power plants.
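The water savings from Arizona's cycles-of-concentration requirement follow from the standard cooling-tower water balance (makeup = evaporation + blowdown, blowdown = evaporation / (cycles - 1), with drift losses neglected). A minimal sketch, using an assumed evaporation rate rather than any figure from this report:

```python
def makeup_and_blowdown(evaporation_gpm, cycles):
    """Standard cooling-tower water balance, drift neglected:
    blowdown = evaporation / (cycles - 1)
    makeup   = evaporation + blowdown
    """
    blowdown = evaporation_gpm / (cycles - 1)
    makeup = evaporation_gpm + blowdown
    return makeup, blowdown

# Illustrative evaporation rate for a mid-size plant (assumed figure)
evap = 1000.0  # gallons per minute

m5, b5 = makeup_and_blowdown(evap, 5)     # typical 3-7 cycles outside Active Management Areas
m15, b15 = makeup_and_blowdown(evap, 15)  # Arizona Active Management Area minimum

print(f"5 cycles:  makeup {m5:.0f} gpm, blowdown {b5:.0f} gpm")
print(f"15 cycles: makeup {m15:.0f} gpm, blowdown {b15:.0f} gpm")
```

At these assumed rates, raising the cycles from 5 to 15 cuts makeup withdrawals from 1,250 to roughly 1,071 gallons per minute and blowdown from 250 to roughly 71, while concentrating dissolved minerals in the circulating water, which is why the additional treatment described above is needed.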
For example, Alabama, a state where water has traditionally been plentiful, has not developed a specific policy related to power plant water use or required the use of advanced cooling technologies or alternative water sources. Additionally, the state does not require that power plant developers and other proposed water users seek a water use permit; rather, power plant operators are only required to register their maximum and average expected water use with the state and report annual usage. State officials told us that they require this information so that they can know how much water is being used but that their review of power plant water use is limited. Officials from the state’s Public Service Commission, responsible for certifying the development of power plants, said their office does not have authority to regulate a utility’s water use and, therefore, generally does not analyze how a proposed power plant will affect the water supply. Rather, their office focuses on the reasonableness of power plant costs. Similarly, Illinois, where most power plants use surface water for cooling and water is relatively plentiful, has not developed a policy on water use by thermoelectric power plants or required the use of advanced cooling technologies or alternative water sources, according to an official at the Office of Water Resources. However, the Illinois Office of Water Resources does require power plant operators, like other proposed water users, to apply for water permits for use of surface water from the major public water bodies.

Three of the states we selected—Arizona, Nevada, and California—conduct regulatory proceedings that consider water availability, in addition to determining whether to issue a water permit, while the other states do not.
In Arizona, water use for power plants is subject to three reviews: (1) the process for a prospective water user to obtain a water permit, if required; (2) review by a committee of the Arizona Corporation Commission, known as the Arizona Power Plant and Transmission Line Siting Committee; and (3) review by the Commission as part of an overall evaluation of the plant’s feasibility and its potential environmental and economic impacts. Both the Committee and the Commission evaluate water supply concerns, along with other environmental issues, and determine whether to recommend (Committee) or issue (Commission) a Certificate of Environmental Compatibility, which is necessary for the plant to be approved. Water supply concerns have been a factor in denying such a certificate for a proposed power plant. For example, in 2001, the Commission denied an application to build a new plant over concerns that groundwater withdrawals for cooling water would not be naturally replenished and, thereby, would reduce surface water availability, which could adversely affect the habitat for an endangered species. For more details on Arizona’s processes for approving water use in power plants, see appendix II.

Similarly, in Nevada and California, several state agencies may play a role in the approval of water use and the type of cooling technology used by power plants. In Nevada, although water permits for groundwater and surface water are issued by the State Engineer, the Public Utilities Commission oversees final power plant approval under the Utility Environmental Protection Act. Even if the power plant developer has obtained a water permit, water use could play a role in the review process if the plant’s use of the cooling water or technologies has environmental effects that need to be mitigated. Additionally, as in a number of states where electricity rates are regulated, the Public Utilities Commission could consider the effect of dry cooling on electricity rates.
In California, the California Energy Commission reviews all aspects of power plant certifications, including issuing any water permits and approvals for cooling technologies. According to a California Energy Commission official, during this process the Commission works with other state and local agencies to ensure their requirements are met.

The other four states we contacted do not conduct reviews of how power plants will affect water availability beyond issuing a water use permit or certificate of registration. Public utility regulators in Illinois, Texas, Alabama, and Georgia told us they had no direct role in regulating water use or cooling technologies in power plants. Officials from the Public Utility Commission of Texas noted that since they do not regulate electricity rates in most of the state, the Commission plays no role in the approval of power plants in most areas. In other areas, they told us, water use and cooling technologies were not reviewed by the Commission. Similarly, in Illinois—a state that does not regulate electricity rates—an official from the Illinois Commerce Commission stated that the agency had no role in reviewing water use or cooling technologies for power plants. While Georgia and Alabama are states that regulate electricity rates, officials from their Public Service Commissions—the state agencies regulating electricity rates—noted that they focus on economic considerations of power generation and not the impact that a power plant might have on the state’s water supply.

State water regulators rely on data on water availability collected by USGS’s streamflow gauges and groundwater studies and monitoring stations when they are evaluating developers’ proposals for new power plants. In contrast, state water regulators do not routinely rely on federal data on water use when evaluating power plant applications, although these data are used by water and industry experts, federal agencies, and others to analyze trends in the industry.
However, these users of federal data on water use identified a number of limitations that they believe reduce the data’s usefulness.

State water regulators, federal agency officials, and water experts we spoke with agreed that federal data on water availability are important for multiple purposes, including for deciding whether to approve power plant developer proposals for water permits and water rights. Most state water regulators we contacted explained that they rely on federal data on water availability, particularly streamflow and groundwater data collected by USGS, for permitting decisions and said these data helped promote more informed water planning. For example, water regulatory officials from the Texas Commission on Environmental Quality—the agency that evaluates surface water rights applications from prospective water users in Texas—told us that streamflow data collected by USGS are a primary data source for their water model, which predicts how water use by power plants and others applying for water rights will affect state water supplies and existing rights holders. USGS’s network of streamflow gauges and groundwater monitoring stations provides the only national data of their kind on water availability over long periods. As a result, state officials told us that these data are instrumental in predicting how much water is likely to be available in a river under a variety of weather conditions, such as droughts. For example, state regulators in Georgia and Illinois told us that they rely on USGS streamflow data to determine whether or not to establish special conditions on water withdrawal permits, such as minimum river flow requirements that affect the amount of cooling water a power plant can withdraw during periods when water levels in the river are low.
State water regulators in Nevada also told us they rely on a number of data sources, including USGS groundwater studies, to determine the amount of time necessary for water to naturally refill a groundwater basin. This information helps them ensure that water withdrawals for power plants and others are sustainable and do not risk depleting a groundwater basin. State regulators told us that while federal water availability data are a key input into their decision-making process for power plant permits, they also rely on a number of other sources of data, as shown in table 5. These include data that they themselves collect and data collected by universities; private industry, such as power plant developers; and various other water experts.

Some state regulators and water experts we spoke with expressed concern about streamflow gauges being discontinued, which they said may make evaluating trends in water availability and water planning more difficult in the future. Without accurate data on water availability, decisions about water planning and allocation of water resources—including power plant permitting decisions—may be less informed, according to regulators and experts. For example, an official from Arizona told us that a reduction in streamflow gauges would adversely affect the quality of the state’s water programs and that state budget constraints have made it increasingly difficult to allocate the necessary state funds to ensure cooperatively funded streamflow gauges remain operational. Similarly, an official from the Texas Commission on Environmental Quality told us that if particular streamflow gauges were discontinued, water availability records would be unavailable to update existing data for their water availability models—which are relied upon for water planning and permitting decisions—and alternative data would be needed to replace these missing data.
USGS officials told us that the cumulative number of streamflow gauges with 30 or more years of record that have been discontinued has increased, as seen in figure 8, due to budget constraints.

Unlike federal data on water availability, federal data on water use are not routinely relied on by the state officials we spoke with to make regulatory decisions but instead are used by a variety of data users to identify trends in the industry. Specifically, data users we spoke with, including water experts, representatives of an environmental group, and federal agency officials, identified the following benefits of the water use data collected by USGS and EIA:

USGS Data on Water Use. A number of users of federal water data we spoke with told us that USGS’s 5-year data on thermoelectric power plant water use are the only centralized source of long-term, national data for comparing water use trends across sectors, including for thermoelectric power plants. As a result, they are valuable data for informing policymakers and the public about the state of water resources, including changes to water use among power plants and other sectors. For example, one utility representative we spoke with said that USGS data are important for educating the public about how power plants use water and the fact that while thermoelectric power plants withdraw large amounts of water overall—39 percent of U.S. freshwater withdrawals in 2000—their water consumption as an industry has been low—3 percent of U.S. freshwater consumption in 1995. Furthermore, some state water regulators told us that USGS’s water use data allow them to compare their state’s water use to that of other states and better evaluate and plan around their state’s water conditions. An Arizona Department of Water Resources official, for example, told us that USGS’s water use data are essential for understanding how water is used in certain parts of the state where the Department has no ability to collect such data.
EIA Data on Water Use. EIA’s annual data are the only federally collected national data available on water use and cooling technologies at individual power plants, and data users noted that EIA’s national data were useful for analyzing the water use characteristics of individual plants, as well as for comparing water use across different cooling technologies. For example, officials at USGS and the National Energy Technology Laboratory told us that they use EIA data to research trends in current and future thermoelectric power plant and other categories of water use. Specifically, USGS uses EIA’s data on individual plant water use, in addition to data from state water regulators and individual power plants, to develop county and national estimates of thermoelectric power plant water use. USGS officials explained that in some of their state offices, such as California and Texas, agency staff primarily use EIA and other federal data to develop USGS’s 5-year thermoelectric power plant water use estimates. Officials from USGS also explained that other USGS state offices use EIA data on water use to corroborate their estimates of thermoelectric power plant water withdrawals and to identify the cooling technology utilized by power plants. Similarly, officials at the National Energy Technology Laboratory have extensively used EIA’s data on individual power plant water withdrawals and consumption to develop estimates of how freshwater use by thermoelectric power plants will change from 2005 to 2030.

However, data users we spoke with also identified a number of shortcomings in the federal data on water use collected by USGS and EIA that limit their ability to conduct certain types of industry analyses and their understanding of industry trends. Specifically, they identified the following issues, along with others that are detailed in appendix V.

Lack of comprehensive data on the use of advanced cooling technologies.
Currently, EIA does not systematically collect information on power plants’ use of advanced cooling technologies. In the EIA database, for example, data on power plants’ use of advanced cooling technologies are incomplete and inconsistent—not all power plants report information on their use of advanced cooling technologies or do so in a consistent way. Lacking these national data, it is not possible without significant additional work to comprehensively identify how many power plants are using advanced cooling technologies, where they are located, and to what extent the use of these technologies has reduced the use of freshwater. According to a study by the Electric Power Research Institute, although the total number of dry cooled plants is still small relative to plants using traditional cooling systems, the use of advanced cooling technologies is becoming increasingly common. As these technologies become more prevalent, we believe that information about their adoption would help policymakers better understand the extent to which advanced cooling technologies have been successful in reducing freshwater use by power plants and identify those areas of the country where further adoption of these technologies could be encouraged.

EIA officials told us they formally coordinate with a group of selected stakeholders every 3 years to determine what changes are needed to EIA data collection forms. They told us they have not previously collected data on advanced cooling technologies because EIA’s stakeholder consultation process had not identified these as needed data. However, these officials acknowledged that EIA has not included USGS as a stakeholder during this consultation process and were unaware of USGS’s extensive use of their data.
In discussing these concerns, EIA officials also said that they did not expect that collecting this information would be too difficult and agreed that such data could benefit various environmental and efficiency analyses conducted by other federal agencies and water and industry experts. Furthermore, in discussing our preliminary findings, EIA officials also said they believed that EIA could collect these data during its triennial review process by, for example, adding a reporting code for these types of cooling systems. However, they noted that they would have to begin the process soon to incorporate it into their ongoing review.

Lack of comprehensive data on the use of alternative water sources. Our review of federal data sources indicates that they cannot be used to comprehensively identify plants using alternative water sources. EIA routinely reports data on individual plant water sources, but we found that these data do not always identify whether the source of water is an alternative source. Similarly, while the USGS data identify thermoelectric power plants using ground, surface, fresh, and saline water, they do not identify those using alternative water sources, such as reclaimed water. While a goal of USGS’s water use program is to document trends in U.S. water use and provide information needed to understand the nation’s water resources, USGS officials said budget constraints have limited the water use data the agency can provide and have led USGS to discontinue distribution of data on one alternative water source—reclaimed water. According to two studies we reviewed, use of some alternative water sources is becoming more common, and, based on our discussions with regulators and power plant developers, there is much interest in this nonfreshwater option, particularly in areas where freshwater is constrained.
As use of these alternative water sources becomes more prevalent, we believe that information about how many plants are using these resources, and in what locations, could help policymakers better understand how the use of alternative water sources by power plants can replace freshwater use and help identify those areas of the country where such substitution could be further encouraged.

Incomplete water and cooling system data. Though part of EIA’s mission is to provide data that promote public understanding of energy’s interaction with the environment, EIA does not collect data on the water use and cooling systems of two significant components of the thermoelectric power plant sector. First, in 2002, EIA discontinued its reporting of water use and cooling technology information for nuclear plants. According to data users we spoke with, this is a significant limitation in the federal data on water use and makes it more difficult for them to monitor trends in the industry. For example, USGS officials said that the lack of these data makes developing their estimates for thermoelectric power plant water use more difficult because they either have to use older data or call plants directly for this information, which is resource intensive. EIA officials told us they discontinued collection of data from nuclear plants due to priorities stemming from budget limitations. Second, EIA does not collect water use and cooling system data from operators of some combined cycle thermoelectric power plants. Combined cycle plants represented about 25 percent of thermoelectric capacity in 2007 and constituted the majority of thermoelectric generating units built from 2000 to 2007.
According to EIA officials, water use and cooling technology data are not collected from operators of combined cycle plants that are not equipped with duct burning technology—a technology that injects fuel into the exhaust stream from the combustion turbine to provide supplemental heat to the steam component of the plant. However, these plants use a cooling system and water, as do other combined cycle and thermoelectric power plants whose operators are required to report to the agency. As a result, the data EIA currently collects on water use and cooling systems for thermoelectric power plants are incomplete. EIA officials acknowledged that not collecting these data results in an incomplete understanding of water use by these thermoelectric power plants; however, budget limitations have thus far precluded collection of such data. According to a senior EIA staff member in the Electric Power Division, since speaking with GAO, the agency has begun exploring options for collecting these data as part of its current data review process. Discontinued distribution of thermoelectric power plant water consumption data. One of the stated goals of USGS’s water use program is to document trends in U.S. water use, but officials told us that a lack of funding has prompted the agency to discontinue distribution of data on water consumption for thermoelectric power plants and other water users. These USGS officials told us they would like to restart distribution of the data on water consumption by thermoelectric power plants and other water users if additional funding were made available, because such data can be used to determine the amount of water available for reuse by others. Similarly, some users of federal water data told us that not having USGS data on consumption limits their and the public’s understanding of how power plant water consumption is changing over time, in comparison to other sectors. 
They said that the increased use of wet recirculating technologies, which directly consume more water but withdraw significantly less than once-through cooling systems, has changed thermoelectric power plant water use patterns. In a 2002 report, the National Research Council recommended that USGS’s water use program be elevated from one of water use accounting to water science: research and analysis to improve understanding of how human behavior affects patterns of water use. Furthermore, the council’s report concluded that statistical analysis of explanatory variables, like cooling system type or water law, is a promising technique for helping determine patterns in thermoelectric power plant water use. The report suggested these and other approaches could help USGS improve the quality of its water use estimates and the value of the water data it reports. USGS has proposed a national water assessment with the goal of, among other things, addressing some of the recommendations made by the National Research Council report. USGS officials also told us such an initiative would make it possible to address some of the limitations in USGS water use data identified by water experts and others, such as reporting data on water consumption and by hydrologic code. While much of the authority for regulating water use resides at the state level, the federal government plays an important role in collecting and distributing information about water availability and water use across the country that can help promote more effective management of water resources. However, the lack of collection and reporting of some key data related to power plant water use limits the ability of federal agencies and industry analysts to assess important trends in water use by power plants, compare them to other sectors, and identify the adoption of new technologies that can reduce freshwater use. 
Without this comprehensive information, policymakers have an incomplete picture of the impact that thermoelectric power plants will have on water resources in different regions of the country and will be less able to determine what additional activities they should encourage for water conservation in these areas. Moreover, although both EIA and USGS seek to provide timely and accurate information about the electricity sector’s water use, they have not routinely coordinated their efforts in a consistent and formal way. As a result, key water data collected by EIA and used by USGS have been discontinued or omitted and important trends in the electricity sector have been overlooked. EIA’s ongoing triennial review of the data it collects about power plants and the recent passage of the Secure Water Act, which authorizes funding for USGS to report data on water use to Congress, provide a timely opportunity to address gaps in federal data collection and reporting and improve coordination between USGS and EIA in a cost-effective way. We are making seven recommendations. 
Specifically, to improve the usefulness of the data collected by EIA and better inform the nation’s understanding of power plant water use and how it affects water availability, we recommend that the Administrator of EIA consider taking the following four actions as part of its ongoing review of the data it collects about power plants:
- add cooling technology reporting codes for alternative cooling technologies, such as dry and hybrid cooling, or take equivalent steps to ensure these cooling technologies can be identified in EIA’s database;
- expand reporting of water use and cooling technology data to include all significant types of thermoelectric power plants, particularly by reinstating data collection for nuclear plants and initiating collection of data for all combined cycle natural gas plants;
- collect and report data on the use of alternative water sources, such as treated effluent and groundwater that is not suitable for drinking or irrigation, by individual power plants; and
- include USGS and other key users of power plant water use and cooling system data as part of EIA’s triennial review process.
To improve the usefulness of the data collected by USGS and better inform the nation’s understanding of power plant water use and how it affects water availability, we recommend that the Secretary of the Interior consider:
- expanding efforts to disseminate available data on the use of alternative water sources, such as treated effluent and groundwater that is not suitable for drinking or irrigation, by thermoelectric power plants, to the extent that this information becomes available from EIA; and
- reinstating collection and distribution of water consumption data at thermoelectric power plants. 
To improve the overall quality of data collected on water use from power plants, we recommend that EIA and USGS establish a process for regularly coordinating with each other, water and electricity industry experts, environmental groups, academics, and other federal agencies, to identify and implement steps to improve data collection and dissemination. We provided a draft of this report to the Secretary of the Interior and to the Secretary of Energy for review and comment. The Department of the Interior, in a letter dated September 29, 2009, provided written comments from the Assistant Secretary for Water and Science. These comments are reprinted in appendix VI. In her letter, the Assistant Secretary agreed with GAO’s recommendations and noted the importance of improving water use data, including data on water consumption at thermoelectric power plants. The letter noted that USGS plans to reinstate data collection on water consumption as future resources allow and will expand efforts to disseminate data on alternative water use as information becomes available from EIA. In addition, USGS plans to coordinate with EIA to establish a process to identify and implement steps to improve and expand water use data collection and dissemination by the two agencies. In response to our request for comments from the Department of Energy, we received emails from the audit liaisons at the National Energy Technology Laboratory and the EIA. The laboratory’s comments noted that the report accurately described the energy-water nexus as it relates to power plants and accurately documented the current state of power plant cooling technologies. These comments stressed the importance of completing a full assessment of the energy-water relationship in the future, especially in light of climate change regulations. The laboratory also provided technical comments, which we incorporated as appropriate. EIA provided technical comments, which we incorporated as appropriate. 
We are sending copies of this report to interested congressional committees; the Administrator of the Energy Information Administration; the Secretaries of Energy and the Interior; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-3841 or [email protected] or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. At the request of the Chairman of the House Committee on Science and Technology, we reviewed (1) technologies and other approaches that can help reduce freshwater use by power plants and what, if any, drawbacks there are to implementation; (2) the extent to which selected states consider water impacts of power plants when reviewing power plant development proposals; and (3) the usefulness of federal water data to experts and state regulators who evaluate power plant development proposals. We focused our evaluation on thermoelectric power plants, such as nuclear, coal, and natural gas plants using a steam cycle. We did not consider the water supply issues associated with hydroelectric power, since the process through which these plants use water is substantially different from that of thermoelectric plants (e.g., water is used as it passes through a dam but is not directly consumed in the process). We also focused the review on water used during the production of electricity at power plants, and did not include water issues associated with extracting fuels used to produce electricity. 
To understand technologies and other approaches that can help reduce freshwater use by power plants and their drawbacks, we reviewed industry, federal, and academic studies on advanced cooling technologies and alternative water sources that discussed their benefits, such as reduced freshwater use, and what, if any, drawbacks their implementation entails. These included studies with information on power plants’ use of water and the drawbacks of nonfreshwater alternatives conducted by the Electric Power Research Institute, the Department of Energy’s National Energy Technology Laboratory, and others. We discussed these trade-offs with various experts, including power plant and cooling system manufacturers, such as GEA Power Cooling Inc., General Electric, Siemens, and SPX Cooling Technologies; other industry groups and consultants, such as the Electric Power Research Institute, Maulbetsch Consulting, Nalco, and Tetra Tech; an engineering firm, Black & Veatch; and federal, national laboratory, and academic sources. To get a user perspective on these different technologies and alternative water sources, we met with power plant operators, including Arizona Public Service Company, Calpine, Georgia Power Company, and Sempra Generation. We also spoke with representatives from and reviewed reports prepared by other National Laboratories, such as the Department of Energy’s Argonne National Laboratory, to understand related research activities concerning water and electricity. To better understand how the differences in cooling technologies and heat sources used by power plants affect power plant configuration and design, we toured three power plant facilities in Texas: Comanche Peak (nuclear, once-through cooling), Limestone (coal, wet recirculating with cooling towers), and Midlothian (natural gas combined cycle, dry cooling). 
To determine the extent to which selected states consider water impacts of power plants when reviewing power plant development proposals, we conducted case study reviews of three states: Arizona, California, and Georgia. These states were selected because of their historic differences in water availability, differences in water law, high energy production, and large population centers. We did not attempt to determine whether states’ efforts were reasonable or effective; rather, we only described what states do to consider water impacts when making power plant siting decisions. For each of these case study states, we met with state water regulators and power plant developers to understand how water planning and permitting decisions are approached from both a regulatory and private industry perspective. We also met with water research institutions and other subject matter experts to understand current and future research related to water impacts of power plants and the extent to which these research endeavors help inform power plant development proposals and regulatory water permitting decisions. Specifically, in California we met with the California Department of Water Resources; the California Energy Commission; the California State Water Resources Control Board; the San Francisco Bay Regional Water Quality Control Board; and the U.S. Geological Survey’s (USGS) California Water Science Center. In Georgia we met with the Georgia Environmental Protection Division; the Georgia Public Service Commission; the Georgia Water Resources Institute; the Metropolitan North Georgia Water Planning District; the U.S. Army Corps of Engineers, South Atlantic Division; and the USGS Georgia Water Science Center. 
In Arizona we met with the Arizona Corporation Commission; the Arizona Department of Environmental Quality; the Arizona Department of Water Resources; the Arizona Power Plant and Transmission Line Siting Committee; the Arizona Office of Energy, Department of Commerce; the Arizona Water Institute; and the USGS Arizona Water Science Center. In addition, we reviewed state water laws and policies for thermoelectric power plant water use, selected power plant operator proposals to use water, and state water regulators’ water permitting decisions. We also reviewed selected public utility commission dockets and testimonies describing various power plant siting decisions to understand what, if any, water issues were addressed. To broaden our understanding of how states consider the water impacts of power plants when reviewing power plant development proposals, we supplemented our case studies by conducting interviews and reviewing documents from four additional states: Nevada and Alabama, which share watersheds with the case study states, and Illinois and Texas, which are large electricity-producing states with sizable population centers. For each of these four states, we spoke with the primary state water regulatory agencies—the Alabama Office of Water Resources, the Illinois Office of Water Resources, the Nevada Division of Water Resources, and the Texas Commission on Environmental Quality—to understand how state water regulators consider the impacts of power plant operators’ proposals to use water. In Texas, additional discussions were held with the Public Utility Commission of Texas; the Texas Water Development Board; the University of Texas; and the USGS Texas Water Science Center to further understand how water supply issues and energy demand are managed in Texas. 
In Alabama, we held additional discussions with officials from the Alabama Public Service Commission and the Alabama Department of Environmental Management to learn more about how Alabama’s state water regulators and power plant operators manage water supply and energy demand. In Nevada, we held a discussion with an official from the Public Utilities Commission of Nevada to determine how it evaluates cooling technologies and water issues in plant siting certification proceedings. We also contacted the Illinois Commerce Commission. Finally, to determine how useful federal water data are to experts and state regulators who evaluate power plant development proposals, we reviewed data and analysis from the Energy Information Administration (EIA), USGS, and the Department of Energy’s National Energy Technology Laboratory and analyzed how the data were being used. We also conducted interviews with federal agencies, including the Bureau of Reclamation; EIA; Environmental Protection Agency; Tennessee Valley Authority; U.S. Army Corps of Engineers; and USGS to understand whether each organization also collected water data and their opinions about the strengths and limitations of EIA and USGS data. We spoke with several regional offices for the Bureau of Reclamation, including the Lower Colorado and Mid-Pacific offices, to understand federal water issues in California, Arizona, and Nevada. 
In addition, to understand how valuable federal water data are to experts and state regulators who evaluate power plant development proposals to use water, we conducted interviews and reviewed documents from state water regulators and public utility commissions, as well as water and electricity experts at environmental and water organizations, such as the Pacific Institute and Environmental Defense Fund; at universities such as the Georgia Institute of Technology; Southern Illinois University, Carbondale; and the University of Maryland, Baltimore County; and experts from industry, national laboratories, and other organizations and universities previously mentioned. We also contacted other electricity groups, including the North American Electric Reliability Corporation and the National Association of Regulatory Utility Commissioners, to get a broader understanding of how the electricity industry addresses water supply issues. We conducted this performance audit from October 2008 through October 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Arizona, with a population of 6.5 million, was the 16th most populous state in the country in 2008 and was one of the fastest growing states, growing at a rate of 2.3 percent from 2007 to 2008. Most of the land in Arizona is relatively dry; therefore, water for electricity production is limited. For 2007, Arizona accounted for 2.7 percent of U.S. 
net electricity generation, ranking it 13th, with most generation coming from coal (36 percent); natural gas (34 percent); nuclear (24 percent); and renewable sources, such as hydroelectric (6 percent), although the state has a strong interest in developing solar and other renewable sources. Arizona relies on three water sources for electricity production: (1) surface water, including the Colorado River; (2) groundwater; and (3) effluent. Arizona water law varies depending on the source and the user’s location, specifically: Surface water. The use of surface water in Arizona is determined by the doctrine of prior appropriation. The Arizona Department of Water Resources issues permits to use surface water statewide, with the exception of water from the Colorado River. The federal government developed water storage and distribution via a series of canals to divert water from the Colorado River to southern Arizona, and the Bureau of Reclamation issues contracts for any new water entitlements related to Colorado River water, in consultation with the Arizona Department of Water Resources. Groundwater. The use of groundwater depends on its location. Because some areas receive seasonal rain and snow, average annual precipitation can vary by location, from 3 to over 36 inches. The state established five regions where groundwater is most limited, known as Active Management Areas. Permits to use groundwater in these five areas are coordinated through the Arizona Department of Water Resources, which provides several permitting options for power plants. Outside Active Management Areas, the state subjects groundwater to little regulation or monitoring and generally only requires users to submit a well application to the Department of Water Resources. Effluent. Effluent is owned by the entity that generates it until it is discharged into a surface water channel. 
The owner has the right to put effluent to beneficial use or convey it to another entity, such as a power plant, that will put it to beneficial use. However, once it is discharged from the pipe, generally into a surface water body, such as a river, it is considered abandoned and subject to laws governing surface water. Arizona has no overall statewide policy on the use of water in thermoelectric power plants. However, in Active Management Areas, the state requires developers of newer power plants with a generating capacity of 25 megawatts or larger to use groundwater in a wet recirculating system with a cooling tower and to cycle water through the cooling loop at least 15 times before discharging it. An official of an Arizona public utility noted that it was more common to cycle water 3 to 7 times outside of Active Management Areas. Before a power plant developer can begin constructing a power plant with a generating capacity of 100 megawatts or larger, it must go through a two-step certification process and a permitting process, as follows: The first step of the certification process involves public hearings before the Arizona Power Plant and Transmission Line Siting Committee, made up of representatives from five state agencies and six additional members appointed by the Arizona Corporation Commission. Although the Line Siting Committee is not required to evaluate water use unless the plant will be located within an Active Management Area, it typically considers water rights, water availability for the life of the power plant, and the environmental effects of groundwater pumping around the plant. Committee members told us they often ask about the planned water sources and whether alternative water sources and cooling technologies are available. 
If the plant will be located within an Active Management Area, a representative of the Department of Water Resources serving on the Committee takes the lead in evaluating the plant’s potential adverse impacts on the water source, including reviewing state data or U.S. Geological Survey (USGS) studies that document the status and health of the proposed water source. A representative from the Arizona Department of Environmental Quality serving on the Committee considers the plant’s potential adverse effects on water quality. Based on this information, as well as the proposed plant’s feasibility and its potential environmental and economic impacts, the Committee issues a recommended Certificate of Environmental Compatibility, if appropriate. In the second step of the certification process, the Arizona Corporation Commission reviews the power plant developer’s application to ensure there is a balance between the state’s need for energy and the plant’s cost and potential environmental impacts, including water quality, water supply, ecological, and wetlands impacts. The Commission can accept, deny, or modify the Certificate of Environmental Compatibility that was recommended by the Line Siting Committee and has denied some certificates. The Commission places the burden on the applicant to demonstrate that the proposed water supply is sustainable and how any water quality impacts will be mitigated. The Commission does not collect or review additional water data or conduct quality checks on the data provided by the power plant developers. The permitting process applies to both water supply and water quality. With respect to water supply, when required, power plant developers who plan to use surface water in most areas of the state or groundwater in an Active Management Area must obtain a water use permit from the Department of Water Resources. 
When applying for a permit, power plant developers are required to provide information on the amount of water they will use, the source, points of diversion and release, and how the power they generate will be used. For groundwater in an Active Management Area, users are strictly limited to a total volume of water permitted for withdrawal and are subject to annual reporting and an analysis of the impact on other wells. According to an official at the Department of Water Resources, the Department has extensive data on available groundwater for each Active Management Area to assist in determining the effects of groundwater use. With respect to water quality, power plant developers must obtain permits regulating water quality from the Department of Environmental Quality. Further, power plants discharging into federally regulated waters also need a National Pollutant Discharge Elimination System permit that covers effluent limitations and sets discharge requirements. This program is intended to ensure that discharges to surface waters do not adversely affect the quality and beneficial uses of such water. Between January 2004 and July 2009, Arizona approved three new power plants, two of which are simple cycle natural gas plants that do not need water for cooling. The third plant is a concentrating solar thermal plant using a wet recirculating system with cooling towers. According to an official from the Arizona Department of Water Resources, once the plant begins operating, it will use 3,000 acre-feet of water annually from groundwater and surface water, under contract from an Irrigation District. Between 1999 and 2002, a large number of applications for power plants in Arizona were filed, most of which were approved. However, at least one plant was denied a Certificate of Environmental Compatibility due to a water supply concern—the potential loss of habitat for an endangered species from possible groundwater depletion. 
Approved plants used a variety of water sources for cooling, including recycled wastewater, surface water through arrangements with the Central Arizona Project, and groundwater, both directly used and from conversion of agricultural land. No dry-cooled power plants have been approved in Arizona, according to state officials. State officials told us dry cooling is too inefficient and costly, but that it may be considered in the future if water shortages become more acute. As of January 2009, California had the nation’s largest population—an estimated 38.3 million people—and grew at a rate of 1.1 percent annually from 2008 to 2009. California has significant variations in water availability, with a long coastline; several large rivers, particularly in the north; mountainous areas that receive substantial snowfall; and arid regions, particularly the Mojave Desert in southeastern California. Statewide, California averages 21.4 inches of rain annually, but has suffered significant droughts for the past three years. For 2007, California accounted for 5.1 percent of U.S. net electricity generation, ranking it 4th nationally. California generates electricity primarily from natural gas (55 percent); nuclear (17 percent); and renewable energy sources, primarily hydroelectric, wind, solar, and geothermal (25 percent). California imports 27 percent of its electricity from other states. California water law depends on whether the water is surface water or groundwater, specifically: Surface water. The use of surface water is subject to both the riparian and appropriative rights doctrines. No permit is needed to act upon riparian surface water rights, which result from ownership of land bordering a water source, and are senior to most appropriative rights. Appropriative rights, on the other hand, must be acquired through the State Water Resources Control Board. Applicants for appropriative rights must show, among other things, that the water will be put to beneficial use. 
Groundwater. The majority of California’s groundwater is unregulated. Additionally, California does not have a comprehensive groundwater permit process in place, except for groundwater that flows through subterranean streams, which is permitted by the State Water Resources Control Board. California has several policies that directly and indirectly address how thermoelectric power plants can use water. Specifically: California’s State Water Resources Control Board, as the designated state water pollution control agency and issuer of surface water rights, established a policy in 1975 that states that the use of fresh inland waters for power plant cooling will only be approved when it is demonstrated that the use of other water supply sources or other methods of cooling would be environmentally undesirable or economically unsound. Under this policy, freshwater is considered the last resort for power plant cooling in California. Since that time, according to officials we spoke with, the Board has encouraged the use of alternative sources of cooling water and alternative cooling technologies. The California Energy Commission (CEC), the state’s principal energy policy and planning organization, in 2003, reiterated the 1975 policy and further required developers to consider whether zero-liquid discharge technologies should be used to reduce water use unless it can be shown that the use of these technologies would be environmentally undesirable or economically unsound. Under these policies, dry cooling and use of alternative water for cooling would be the preferred alternatives. The State Water Resources Control Board discourages the use of once-through cooling in power plants due to potential harm to aquatic organisms. The agency is considering a state policy to require power plants using this technology to begin using other cooling technologies or retire from service. California has a centralized permitting process for new large power plants, including thermoelectric power plants. 
Developers constructing new power plants with a generating capacity of 50 megawatts or larger must apply for certification with the CEC, the lead state agency for ensuring proposed plants meet requirements of the California Environmental Quality Act and generally overseeing the siting of new power plants. The CEC coordinates the reviews of other state environmental agencies, such as the State Water Resources Control Board, and issues all required state permits (air permits, water permits, etc.). Prior to issuing the permits needed to construct a new power plant, the CEC conducts an independent assessment, with public participation, of each proposed plant’s environmental impacts; public health and safety impacts; and compliance with federal, state, and local laws, ordinances, and regulations. As part of its review, CEC staff analyze the effect on other water users of power plant developers’ proposed use of water for cooling and other purposes, access to needed water supplies throughout the life of the plant, and the plant’s impact on the proposed water source and the state’s water supply overall. The CEC also ensures power plant developers have obtained the required water supply agreements; analyzed the feasibility of alternative water sources and cooling technologies; and addressed water supply, water quality, and wastewater disposal impacts. The CEC may require implementation of various measures to mitigate the impacts of water use, if it identifies problems. The CEC’s goal is to complete the entire certification process in 12 months, but public objections, incomplete application submittals, staff shortages, and limited budgets sometimes delay the process. The CEC evaluates several sources of water data before certifying plant applicants’ water use. These include: the developer’s proposals; data from the Department of Water Resources’ groundwater database on water availability and water quality; U.S. 
Geological Survey data on water availability through its streamflow and groundwater monitoring programs and any specific basin studies; the State Water Resources Control Board’s information on surface and groundwater quality; and computer groundwater models that analyze the long-term yield of the basin.

With respect to water quality, the CEC coordinates the issuance of permits relating to water quality for new power plants, but the State Water Resources Control Board sets overall state policy. The Board operates under authority delegated to it by the U.S. Environmental Protection Agency to implement certain federal laws, including the Clean Water Act, as well as authority provided under state laws designed to protect water quality and ensure that the state’s water is put to beneficial uses. Nine Regional Water Boards are delegated responsibility for implementing the statewide water quality control plans and policies, including setting discharge requirements for permits for the National Pollutant Discharge Elimination System Program and issuing the permits.

Since 2004, most power plants the CEC has approved or is currently reviewing plan to use dry cooling or a wet recirculating system that uses an alternative water source, as shown in table 6. According to a state official we spoke with, no plants approved to be built in the last 25 years have used once-through cooling technology. Over the last 7 years, the CEC has also commissioned, or been involved in, substantial research into the use and possible effects of using alternative cooling technologies.

In 2008, Georgia ranked 9th in population among states, with 9.7 million people, and had the 4th fastest growing population in the U.S. between the years 2000 and 2007. Georgia is historically water rich, receiving approximately 51 inches of precipitation annually, but recent droughts and growing population have prompted additional focus on water supply and management strategies.
Georgia ranked 8th in total net electricity generation in 2007, accounting for approximately 3.5 percent of net electricity generation in the United States. Coal and nuclear power are the primary fuel sources for electricity in Georgia, with coal-fired power plants providing more than 60 percent of electricity output.

Georgia is a regulated riparian state, meaning that the owners of land adjacent to a water body can choose when, where, and how to use the water. The use must be considered reasonable relative to a competing user, with the courts responsible for resolving disputes about reasonable use. Since the late 1970s, Georgia law has required any water user who withdraws more than an average of 100,000 gallons per day to obtain a withdrawal permit from the Georgia Environmental Protection Division.

Georgia does not have a policy or guidance specifically addressing thermoelectric power plants’ water use. However, in response to recent droughts and population growth, the state adopted its first statewide water management plan in 2008. State water regulators we spoke with said they expect the new state water plan to consider how future power generation siting decisions align with state water supplies.

Before power plant developers can begin construction, they may be required to obtain certification from the Georgia Public Service Commission and relevant permits from offices such as the Georgia Environmental Protection Division, as follows:

Georgia Public Service Commission. Georgia Power Company, the state’s investor-owned utility, is fully regulated by the Public Service Commission and must obtain a certificate of public convenience and necessity prior to constructing new power plants. Other power plant developers, including municipality- and cooperatively-owned power plants, are not subject to certification.
Public Service Commission officials explained that during the certification process, they balance the need for the new plant and its costs, but they do not consider the impact a plant will have on Georgia’s water supply. However, these officials explained that, in their capacity to ensure utilities charge just and reasonable rates, they could consider the economic impact of using an alternative water source or advanced cooling technology, should a plant propose to use one.

Georgia Environmental Protection Division. Any entity seeking to use more than 100,000 gallons of water per day, including power plant developers, must obtain a permit from the Georgia Environmental Protection Division. The Division analyzes the proposed quantity of withdrawals and the water source and determines whether the withdrawal amounts and potential effects for downstream water users are acceptable. In some instances, the Division may place special conditions on power plants to ensure adequate water availability, such as requiring on-site reservoirs or groundwater withdrawals for water use during droughts. In making its decisions, the Georgia Environmental Protection Division reviews the plant’s application and hydrologic data from a number of sources. Water withdrawal applications include many factors, in addition to withdrawal amounts and sources, such as water conservation and drought contingency plans; documentation of growth in water demand, location, and purpose of water withdrawn or diverted; and annual consumption estimates. Other data sources include the Division’s own and U.S. Geological Survey (USGS) groundwater data, USGS streamflow data, and existing water use permits. In some instances, the Environmental Protection Division may also use water withdrawal and water quality data collected by the U.S. Army Corps of Engineers if an applicant is downstream of federally-regulated waters.
In addition to permitting water use, the Division is also responsible for issuing and enforcing all state permits involving water quality impacts. It is authorized by the Environmental Protection Agency to issue National Pollutant Discharge Elimination System permits that address discharge limits and reporting requirements. According to Division officials, the Division has never denied a water withdrawal permit to a power plant developer on the basis of insufficient water, which they attributed partly to the fact that the staff meets with applicants numerous times before they submit the application to identify and mitigate concerns about water availability. Moreover, they told us that thermoelectric power plant developers have submitted few applications for water withdrawal permits. For example, as shown in table 7, between January 1, 2004, and December 31, 2008, the Division received only 6 water withdrawal applications from thermoelectric power plant developers; of these, it approved 5. An official from the Public Service Commission was unaware of any regulated power plant developers proposing the use of advanced cooling technologies, such as dry cooling or hybrid cooling, over this time period. Georgia Environmental Protection Division officials told us they do not advocate or refuse the use of particular cooling technologies. However, officials said they do not expect to receive applications for once-through cooling plants because federal environmental regulations make the permitting process difficult.

Data users identified the following limitations with federal water data, along with their causes and effects:

Advanced cooling technologies (EIA): EIA forms are not designed to collect information on advanced cooling technologies. As a result, understanding of trends in the adoption of advanced cooling technologies cannot be systematically determined using only EIA data.

Cooling system codes (EIA): Codes used to classify plant cooling systems may be incomplete, lack explanation, overlap, or contain errors. Cooling system codes are not defined in detail, and plants may be uncertain about what cooling system code to use. Inconsistent use of cooling tower codes could potentially make EIA data less valuable and lead to inaccurate or inconsistent data and analysis.

Nuclear water data (EIA): Water use data (withdrawal, consumption, and discharge) and cooling information were discontinued for nuclear plants in 2002. EIA discontinued reporting nuclear water use data and cooling system information due to priorities stemming from budget limitations. Data users must use noncurrent data or seek out an alternate source. If this limitation persists, water data will not be available for any new nuclear plants constructed.

Alternative water sources (EIA and USGS): It is not possible to comprehensively identify power plants using alternative water sources. EIA forms are not designed to collect information on alternative water sources, and, according to USGS, budget constraints have limited the amount of water use information the agency can provide. As a result, understanding of trends in power plant adoption of alternative water sources is limited.

Frequency (EIA and USGS): EIA reports data on annual water use, rather than data on water use over shorter time periods, such as monthly, and USGS reports 5-year data. EIA’s form 767, used to collect cooling system and water data, was developed and revised in the 1980s, and EIA officials we spoke with were not aware of why an annual time period was originally chosen; according to USGS, budget constraints have limited the amount of water use information the agency can provide. Seasonal trends in water use by power plants are not evident from annual EIA or 5-year USGS data.

Quality (EIA and USGS): Reporting of some EIA data elements may be inaccurate or inconsistent. USGS data are compiled from many different data sources, and the accuracy and methodology of these sources may vary; furthermore, USGS state offices have different methods for developing water use estimates, potentially contributing to data inconsistency. Respondents may use different methods to measure or estimate data, and instructions may be limited or unclear. Respondents may also make mistakes or have nontechnical staff fill out surveys, since EIA’s form for collecting these data does not require technical staff to complete the survey. According to USGS, budget constraints in its water use program kept the agency from implementing improvements it would like to make to its quality control of water use data. Inaccurate and inconsistent data are more challenging to analyze and less relevant for policymakers, water experts, and the public seeking to understand water use patterns.

Consumption (USGS): USGS discontinued reporting of thermoelectric power plant and other water consumption data because, according to USGS, budget constraints have caused the agency to make cuts in data reporting. Understanding of trends in power plant water consumption compared to other industries is limited, and analysis comparing thermoelectric power plant withdrawals to consumption is more complicated.

Hydrologic code (USGS): USGS discontinued reporting thermoelectric power plant and other water use by hydrologic code; it now only reports data by county. According to USGS, budget constraints have caused the agency to make cuts in data reporting. According to some data users, not having data by hydrologic code complicates water analysis, which is often performed by watershed rather than county.

Timeliness (USGS): Data are reported many years late; for example, data on 2005 water use have not yet been made available to the public. According to USGS, budget constraints have led to limited staff availability for water use data collection and analysis, resulting in reporting delays. As a result, data are outdated and may be less relevant for analysis.
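The frequency limitation noted above (annual EIA data, 5-year USGS data) can be illustrated with a toy calculation: annual averages mask the summer peaks that matter most during low-flow periods. The monthly figures below are invented for illustration, not actual EIA or USGS data.

```python
# Illustrative only: monthly cooling-water withdrawals (million gallons per
# day, MGD) for a hypothetical plant whose demand peaks in summer.
monthly_mgd = [300, 300, 320, 350, 420, 520, 600, 590, 480, 380, 320, 300]

# An annual report collapses all twelve months into one figure.
annual_avg = sum(monthly_mgd) / len(monthly_mgd)
peak = max(monthly_mgd)

print(f"annual average: {annual_avg:.0f} MGD")
print(f"summer peak:    {peak} MGD ({peak / annual_avg:.2f}x the annual figure)")
```

Here the summer peak is nearly 1.5 times the annual average, a difference that is invisible in an annual total.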
In addition to the individuals named above, Jon Ludwigson (Assistant Director), Scott Clayton, Philip Farah, Paige Gilbreath, Randy Jones, Alison O’Neill, Timothy Persons, Kim Raheb, Barbara Timmerman, Walter Vance, and Jimi Yerokun made key contributions to this report.
In 2000, thermoelectric power plants accounted for 39 percent of total U.S. freshwater withdrawals. Traditionally, power plants have withdrawn water from rivers and other water sources to cool the steam used to produce electricity, so that it may be reused to produce more electricity. Some of this water is consumed, and some is discharged back to a water source. In the context of growing demands for both water and electricity, this report discusses (1) approaches to reduce freshwater use by power plants and their drawbacks, (2) states' consideration of water use when reviewing proposals to build power plants, and (3) the usefulness of federal water data to experts and state regulators. GAO reviewed federal water data and studies on cooling technologies. GAO interviewed federal officials, as well as officials from seven selected states. Advanced cooling technologies that rely on air to cool part or all of the steam used in generating electricity and alternative water sources such as treated effluent can reduce freshwater use by thermoelectric power plants. Use of such approaches may lead to environmental benefits from reduced freshwater use, as well as increase developer flexibility in locating a plant. However, these approaches also present certain drawbacks. For example, the use of advanced cooling technologies may result in energy production penalties and higher costs. Similarly, the use of alternative water sources may result in adverse effects on cooling equipment or regulatory compliance issues. Power plant developers must weigh these drawbacks with the benefits of reduced freshwater use when determining which approaches to pursue. Consideration of water use by proposed power plants varies in the states GAO contacted, but the extent of state oversight is influenced by state water laws, related state regulatory policies, and additional layers of state regulatory review. 
For example, California and Arizona, states that have historically faced constrained water supplies, have taken formal steps aimed at minimizing freshwater use at power plants. In contrast, officials in five other states GAO contacted said that their states had not developed official policies regarding water use by power plants and, in some cases, did not require a state permit for water use by new power plants. Federal agencies collect national data on water availability and water use; however, state water agencies rely more on federal water availability data than on federal water use data when evaluating power plants' proposals to use freshwater. Water availability data are collected by the U.S. Geological Survey (USGS) through stream flow gauges, groundwater studies, and monitoring stations. In contrast, federal data on water use are primarily used by experts, federal agencies, and others to identify industry trends. However, these data users identified limitations with the federal water use data that make them less useful for conducting trend analyses and tracking industry changes. For example, the Department of Energy's (DOE) Energy Information Administration (EIA) does not systematically collect information on the use of advanced cooling technologies, and other data it collects are incomplete. Similarly, USGS discontinued distribution of data on water consumption by power plants and now only provides information on water withdrawals. Finally, neither EIA nor USGS collects data on power plant developers' use of alternative water sources, which some experts believe is a growing trend in the industry. Because federal data sources are a primary source of national data on water use by various sectors, data users told GAO that without improvements to these data, it becomes more difficult to conduct comprehensive analyses of industry trends, limiting understanding of changes in the industry.
Scientific research and projections of the changes taking place in the Arctic vary, but there is a general consensus that Arctic sea ice is diminishing, and some scientists have projected that the Arctic will be ice-diminished for periods of time in the summer by as soon as 2040. As recently as September 2011, scientists at the U.S. National Snow and Ice Data Center reported that the annual Arctic minimum sea ice extent for 2011 was the second lowest in the satellite record, and 938,000 square miles less than the 1979 to 2000 average annual minimum. These environmental changes in the Arctic are making maritime transit more feasible and are increasing the likelihood of human activity in the region, including tourism, oil and gas extraction, commercial shipping, and fishing. However, certain environmental characteristics still provide challenges to surface navigation in the Arctic, including large amounts of winter ice and increased movement of ice from spring to fall. Increased movement of sea ice makes its location less predictable, which is likely to increase the risk for ships to become trapped or damaged by ice impacts.

As we reported in September 2010, the Coast Guard faces challenges to Arctic operations including limited maritime domain awareness, assets, and infrastructure. In a 2008 report to Congress, the Coast Guard stated that maritime domain awareness in the Arctic is critical to effective engagement in the Arctic as activity increases. However, several factors—including (1) inadequate Arctic Ocean and weather data, (2) lack of communication infrastructure, (3) limited intelligence information, and (4) lack of a physical presence in the Arctic—create challenges for the Coast Guard in achieving maritime domain awareness in the Arctic. The Coast Guard also faces limitations in assets and infrastructure in the Arctic.
These include (1) an inadequate portfolio of small boats for Arctic operations, (2) the environmental impact of Arctic conditions on helicopters and airplanes, and (3) a lack of cutter resources for Arctic patrols. The Coast Guard has taken a variety of actions to identify its Arctic requirements. As we reported in September 2010, these encompass a range of efforts including both routine mission operations and other actions specifically intended to help identify Arctic requirements. Through routine mission operations, the Coast Guard has been able to collect useful information on the capability of its existing assets to operate in cold climates, strategies for overcoming logistical challenges presented by long-distance responses to incidents, and the resources needed to respond to an oil spill in a remote and cold location, among other things. We also reported that the Coast Guard had efforts underway specifically designed to inform its Arctic requirements, including the establishment of seasonal, temporary operating locations in the Arctic and biweekly Arctic overflights. The temporary operating locations were established during the summers of 2008 through 2010, and have helped the Coast Guard identify performance requirements and obstacles associated with the deployment of small boats, aircraft, and support staff above the Arctic Circle. The seasonal (March-November) biweekly Arctic overflights were initiated in October 2007 to increase the agency’s maritime domain awareness, test personnel and equipment capabilities in the Arctic, and inform the Coast Guard’s Arctic requirements, among other things. As we reported in September 2010, these efforts addressed elements of three key practices for agencies to better define mission requirements and desired outcomes: (1) assessing the environment; (2) involving stakeholders; and (3) aligning activities, core processes, and resources. 
The Coast Guard’s primary analytical effort to identify and report on Arctic requirements, the High Latitude Study (the Study), identifies the Coast Guard’s responsibilities in the Polar regions, discusses the nature of the activities it must perform over the next 30 years, and concludes with a high-level summary of the Coast Guard’s material and nonmaterial needs to meet the requirements. Specifically, the Study identifies the Coast Guard’s current capability gaps in the Arctic and assesses the degree to which these gaps will impact future missions. Of the Coast Guard’s 11 mission areas, 9 are expected to experience future demand in the Arctic region. The Study identifies several current capability gaps that affect the majority of these mission areas. Specifically, gaps in communications capabilities affect all 9 mission areas, while deficiencies in the information available about sea ice coverage in the Arctic affect 8 mission areas. The other major gaps that affect the majority of mission areas are related to the lack of polar icebreaking capacity, which will be discussed later in this statement. Of the 9 mission areas that the Coast Guard will need to carry out in the Arctic, the Study identifies 7 mission areas expected to be significantly or moderately impacted by current capability gaps. In general, these missions all address the protection of important national interests in the Arctic or the safety of mariners and the environment. See appendix II for more detail about the degree of impact that current capability and capacity gaps are expected to have on future Coast Guard mission performance. The Study then identifies potential solutions to specifically address gaps in communications and electronic navigation capabilities, recommending that the Coast Guard acquire more than 25 additional communication or navigation facilities for Arctic operations.
In addition to these capabilities, the Study compares six different options—identified as Arctic force mixes—to a baseline representing the Coast Guard’s current Arctic assets. These force mixes add assets to the existing baseline force mix, and contain different combinations of cutters (including icebreakers), aircraft, and forward operating locations and are designed to mitigate the mission impacts caused by current capability gaps. See appendix III for a description of the assets included in each Arctic force mix. The High Latitude Study also includes a risk analysis that compares the six Arctic force mixes in terms of the ability of each force mix to reduce the risk that is expected to exist in the future Arctic environment. Risk reduction is determined in part by (1) identifying a list of potential Arctic maritime incidents requiring Coast Guard support, such as maritime accidents resulting in multiple casualties or a major oil spill, or both; (2) quantifying the likelihood that these search and rescue and maritime environmental protection incidents could occur and the resulting impact should they occur; and (3) assessing the relative effectiveness, or risk reduction, of force packages the Coast Guard may employ to respond to those incidents. The intent of the analysis is to provide information on risk-reduction alternatives to inform the acquisition process. According to the Study, the baseline Arctic force mix reduces less than 1 percent of risk in the Arctic because this patrol capability cannot reasonably respond to northern area incidents, while the six other Arctic force mixes reduce between 25 and 92 percent of risk annually, though the amount of risk reduced varies by season. See appendix III for the amount of annual risk in the Arctic reduced by each force mix. As we reported in September 2010, administration budget projections indicated that DHS’s annual budget was expected to remain constant or decrease over the next 10 years. 
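The Study’s three-step risk analysis described above, which quantifies incident likelihood and impact and then applies a force mix’s mitigation effectiveness, can be sketched as a simple expected-risk calculation. All likelihoods, impact scores, and effectiveness values below are invented placeholders, not figures from the High Latitude Study.

```python
# Hypothetical Arctic incidents: (annual likelihood, impact score).
incidents = {
    "mass-casualty SAR case": (0.10, 90),
    "major oil spill":        (0.05, 100),
    "vessel beset in ice":    (0.30, 40),
}

# Hypothetical effectiveness of one force mix per incident type
# (fraction of that incident's risk the force mix is assessed to eliminate).
force_mix_effect = {
    "mass-casualty SAR case": 0.8,
    "major oil spill":        0.6,
    "vessel beset in ice":    0.9,
}

# Baseline risk: expected impact with no added assets.
baseline_risk = sum(p * impact for p, impact in incidents.values())

# Residual risk: expected impact after the force mix mitigates each incident.
residual_risk = sum(
    p * impact * (1 - force_mix_effect[name])
    for name, (p, impact) in incidents.items()
)

reduction_pct = 100 * (baseline_risk - residual_risk) / baseline_risk
print(f"risk reduced: {reduction_pct:.0f}%")
```

With these placeholder values, the force mix reduces about 81 percent of baseline risk, illustrating how the Study can report a single risk-reduction percentage for each force mix.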
Moreover, senior Coast Guard officials, based in Alaska, reported that resources for Arctic operations had already been reduced and were inadequate to meet existing mission requirements in Alaska, let alone expanded Arctic operations. These officials also reported a more than 50 percent year-to-year reduction between 2005 and 2009 in the number of large cutters available for operations in their region. Officials also expressed concern that the replacement of the 12 older high-endurance cutters with 8 new cutters may exacerbate this challenge. Given the reductions that have already taken place, as well as the anticipated decrease in DHS’s annual budget, the long-term budget outlook for Coast Guard Arctic operations is uncertain. The challenge of addressing Arctic resource requirements in a flat or declining budget environment is further underscored by recent budget requests that have identified the Coast Guard’s top priority as the recapitalization of cutters, aircraft, communications, and infrastructure— particularly with regard to its Deepwater program. Recent budget requests also have not included funding for Arctic priorities, aside from the annual operating costs associated with existing icebreakers. This budget challenge is exacerbated when the costs of the High Latitude Study’s proposed resource requirements are taken into account. Specifically, the Study estimates that the cost of acquiring the assets associated with each of the six Arctic force mixes would range from $1.01 billion to $6.08 billion, and their corresponding annual operating costs would range from $72.3 million to $411.3 million. See appendix III for the estimated acquisition cost of each Arctic force mix. Additionally, the estimated cost for the recommended communications and electronic navigation capabilities for Arctic operations is about $23.4 million. 
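One way to compare the Study’s force-mix cost figures is to combine acquisition and annual operating costs over a planning horizon. The sketch below uses the low and high endpoints reported above, with an assumed 30-year horizon and no discounting; both simplifications are ours, not the Study’s.

```python
# 30-year horizon and zero discounting are simplifying assumptions
# for illustration; the Study itself reports the endpoints separately.
HORIZON_YEARS = 30

def total_cost(acquisition_b, annual_ops_m, years=HORIZON_YEARS):
    """Rough undiscounted lifecycle cost in billions of dollars."""
    return acquisition_b + annual_ops_m * years / 1000.0

low = total_cost(1.01, 72.3)    # least expensive force mix endpoints
high = total_cost(6.08, 411.3)  # most expensive force mix endpoints

print(f"low:  ${low:.2f} billion over {HORIZON_YEARS} years")
print(f"high: ${high:.2f} billion over {HORIZON_YEARS} years")
```

On these assumptions, even the least expensive force mix roughly triples its acquisition price once three decades of operating costs are included.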
Given current budget uncertainty and the Coast Guard’s recent acquisition priorities, it may be a significant challenge for the Coast Guard to acquire the assets that the Study recommends.

The most significant issue facing the Coast Guard’s icebreaker fleet is the growing obsolescence of these vessels and the resulting capability gap caused by their increasingly limited operations. As we noted in our 2010 report, Coast Guard officials reported challenges fulfilling the agency’s statutory icebreaking mission, let alone its standing commitment to use the icebreakers to support the Navy as needed. Since then, at least three reports have further identified the Coast Guard’s challenges to meeting its current and future icebreaking mission requirements in the Arctic with its existing polar icebreaker fleet, as well as the challenges it faces to acquire new icebreakers. The Coast Guard’s existing fleet includes three icebreakers that are capable of operating in the Arctic:

Polar Sea (inoperative since 2010): The Polar Sea is a heavy icebreaker commissioned in 1978 with an expected 30-year lifespan. A major service life extension project, completed in 2006, was expected to extend the Polar Sea’s service life through 2014. However, in 2010, the Polar Sea experienced major engine problems and is now expected to be decommissioned in 2011. According to a Coast Guard budget official, this will allow its resources to be redirected toward the ongoing service life extension of the Polar Star. Fig. 2 below shows the Polar Sea in dry dock.

Polar Star (inoperative since 2006): The Polar Star is a heavy icebreaker commissioned in 1976 with an expected 30-year lifespan. The Polar Star is currently undergoing a $62.8 million service life extension and is expected to return to service in 2013. The ongoing service life extension is expected to extend the Polar Star’s service life through at least 2020.
Healy (operative): The Healy is a medium icebreaker, commissioned in 2000, with an expected 30-year lifespan. The Healy is less capable than the heavy icebreakers and is primarily used for scientific missions in the Arctic. As a medium icebreaker, the Healy does not have the same icebreaking capabilities as the Polar Sea and Polar Star. Because of this, it cannot operate independently in the ice conditions in the Antarctic or ensure timely access to some Arctic areas in the winter.

In October 2011, a report on recapitalizing the Coast Guard’s polar icebreakers (ABS Consulting, U.S. Polar Icebreaker Recapitalization: A Comprehensive Analysis and Its Impacts on U.S. Coast Guard Activities, prepared for the United States Coast Guard, October 2011; hereafter, the Recapitalization report) assessed options for recapitalizing the existing icebreaker fleet, including building new icebreakers or reconstructing the Polar Sea and Polar Star to meet mission requirements, among other options. This report found that the most cost-effective option would be to build two new heavy icebreakers, while performing minimal maintenance to keep the existing icebreakers operational while construction is taking place. In addition to having the lowest acquisition cost of any option—at $2.12 billion—this option also has the lowest risk because of the complexity (and therefore risk) associated with the other options of performing major service life extensions or reconstructing the Polar Sea and Polar Star. The risk associated with these options is driven by high levels of uncertainty in terms of cost, scheduling, and technical feasibility for reconstructing the existing fleet. Given the time frames associated with building new icebreakers, the Recapitalization report concluded that the Coast Guard must begin planning and budgeting immediately.
The Study does provide cost estimates for acquiring the recommended icebreakers, but it does not directly assess the feasibility of its recommendations. As mentioned above, the Coast Guard faces budget uncertainty, and it may be a significant challenge for the Coast Guard to obtain Arctic capabilities, including icebreakers. Given our analysis of the challenges that the Coast Guard already faces in funding its existing acquisition programs, it is unlikely that the agency’s budget could accommodate the level of additional funding (estimated by the High Latitude Study to range from $4.14 billion to $6.9 billion) needed to acquire new icebreakers or reconstruct existing ones. The Recapitalization report similarly concludes that the recapitalization of the polar icebreaker fleet cannot be funded within the existing or projected Coast Guard budget. All three reports reviewed alternative financing options, including the potential for leasing icebreakers or funding icebreakers through the NSF or DOD. The Recapitalization report noted that a funding approach similar to the approach used for the Healy, which was funded through the fiscal year 1990 DOD appropriations, should be considered. However, the Coast Guard has a more immediate need than DOD to acquire Arctic capabilities, including icebreakers, making it unlikely that a similar funding approach would be feasible at this time. For more details on Coast Guard funding challenges and options specific to icebreakers, see appendix IV.

The Coast Guard continues to coordinate with various stakeholders on Arctic operations and policy, including foreign, state, and local governments, Alaskan Native governments and interest groups, and the private sector. In September 2010, we reported that the Coast Guard has been actively involved in both bilateral and multilateral coordination efforts such as the Arctic Council.
The Coast Guard also coordinates with state, local, and Alaskan Native governments and interest groups; however, some of these stakeholders reported that they lack information on both the Coast Guard’s ongoing planning efforts and future approach in the Arctic. In response to these concerns, in 2010 we recommended that the Commandant of the Coast Guard ensure that the agency communicates with these stakeholders on the process and progress of its Arctic planning efforts. The Coast Guard agreed with our recommendation and is in the process of taking corrective action. For example, in April 2011, the Coast Guard issued a Commandant Instruction that emphasizes the need to enhance partnerships with Arctic stakeholders. Additionally, in August 2011, the Commandant participated in a field hearing in Alaska which included discussion about the Coast Guard’s Arctic capability requirements. The Coast Guard also coordinates with federal agencies, such as the NSF, National Oceanic and Atmospheric Administration (NOAA), and DOD, and is involved with several interagency coordination efforts that address aspects of key practices we have previously identified to help enhance and sustain collaboration among federal agencies. For example, as discussed above, the Coast Guard collaborates with the NSF to manage the nation’s icebreaker fleet, including scheduling icebreaker time for research activities, while NOAA provides the Coast Guard with weather forecasts and warnings, as well as information about ice concentration and type. Additionally, the Coast Guard is involved with interagency efforts such as the Interagency Policy Committee on the Arctic, created in March 2010 to coordinate governmentwide implementation of National Security Presidential Directive 66 / Homeland Security Presidential Directive 25. Since our September 2010 report, the Coast Guard has partnered with DOD on another interagency coordination effort, the Capabilities Assessment Working Group. 
DHS and DOD established the working group in May 2011 to identify shared Arctic capability gaps as well as opportunities and approaches to overcome them, to include making recommendations for near-term investments. DHS assigned the Coast Guard lead responsibility for the working group, which was directed to focus on four primary capability areas when identifying potential collaborative efforts to enhance Arctic capabilities, including near-term investments. Those capability areas include maritime domain awareness, communications, infrastructure, and presence. The working group was also directed to identify overlaps and redundancies in established and emerging DOD and DHS Arctic requirements. This working group will address several of the key practices we have identified—articulating a common outcome; identifying and addressing needs by leveraging resources; and reinforcing agency accountability for the effort through a jointly developed report containing near-term investment recommendations. The establishment of the working group helps to ensure that collaboration between the Coast Guard and DOD is taking place to address near-term capabilities in support of current planning and operations; however, upon the completion of the report in January 2012, the working group is expected to be dissolved. GAO is also conducting an ongoing review of DOD’s May 2011 Report to Congress on Arctic Operations and the Northwest Passage that was directed by the House Committee on Armed Services and will report on our results in January of next year. That report will assess the extent to which DOD’s Arctic Report addressed congressional requirements and DOD’s efforts to identify and prioritize the capabilities needed to meet national security objectives in the Arctic, including through collaboration with the Coast Guard. Chairman LoBiondo, Ranking Member Larsen, and Members of the Subcommittee, this completes my prepared statement. 
I would be happy to respond to any questions you may have at this time. For information about this statement, please contact Stephen L. Caldwell, Director, Homeland Security and Justice, at (202) 512-9610, or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this testimony include Dawn Hoff (Assistant Director), Elizabeth Kowalewski (Analyst-in-Charge), Christopher Currie, Katherine Davis, Geoffrey Hamilton, Adam Hoffman, John Pendleton, Timothy Persons, Steven Putansu, Jodie Sandel, David Schmitt, Amie Steele, Esther Toledo, and Suzanne Wren.

This appendix provides a map of the Arctic boundary, as defined by the Arctic Research and Policy Act. As discussed in the report, the Coast Guard currently has limited capacity to operate in the waters immediately below the Arctic Circle, such as the Bering Sea. Increasing responsibilities in an even larger geographic area, especially in the harsh and remote conditions of the northern Arctic, will further stretch the agency’s capacity.

This appendix provides information on the degree to which the Coast Guard’s existing capability gaps in the Arctic are expected to impact future mission performance. Of the Coast Guard’s 11 mission areas, 9 are expected to experience future demand in the Arctic, and the degree to which existing capability gaps are expected to impact these missions has been classified as Significant, Moderate, or Low. Examples of how these gaps are expected to impact each mission are also included below.

This appendix provides information on potential solutions to the Coast Guard’s existing capability gaps in the Arctic. The High Latitude Study compares six Arctic force mixes in terms of the ability of each force mix to reduce the risk that is expected to exist in the future Arctic environment.
The force mixes add assets to the baseline force mix (which represents the Coast Guard’s current Arctic assets) and include different combinations of cutters (including icebreakers), aircraft, and forward operating locations. The specific asset combinations for each force mix are described below. The estimated acquisition cost for each Arctic force mix and the percent of risk the force mix is expected to reduce in the Arctic are also shown below.

This appendix provides an overview of the funding challenges the Coast Guard faces related to icebreakers. These include limitations in the Coast Guard’s existing and projected budget, as well as alternative financing options. The Coast Guard faces overall budget uncertainty, and it may be a significant challenge for the Coast Guard to obtain Arctic-capable resources, including icebreakers. For more than 10 years, we have noted Coast Guard difficulties in funding major acquisitions, particularly when acquiring multiple assets at the same time. For example, in our 1998 report on the Deepwater program, we noted that the agency could face major obstacles in proceeding with that program because it would consume virtually all of the Coast Guard’s projected capital spending. In our 2008 testimony on the Coast Guard budget, we again noted that affordability of the Deepwater acquisitions would continue to be a major challenge to the Coast Guard given the other demands upon the agency for both capital and operations spending. In our 2010 testimony on the Coast Guard budget, we noted that maintaining the Deepwater acquisition program was the Coast Guard’s top budget priority, but would come at a cost to operational capabilities. This situation of the Deepwater program crowding out other demands continued, and in our July 2011 report we noted that the Deepwater program of record was not achievable given projected Coast Guard budgets.
Given the challenges that the Coast Guard already faces in funding its Deepwater acquisition program, it is unlikely that the agency’s budget could accommodate the level of additional funding (estimated by the High Latitude Study to range from $4.14 billion to $6.9 billion) needed to acquire new icebreakers or reconstruct existing ones. The U.S. Polar Icebreaker Recapitalization Report contains an analysis of the Coast Guard’s budget, which also concludes that the recapitalization of the polar icebreaker fleet cannot be funded within the existing or projected Coast Guard budget. This analysis examined the impact that financing a new polar icebreaker would have on Coast Guard operations and maintenance activities, among others. The report found that given the Coast Guard’s current and projected budgets, as well as its mandatory budget line items, there are insufficient funds in any one year to fully fund one new polar icebreaker. Additionally, though major acquisitions are usually funded over several years, the incremental funding obtained from reducing or delaying existing acquisition projects would have a significant adverse impact on all Coast Guard activities. This means that it is unlikely that the Coast Guard will be able to expand the U.S. icebreaker fleet to meet its statutory requirements as identified by the High Latitude Study. As we reported in 2010, the Commandant of the Coast Guard has recognized these budgetary challenges, noting that the Coast Guard would need to prioritize resource allocations while accepting risk in areas where resources would be lacking. Given that it takes 8 to 10 years to build an icebreaker and the Coast Guard has not yet begun the formal acquisition process, the Coast Guard has already accepted some level of risk that its statutory mission requirements related to icebreakers will continue to go unmet.
The three reports discussed earlier in this statement all identify funding as a central issue in addressing the existing and anticipated challenges related to icebreakers. In addition to the Coast Guard budget analysis included in the Recapitalization report, all three reports reviewed alternative financing options, including the potential for leasing icebreakers or funding icebreakers through the National Science Foundation (NSF) or the Department of Defense (DOD). Although DOD has used leases and charters in the past when procurement funding levels were insufficient to address mission requirements and capabilities, both the Recapitalization report and the High Latitude Study determined that the lack of existing domestic commercial vessels capable of meeting the Coast Guard’s mission requirements reduces the availability of leasing options for the Coast Guard. Additionally, an initial cost-benefit analysis of one type of available leasing option, included in the Recapitalization report and the High Latitude Study, suggests that leasing may ultimately be more costly to the Coast Guard over the 30-year icebreaker lifespan. Another alternative option addressed by the Recapitalization report would be to fund new icebreakers through the NSF. However, the analysis of this option concluded that funding a new icebreaker through the existing NSF budget would have significant adverse impacts on NSF operations and that the capability needed for Coast Guard requirements would exceed that needed by the NSF.

Coast Guard: Action Needed As Approved Deepwater Program Remains Unachievable. GAO-11-743. Washington, D.C.: July 28, 2011.

Coast Guard: Efforts to Identify Arctic Requirements Are Ongoing, but More Communication about Agency Planning Efforts Would Be Beneficial. GAO-10-870. Washington, D.C.: September 15, 2010.

Coast Guard: Observations on the Requested Fiscal Year 2011 Budget, Past Performance, and Current Challenges. GAO-10-411T. Washington, D.C.: February 25, 2010.
Coast Guard: Observations on the Fiscal Year 2010 Budget and Related Performance and Management Challenges. GAO-09-810T. Washington, D.C.: July 7, 2009.

Homeland Security: Enhanced National Guard Readiness for Civil Support Missions May Depend on DOD’s Implementation of the 2008 National Defense Authorization Act. GAO-08-311. Washington, D.C.: April 16, 2008.

Coast Guard: Observations on the Fiscal Year 2009 Budget, Recent Performance, and Related Challenges. GAO-08-494T. Washington, D.C.: March 6, 2008.

Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.

Coast Guard Acquisition Management: Deepwater Project’s Justification and Affordability Need to Be Addressed More Thoroughly. GAO/RCED-99-6. Washington, D.C.: October 26, 1998.

Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The gradual retreat of polar sea ice, combined with an expected increase in human activity—shipping traffic, oil and gas exploration, and tourism in the Arctic region—has increased the strategic interest that the United States and other nations have in the Arctic. As a result, the U.S. Coast Guard, within the Department of Homeland Security (DHS), has responsibilities in the Arctic, which are expected to increase. This testimony provides an update on (1) the extent to which the Coast Guard has taken actions to identify requirements for future Arctic operations; (2) issues related to the U.S. icebreaking fleet; and (3) the extent to which the Coast Guard is coordinating with stakeholders on Arctic issues. This statement is based on GAO-10-870, issued in September 2010, and includes selected updates. For the selected updates, GAO analyzed Coast Guard, Department of Defense (DOD), and other related documents on Arctic operations and capabilities. GAO also interviewed Coast Guard and DOD officials about efforts to identify Arctic requirements and coordinate with stakeholders. The Coast Guard has taken a variety of actions—from routine operations to a major analysis of mission needs in the polar regions—to identify its Arctic requirements. The routine operations have helped the Coast Guard to collect useful information on the capability of its existing assets to operate in cold climates and strategies for overcoming logistical challenges presented by long-distance responses to incidents, among other things. Other operational actions intended to help identify Arctic requirements include the establishment of temporary, seasonal operating locations in the Arctic and seasonal biweekly Arctic overflights, which have helped the Coast Guard to identify performance requirements and test personnel and equipment capabilities in the Arctic.
The Coast Guard's primary analytical effort to identify Arctic requirements is the High Latitude Study, a multivolume analysis that is intended to, in part, identify the Coast Guard's current Arctic capability gaps and assess the degree to which these gaps will impact future missions. This study also identifies potential solutions to these gaps and compares six different options—identified as Arctic force mixes—to a baseline representing the Coast Guard's current Arctic assets. However, given current budget uncertainty and the Coast Guard's recent acquisition priorities, it may be a significant challenge for the agency to acquire the assets that the High Latitude Study recommends. The most significant issue facing the Coast Guard's icebreaker fleet is the growing obsolescence of these vessels and the resulting capability gap caused by their increasingly limited operations. In 2010, Coast Guard officials reported challenges fulfilling the agency's statutory icebreaking mission. Since then, at least three reports—by the DHS Inspector General and Coast Guard contractors—have further identified the Coast Guard's challenges to meeting its current and future icebreaking mission requirements in the Arctic with its existing polar icebreaker fleet. Prior GAO work and these reports also identify budgetary challenges the agency faces in acquiring new icebreakers. Given these issues and the current budgetary climate, it is unlikely that the Coast Guard will be able to fund the acquisition of new icebreakers through its own budget or through alternative financing options. Thus, it is unlikely that the Coast Guard will be able to expand the U.S. icebreaker fleet to meet its statutory requirements, and it may be a significant challenge for it even to maintain its existing level of icebreaking capabilities due to its aging fleet.
In 2010, GAO reported that the Coast Guard coordinates with various stakeholders on Arctic operations and policy, including foreign, state, and local governments, Alaskan Native governments and interest groups, and the private sector. GAO also reported that the Coast Guard coordinates with federal agencies, such as the National Science Foundation, National Oceanic and Atmospheric Administration, and DOD. More recently, the Coast Guard has partnered with DOD through the Capabilities Assessment Working Group—an interagency coordination group established in May 2011—to identify shared Arctic capability gaps as well as opportunities and approaches to overcome them, to include making recommendations for near-term investments. The establishment of this group helps to ensure that collaboration between the Coast Guard and DOD addresses near-term capabilities in support of current planning and operations. GAO is not making new recommendations in this statement. GAO previously recommended that the Coast Guard communicate with key stakeholders on the process and progress of its Arctic planning efforts. DHS concurred with this recommendation and is in the process of taking corrective action.
The GPD program is one of nine VA programs that specialize in serving homeless veterans. Six of these programs fall under the responsibility of the Veterans Health Administration, which obligated about $224 million in fiscal year 2006 for these programs as well as $1.2 billion for outreach and treatment of homeless veterans. Outreach is considered particularly important to locate and serve veterans living on the street and in temporary shelters who otherwise would not seek assistance. Treatment involves primary and specialty medical care, mental health care, and alcohol and drug abuse services for eligible homeless veterans. Three of the nine programs are run jointly or solely by the Veterans Benefits Administration, which also serves homeless veterans as part of its broader mission to provide disability compensation and pensions to eligible veterans. Figure 1 illustrates some of the key programs and services for homeless veterans—including the GPD program that is the focus of this report—provided by VA. (App. II provides a general description of the eight programs not otherwise covered in this report.) The GPD program—VA’s major transitional housing program for homeless veterans—spent about $67 million in fiscal year 2005. It became VA’s largest program for homeless veterans after fiscal year 2002, when VA began to increase GPD program capacity and phase out national funding for the more costly contracted residential treatment—another of VA’s transitional housing programs. To operate the GPD program at the local level, nonprofit and public agencies compete for grants. The program provides two basic types of grants—capital grants to pay for the buildings that house homeless veterans and per diem grants for the day-to-day operational expenses. Capital grants cover up to 65 percent of housing acquisition, construction, or renovation costs and require that agencies receiving the grants cover the remaining costs through other funding sources.
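The capital grant cost-sharing rule described above is simple arithmetic. The sketch below is a hypothetical Python illustration (the function and variable names are invented, not part of any GPD program materials) of how a project's cost divides between the 65 percent federal share and the grantee's matching share.

```python
def capital_grant_split(project_cost, grant_share=0.65):
    """Split a project's cost under the GPD capital grant rule.

    Capital grants cover up to 65 percent of housing acquisition,
    construction, or renovation costs; the agency receiving the grant
    must cover the remainder through other funding sources.
    """
    va_share = round(project_cost * grant_share, 2)
    grantee_share = round(project_cost - va_share, 2)
    return va_share, grantee_share

# A $1 million renovation at the maximum 65 percent grant share:
print(capital_grant_split(1_000_000))  # (650000.0, 350000.0)
```

For a grant awarded below the 65 percent ceiling, a smaller `grant_share` simply shifts more of the cost to the grantee.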
Generally, agencies that have received capital grants are considered for subsequent per diem grants, so that the VA investment can be realized and the buildings can provide operational beds. Per diem grants support the operations of about 300 GPD providers nationwide. The per diem grants pay a fixed dollar amount for each day an authorized bed is occupied by an eligible veteran up to the maximum number of beds allowed by the grant. Generally under this grant, VA does not pay for empty beds. VA makes payments after an agency has housed the veteran, on a cost reimbursement basis, and the agency may use the payments to offset operating costs, such as staff salaries and utilities. By law, the per diem reimbursement cannot exceed a fixed rate, which was $29.31 per person per day in 2006. Reimbursement may be lower for providers receiving funds for the same purpose from other sources. On a limited basis, special needs grants are available to cover the additional costs of serving women, frail elderly, terminally ill, or chronically mentally ill veterans. Although the primary focus of the GPD program is housing, grants may also be used for transport or to operate daytime service centers that do not provide overnight accommodations. According to VA, in fiscal year 2005, GPD grants supported about 75 vans that were used to conduct outreach and transport homeless veterans to medical and other appointments. Also, 23 service centers were operating with GPD support. Most GPD providers have 50 or fewer beds available for homeless veterans, with the majority of providers having 25 or fewer. Accommodations vary and may range from rooms in multistory buildings in the inner city to rooms in detached homes in suburban residential neighborhoods. Veterans may sleep in barracks-style bunk beds in a room shared by several other participants or may have their own rooms. Figure 2 shows the exteriors and interiors of selected GPD buildings we visited. 
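The per diem rule above (a fixed daily payment per occupied bed, capped by law at $29.31 per person per day in 2006, and potentially reduced when a provider receives funds for the same purpose from other sources) can be sketched as follows. This is an illustrative calculation only, with hypothetical names; in particular, the treatment of other funding as a flat daily deduction is an assumption, not VA's actual method.

```python
# Statutory per diem cap cited in the report for 2006 ($29.31 per person per day).
PER_DIEM_CAP_2006 = 29.31

def per_diem_reimbursement(occupied_bed_days, rate, other_funding_per_day=0.0):
    """Reimbursement for days an authorized bed is occupied by an eligible veteran.

    The offset for other funding sources is modeled here as a simple daily
    deduction; that is an assumption for illustration, not VA's formula.
    """
    effective_rate = min(rate, PER_DIEM_CAP_2006) - other_funding_per_day
    effective_rate = max(effective_rate, 0.0)  # the rate cannot go negative
    return round(occupied_bed_days * effective_rate, 2)

# 25 beds fully occupied for a 30-day month at the capped rate:
print(per_diem_reimbursement(25 * 30, 29.31))  # 21982.5
```

Note that because VA generally does not pay for empty beds, `occupied_bed_days` counts only days on which an authorized bed was actually occupied.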
Generally housing is either male only or has separate sleeping areas for males and females. Multipurpose rooms may be available for television, games, and conversation, as well as communal kitchen facilities where meals can be purchased or made by the participants themselves. Not all GPD providers supply food. Some may assist the participants in obtaining items from community food banks. GPD providers may require veterans to pay rent, but the rent cannot exceed 30 percent of a veteran’s income, after deducting the costs of medical, child care, and court-ordered payments. In addition, veterans may be charged fees for other services not supported by the GPD grant, such as cable television. According to VA rules, veterans may stay with a single GPD provider for 24 months or longer under certain conditions. GPD providers may specify shorter limits such as 3, 6, or 12 months. In fiscal year 2005, the average stay for veterans was about 4 months with a single GPD provider. To meet VA’s minimum eligibility requirements for the program, individuals must be veterans and must be homeless. A veteran is defined as an individual who has been discharged or released from active military service and includes members of the Reserves and National Guard with active federal service. Although the GPD program definition excludes individuals who have received a dishonorable discharge, it is less restrictive in terms of length of service requirements. As a result, some homeless veterans may be eligible for the GPD program and not eligible for VA health care. VA does not pay for spouses and children of veterans who are not themselves veterans, but they may be served by GPD providers using other funds. 
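The rent ceiling described above (30 percent of a veteran's income after deducting medical, child care, and court-ordered payments) amounts to the following calculation. The Python sketch is purely illustrative, with hypothetical names, and assumes all figures are monthly amounts.

```python
def max_monthly_rent(monthly_income, medical=0.0, child_care=0.0, court_ordered=0.0):
    """Upper bound on the rent a GPD provider may charge a veteran:
    30 percent of income after the allowed deductions."""
    adjusted_income = max(monthly_income - medical - child_care - court_ordered, 0.0)
    return round(0.30 * adjusted_income, 2)

# A veteran with $1,200 monthly income, $150 in medical costs, and a
# $50 court-ordered payment could be charged at most $300 in rent:
print(max_monthly_rent(1200, medical=150, court_ordered=50))  # 300.0
```

Fees for services not supported by the GPD grant, such as cable television, fall outside this cap and would be charged separately.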
Consistent with the definition used in many other federal programs, VA defines a homeless individual as a person who lacks a fixed, regular, adequate nighttime residence and instead stays at night in a shelter, institution, or public or private place not designed for regular sleeping accommodations. Prison inmates are not deemed homeless, but may be at risk of homelessness and may be eligible for the program upon their release. GPD providers determine if potential participants are homeless, but VA officials determine if potential participants meet the program’s definition of veteran. VA officials are also responsible for determining whether veterans have exceeded their lifetime limit of three stays in a GPD program and for issuing a waiver to that rule when appropriate. Prospective GPD providers may identify additional eligibility requirements in their grant documents. Because the providers are responsible for providing a clean and sober environment that is free of illicit drugs, about two-thirds of providers require that veterans entering the program be sober and free from alcohol and drug use for a given length of time. The time frames set by many providers range from 1 to 30 days of sobriety. Many providers also conduct drug tests of veterans after they enter the program to ensure their continued sobriety. Most providers will not accept veterans considered to be a danger to themselves or others, in need of detoxification, or under the influence of drugs or alcohol. About one-fifth of providers also exclude veterans who are considered seriously mentally ill, because the providers may not be able to provide adequate care. The GPD program is focused primarily on helping those most in need— veterans who might remain homeless for long periods of time if no intervention occurs—and is not intended to serve all homeless veterans. About two-thirds of homeless veterans in the program in fiscal year 2005 had struggled with alcohol, drug, medical, or mental health problems. 
About 40 percent of homeless veterans seen by VA had served during the Vietnam era, and most of the remaining homeless veterans served after that war, including over 2,500 who served in military operations in the Persian Gulf, Afghanistan, and Iraq. Almost all homeless veterans seen by VA are males; about half are between 45 and 54 years old, one-quarter are older, and one-quarter are younger. African-Americans are disproportionately represented, constituting the largest racial group at 47 percent; whites are the next largest group at 45 percent. About 75 percent of veterans are either divorced or never married. The complex problems faced by homeless veterans require a system of comprehensive, integrated services that often involves multiple organizations. Key federal agencies with programs specifically targeted to the homeless, including veterans, are HUD, the Department of Health and Human Services (HHS), and the Department of Labor (DOL). HUD makes funds available to bring together community organizations to plan and coordinate service delivery through local or regional networks designated as the “Continuums of Care.” In their planning role, the Continuums arrange for counts of the homeless in their area, and since 2003, are required to report the number for a given point in time and to do so at least every 2 years. Further, as part of their coordination role, the Continuums review agency applications for certain HUD grants. HUD also funds emergency shelters that are open seasonally or year-round for temporary, overnight accommodations. In addition, HUD is the only federal agency that is authorized to provide permanent subsidized housing for the homeless. HHS specializes in funding health care and researching the needs of homeless with substance abuse and mental health issues. DOL, like VA, has programs targeted specifically to veterans within the homeless population, with DOL’s emphasis on helping veterans obtain employment. 
Charities, businesses, and state and local governments are also involved in meeting the needs of homeless veterans and, in some cases, providing funding to GPD providers. At the federal level, VA works with these and other federal agencies through two key committees. VA’s Advisory Committee on Homeless Veterans is responsible for assessing the needs of homeless veterans and determining if VA and others are meeting these needs. The committee comprises homeless veterans, experts and advocates, community-based service providers, state and federal government officials, and representatives of veterans’ service organizations. The committee has made several recommendations on improvements to homeless veterans’ programs, including the GPD program, some of which have been implemented. In 2004 the committee urged VA to fund GPD providers serving veterans with special needs, especially female veterans; in fiscal year 2005 there were 29 programs of this kind, including 8 for female veterans. VA is also a participant on the Interagency Council on Homelessness, which coordinates the federal response to homelessness and works with state and local governments to develop plans for ending chronic homelessness among individuals, including veterans, in 10 years. Although the chronic homeless represent only 10 to 20 percent of all homeless adults, they take up roughly half of all shelter beds and also use a disproportionate share of resources for the homeless. At the local level, VA works with various agencies through the Community Homelessness Assessment, Local Education and Networking Groups for Veterans, referred to as Project CHALENG. An arrangement of this kind is needed, according to VA, because no single agency can provide the full range of services required to help homeless veterans become more productive members of society. 
Through CHALENG, a designated VA official in each medical center, usually VA’s homeless coordinator, reaches out to community agencies that provide services to the homeless to raise awareness of homeless veterans’ particular needs and to plan to meet those needs. Specific needs to be addressed include outreach, counseling, health care, education and training, employment, and housing. Every year these VA officials prepare estimates of the total number of homeless veterans in their area, based on input from various sources. In addition, the officials meet with community representatives to complete a survey of available resources, additional resources needed, priorities for service, and an action plan. VA estimates that on a given night in fiscal year 2005 about 194,000 veterans were homeless. The estimate, generally lower than the numbers reported prior to 2004, is considered by VA officials to be the best estimate available. VA officials believe that a new methodology and use of local HUD data has improved the estimate, although some homeless veterans may not have been included because they could not be found when the estimate was developed. While VA has increased its capacity to provide transitional housing for homeless veterans in recent years, its program planning efforts indicate that an additional 9,600 transitional housing beds from various sources are needed to meet current demand. VA officials report that they are working to operationalize an additional 2,200 beds for the GPD program. VA bases its national estimate of homeless veterans on the summation of local estimates developed by VA officials for the areas served by VA medical facilities. This process is part of the annual CHALENG planning effort, which involved 135 local VA officials in 2005. Local VA officials are not responsible for conducting their own counts of homeless veterans, but are expected to rely on data from other groups that have collected these data. 
More than 75 percent of VA officials use multiple data sources, in part because the areas covered by VA medical facilities often comprise several cities, counties, or even states, while local data sources may cover one or more of these jurisdictions, but rarely cover the full area served by the medical facility. Most often, local VA officials rely on data collected by the HUD-funded Continuums of Care, local governments, university researchers, or other groups, along with information from local homeless providers. The estimates reported by local VA officials are compared to the previous year’s, and if they have significantly changed, the local VA officials are asked to explain the differences before their estimates are incorporated into the national figure. Prior to 2004, local VA officials used a methodology to develop their estimates that was the equivalent of mixing apples with oranges and, as a result, yielded less consistent, reliable counts of the homeless veteran population. This mixed methodology combined cumulative numbers, such as the total who were homeless over the course of a year, with point-in-time numbers involving the number homeless on any given day or night. The numbers were not comparable because over the course of a year some individuals who were not homeless when the counts were conducted later became homeless. Generally, the number of veterans who are homeless sometime over the course of a year is larger than the number who are homeless on any given night. Since 2004, local VA officials have been directed to use point-in-time data exclusively in developing their estimates to reflect the number of homeless veterans on any given day of the year. VA reports that this standardized method yields more reliable estimates than were developed for earlier years, although there may be some veterans who cannot be located. Figure 3 shows VA’s estimates of the homeless veteran population from fiscal years 2000-2005.
Recent estimates are also likely to be more reliable, according to VA, because local VA officials increasingly use homeless data from counts funded by HUD’s Continuum of Care, which are believed to be more accurate. In 2005, more than twice as many local VA officials used HUD counts as was the case in 2003. HUD-funded counts in many communities are gradually improving as the census takers increasingly seek out the “hidden” homeless who do not contact service providers, as well as the homeless who congregate at soup kitchens and shelters. In both Atlanta and Los Angeles, homeless individuals were hired in 2005 to assist the census takers in locating areas where homeless individuals could be found. As a result, the local counts that were conducted in these two communities were more accurate than the counts conducted in earlier years, according to VA officials. Although VA officials believe that the number is likely an underestimate, they consider their 2005 estimate of 194,000 homeless veterans on any given night to be the best available. Counting the homeless is a challenge for several reasons, as VA and other agencies have acknowledged, since the homeless are hard to locate and some may not be included in the current estimate. Also, the number may change in relation to social and economic factors, such as job layoffs or a tighter housing market. In addition, veterans who are doubled up and sharing crowded living quarters with others are considered at risk of becoming homeless but are not included in the counts because they do not meet VA’s definition of homeless. Since fiscal year 2000, VA has almost quadrupled the number of available beds and the number of admissions of homeless veterans to the GPD program in order to address some of the needs identified through the CHALENG survey. In fiscal year 2005, VA had the capacity to house about 8,000 veterans on any given night.
However, over the course of the year, because some veterans completed the program in a matter of months and others left before completion, VA was able to admit about 16,600 veterans into the program. Figure 4 illustrates the growth in GPD program capacity from fiscal years 2000 through 2005. VA has pursued a policy of making GPD beds available in all states and the District of Columbia, in line with the recommendation made by the VA Advisory Committee on Homeless Veterans. As shown in figure 5, all but three states had beds available in May 2006, and VA officials told us that they were working with potential providers to develop the capacity in these states. The greatest numbers of beds are in California (1,867 beds); Florida and Massachusetts (430 and 378 beds, respectively); and New York, Ohio, and Pennsylvania (274, 261, and 332 beds, respectively). VA's CHALENG report found that about 45,000 transitional housing beds were needed in fiscal year 2005 to help homeless veterans become more socially and economically independent. As shown in table 1, the report identified over 35,000 transitional housing beds that were available through various sources for this purpose—including the GPD beds, another 2,400 beds funded by VA through its other specialized homeless programs, and additional beds funded by other sources. Still needed were about 9,600 more transitional housing beds nationwide beyond the number available to meet the demand in fiscal year 2005. To begin to address the demand, VA officials told us that, as of May 2006, they had negotiated an additional 2,200 beds for the GPD program that are expected to be available in the near future. Although VA reports that the need for transitional housing beds is greater than the capacity, the demand varies throughout the year and by location. Some GPD programs we visited had vacancies and others had waiting lists at the time of our visit.
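The relationship between nightly bed capacity and annual admissions noted above follows from veterans' average length of stay. A rough sketch, in which the bed count is from the report but the stay length and occupancy figures are assumptions:

```python
# Rough sketch: how annual admissions can exceed nightly bed capacity
# when veterans stay less than a full year. Bed count is from the report;
# average stay and occupancy are illustrative assumptions, not VA data.
beds = 8000            # beds available on any given night (fiscal year 2005)
avg_stay_days = 175    # assumed average length of stay, in days
occupancy = 0.99       # assumed average share of beds filled

bed_nights_used = beds * 365 * occupancy
annual_admissions = bed_nights_used / avg_stay_days
print(round(annual_admissions))  # roughly 16,500, near the ~16,600 reported
```

Under these assumptions, each bed turns over about twice a year, which is why annual admissions run at roughly double the nightly capacity.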
GPD providers and VA officials identified several reasons that beds may go unfilled at any given time. Some beds are held for veterans who are receiving medical treatment, while others may be unfilled as a result of the normal transition when one veteran has left the program and another will soon be entering it. VA officials and GPD providers also told us they expect a change in the demographics of homeless veterans that may require them to reconsider the type of housing and services that they are providing with GPD funds. Specifically, VA officials expect to see more homeless women veterans and more veterans with dependents who are in need of transitional housing. GPD providers told us that women veterans have sought transitional housing, that some recent admissions had dependents, and that a few of their beds were occupied by the children of veterans, for whom VA could not provide reimbursement. To meet the needs of homeless women veterans, VA has provided additional funding in the form of special needs grants to a few GPD programs. GPD providers often worked with public and nonprofit agencies to offer a spectrum of services that may help veterans meet individual and GPD program goals. While GPD providers were generally able to build successful partnerships, most of them identified resource gaps, particularly a shortage of affordable permanent housing, that presented challenges to helping veterans. We also found that communication issues related to program policies could prevent veterans from being offered care. Providers did not always understand eligibility requirements, such as which veterans may be eligible for the program and the allowable number and length of program stays. Further, providers were not always aware of policy changes. GPD providers generally created partnerships to help prepare veterans to obtain permanent housing and, ultimately, to live independently.
VA’s grant process encourages such collaboration by awarding points to GPD program applicants that demonstrate they have relationships with other organizations. GPD providers are to identify how they will provide services to meet the program’s goals—residential stability, increased skill level or income, and greater self-determination. For example, providers may identify services such as substance abuse and mental health treatment, financial counseling, employment assistance and training, transportation to appointments and job interviews, and related services. We found variation in the agencies that provided these services. According to a VA survey, most GPD providers used their own on-site staff to offer services like case management and transportation assistance. In contrast, mental health assessments were mostly handled indirectly, with 79 percent of the GPD providers using the staff of other agencies, often the VA. (More information from the survey can be found in app. III.) The GPD providers that we visited established partnerships with state and local government agencies, other federal agencies, and local community organizations. Further, several of the providers that we visited participated in the local Continuum of Care funded by HUD or in other community coalitions, taking advantage of community networks that serve homeless individuals. While most providers offered a range of services, not all veterans received each service. To identify the specific services a veteran may need, providers typically worked with veterans to develop individual treatment plans that identified the veteran’s needs on entering the program. Table 2 lists examples of services and partners of GPD providers we visited. GPD programs often collaborated with VA and others to provide health care-related services—such as mental health and substance abuse treatment, and family and nutritional counseling—to help veterans become more self-sufficient in their day-to-day activities. 
Several programs hosted Alcoholics Anonymous meetings and other counseling services, while some GPD programs expected veterans to attend regular meetings elsewhere in the community. At least two GPD providers we visited provided their own substance abuse treatment and did not rely on community partners for such services. At least two other providers that referred veterans to VA for substance abuse treatment expressed concerns about waiting lists for that service, which made it hard for veterans to access care immediately. Typically, a local VA medical center provided veterans with primary and specialized health care. However, GPD providers sometimes expressed concerns about difficulties obtaining dental care. To meet the needs of veterans who were not eligible for VA health care, GPD providers made other arrangements. For example, a program in the Boston area partnered with the local hospital, which provided free health care to homeless veterans who were in the GPD program but were ineligible for VA health care. We also found that many providers used either their own staff or partners' staff to provide mental health services and family and nutritional counseling services. All providers we visited tried to help veterans obtain financial benefits or employment. Some had staff who assessed a veteran's potential eligibility for public benefits such as food stamps, Supplemental Security Income, or Social Security Disability Insurance. Other providers relied on relationships with local or state officials to provide this assessment. For example, a Wisconsin GPD provider worked with a county veterans' service officer who reviewed veterans' eligibility for state and federal benefits. The provider also had a relationship with a county employment representative who came to the GPD facility to discuss job searches, training, and other employment issues with veterans.
Several providers were receiving DOL grants to provide employment training services, worked with local colleges, or relied on other local programs to help veterans increase their skills. However, a lack of available jobs in an area sometimes made it difficult for veterans to find employment. Most of the GPD providers in the areas that we visited worked with community partners to obtain permanent housing for veterans ready to leave the GPD program, but indicated this was sometimes difficult because of the limited supply of affordable permanent housing. Some providers had established extensive partnerships with organizations that provide or find affordable permanent housing. For instance, several of the providers worked with the local HUD-funded Continuum of Care network to identify permanent housing resources. Some providers had obtained or were applying for HUD funds to build single room occupancy housing units that could serve as a transition to more permanent long-term housing. As at least one provider mentioned, veterans sometimes become resourceful and agree to share apartments. In some instances, providers have asked for an extension to allow veterans to stay until housing becomes available. GPD providers and VA staff coordinated with community resources to help address other issues that might present obstacles to transitioning veterans out of homelessness. For example, staff in some locations indicated that legal issues such as criminal records or credit problems may preclude veterans from obtaining employment and housing. To help overcome these issues, some GPD providers worked with lawyers who provided services at no cost or with other volunteer organizations. Staff in some of the locations also reported that transportation issues made it difficult for veterans to get to medical appointments or employment-related activities. To help address potential transportation difficulties, some providers received GPD grants to purchase vans.
One provider that we visited partnered with the local transit company, which provided subsidies to homeless veterans. This option is not always available, however, and transportation remained an issue in areas not near a medical center. VA has five staff in the national program office who administer the GPD program through a network of 21 regional homeless coordinators and 136 local VA liaisons. While program policies are developed at the national level by the GPD program staff, the local VA liaisons designated by VA medical centers have primary responsibility for communicating with GPD providers in their area. Figure 6 depicts the flow of information about the GPD program. The VA liaisons may serve in a full-time or part-time capacity, depending in part on the number of GPD beds in the area served by the VA medical centers and the number of admissions per year. In fiscal year 2006, there were 60 full-time liaisons and another 76 individuals serving as part-time liaisons in addition to their other VA duties. Liaisons sometimes found it hard to assist providers readily, according to some staff we met, because of the liaisons' large caseloads and multiple GPD responsibilities—including determining eligibility, verifying intake and discharge information, managing cases, overseeing finances, monitoring program compliance, and inspecting GPD facilities, among other duties. To help address this issue, VA has set aside additional funding for more full-time liaisons. The program office communicates with GPD providers and VA liaisons through written guidance and teleconferences. VA provides liaisons with a guidebook about their responsibilities and the program rules as well as a manual prepared by NEPEC on the forms to be completed for all program participants. To stay up-to-date on GPD program policies, liaisons participated in monthly conference calls and also had the opportunity to attend a conference conducted by the GPD program office in 2004.
The program office recently held a training seminar for new liaisons and also offers training via phone. VA also gives GPD providers program handbooks and holds monthly conference calls to discuss program rules. In addition, some of the VA medical centers we visited held meetings with local GPD program providers in their areas to share information. Despite VA's efforts, we found that some providers did not understand all of the GPD program policies. Some misunderstandings could affect a veteran's ability to get—and a GPD provider's ability to offer—care. For instance, two providers said that VA staff told them that veterans eligible to participate in the GPD program were also required to be eligible for VA health care, but this is not the case. Similarly, in another location, the local VA liaison and a provider both told us that they had received information from the GPD program office indicating that the total lifetime length of stay was 2 years, but GPD program officials told us that this interpretation is incorrect. Elsewhere, several providers understood the lifetime limit of three GPD stays but may not have known or believed that waivers to this rule could be granted. They argued that the limit could hinder a veteran's ability to participate in the GPD program if participation involved phased care offered by separate GPD providers, each specializing in certain phases of treatment, such as detoxification or job preparation. Because each phase of treatment is counted as one GPD stay, veterans may exhaust their 3-stay limit before they have received services vital to their improved functioning. Although VA has the authority to waive the 3-stay limit in such cases, these providers did not seem to understand that this option was available to them. In addition, providers were not always aware of changes in the GPD program in a timely fashion, and sometimes not at all.
For example, not all GPD providers knew in 2006 that their program's inspections would include a review of whether they were meeting the objectives described in their GPD grant documents. VA recognizes that communication with providers and liaisons needs to be improved. In its fiscal year 2005 report, the VA Advisory Committee on Homeless Veterans recommended that VA hold an annual conference and that each GPD provider have an opportunity to attend at least one such conference. The purpose of the conference would be to improve communications, program compliance, and treatment strategies. In the spring of 2006, when the committee reconvened, VA had not yet accepted the committee's recommendation. VA data show that in fiscal years 2000 through 2005 a steady or increasing percentage of veterans had stable housing, income, and greater self-determination at the time they left the GPD program. These national performance results are derived from standard forms filled out by VA staff or by provider staff with VA's review and sign-off for every veteran who leaves the program for any reason. While the veterans' success is VA's primary measure of program performance, in 2006 VA took steps to ensure that the performance of individual GPD providers would also be reviewed, in line with a recommendation of VA's Office of Inspector General (OIG). Some GPD providers we visited had stated in their grant documents that a certain percentage of veterans they served would have permanent housing or employment a year after they left the program. Also, VA recently completed a onetime study looking at longer-term outcomes for homeless veterans, including 520 who participated in the GPD program, and preliminary results show that positive housing outcomes were maintained 1 year after veterans left the GPD program.
However, VA does not routinely collect follow-up information to determine the status of participants at specified times after they leave the program and may not be able to rely on the results of its study to determine the success of future program participants. The following sections compare VA's GPD performance data from fiscal year 2005 with data from fiscal years 2000 through 2004. VA reports that about 81 percent of veterans had arranged some form of housing at the time they left the GPD program in fiscal year 2005, a significant improvement over the 56 percent with housing in fiscal year 2000. VA considers the program successful if veterans have obtained either independent or secured housing. Independent housing comprises apartments, rooms, or houses, while secured housing includes transitional housing programs, halfway houses, hospitals, nursing homes, or similar facilities. Most of the improvement in housing outcomes has occurred in independent housing. While independent housing may be a more desirable outcome, for some veterans, including those with severe disabilities, secured housing may be more appropriate. Figure 7 shows the percentages of veterans who had arranged housing when they left the GPD program in fiscal years 2000 through 2005. In its annual reports, VA compares the housing arrangements of veterans who successfully met provider requirements with those who did not. As might be expected, proportionately more veterans who met requirements had obtained independent housing in fiscal year 2005—nearly 70 percent—compared with 40 percent of those who had not met provider requirements. In terms of numbers, about half of the 15,000 veterans who left the program in fiscal year 2005 were considered by the GPD providers to have met program requirements, an improvement over earlier years.
Of the approximately 7,500 veterans remaining, about half dropped out and the other half violated program rules, such as rules on maintaining sobriety, or left for other reasons. VA derives this information from discharge forms completed by VA or GPD staff for all veterans at the time they leave the program. VA's evaluation center, NEPEC, aggregates these data and prepares annual reports on overall GPD program performance. For more on this process, see appendix IV. The program goal of increased income can be achieved through maintaining or obtaining employment or financial benefits such as VA disability compensation or pensions, Supplemental Security Income, or food stamps. From fiscal years 2000 to 2005, about one-third of veterans had jobs, mostly on a full-time basis, when they left the GPD program. The number of veterans with jobs more than tripled over the period, with about 4,900 employed in fiscal year 2005 at the time they left the program. The number of veterans receiving VA benefits when they left the GPD program was about 3,800, while another 2,200 veterans had applied or planned to apply for VA benefits. Table 3 shows the percentages and numbers of those employed or receiving benefits for fiscal years 2000 through 2005; VA did not have data on receipt of benefits until 2003. To track greater self-determination, VA examines such goals as veterans' progress in handling alcohol, drug, mental health, and medical problems and overcoming deficits in social or vocational skills. A greater proportion of veterans leaving the program each year have met these goals, with 57 to 69 percent showing improved functioning in fiscal year 2005, as shown in figure 8. These improvements have occurred while the proportion of veterans who entered the GPD program with a history of such problems remained constant or increased.
Specifically, the proportion entering with substance abuse problems who left the program in fiscal years 2000 through 2005 remained relatively constant, while the proportion of veterans with a history of mental or medical illness more than doubled, according to VA data. See table 4. In addition to assessing the program through the success of its veterans, VA policy calls for all VA liaisons to review the performance of individual GPD providers in meeting objectives that are identified in their grant documents. Providers are required to establish specific measurable objectives for each of the three program goals. To reach the housing goal, for example, some providers we visited established savings objectives, requiring veterans to set aside a portion of any income they receive so that they can accumulate sufficient cash reserves to cover costs of renting a room or apartment when they leave the program. Most providers we visited also set outcome objectives for the percentage of veterans expected to obtain independent housing when they left the program. For the income goal, some providers set objectives requiring that a certain percentage of veterans be offered or enrolled in vocational training, develop résumés, interview for jobs, or apply for entitlement benefits. Most providers also set objectives that a certain percentage of veterans would find work. For the self-determination goal, some providers required that a certain percentage of veterans maintain sobriety or attend weekly Alcoholics or Narcotics Anonymous meetings. In its 2006 examination of the GPD program, VA’s OIG found, however, that many providers had not tracked their performance in achieving these objectives and some VA liaisons had not reviewed the providers’ performance. The OIG recommended that VA liaisons ensure that the providers’ performance be monitored. 
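A savings objective like the one providers described amounts to simple arithmetic; a hedged sketch in which the income, savings rate, and move-in cost figures are hypothetical, not drawn from the report:

```python
# Hypothetical illustration of a provider savings objective: months needed
# for a veteran to accumulate move-in costs by saving a share of income.
monthly_income = 1200.0   # assumed monthly income from work or benefits
savings_rate = 0.30       # assumed share of income set aside each month
move_in_target = 2000.0   # assumed security deposit plus first month's rent

months, saved = 0, 0.0
while saved < move_in_target:
    saved += monthly_income * savings_rate
    months += 1
print(months, saved)  # 6 months, $2,160 saved
```

A provider could tune the required savings rate against local rents so that veterans reach the target within their expected length of stay.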
The GPD program office has since moved to enforce the requirement that VA liaisons review GPD providers’ performance when the VA team comes on-site each year to inspect the GPD facility. The VA liaison will have the flexibility to determine the method for reviewing and recording the providers’ performance, so long as the results are documented. GPD providers who do not meet performance objectives will be required to work with their local VA staff to create a corrective action plan or resubmit their applications with new objectives. VA does not require that veterans be contacted for purposes of program evaluation after they leave the GPD program. With a view to the long-term health of veterans, however, VA attempts to have its clinicians provide GPD participants with a substance abuse or mental health assessment within 2 months of leaving the program. In addition, the forms completed when veterans leave the GPD program identify any follow-up that may have been arranged to help them continue to cope with problems that they have experienced. While follow-up is not required, about 80 percent of GPD providers reported that they conduct some sort of follow-up with veterans after they leave the GPD program. Providers may call veterans who have left, obtain data on those who return for additional support services, or arrange reunions or other gatherings. Some grant documents also indicate that the providers planned to measure their performance, in part by following up with veterans from 3 to 12 months after they left the program. Some providers follow up to meet the requirements of non-VA funding they receive. Several providers we interviewed had DOL grants requiring them to report the employment status of veterans 3 and 6 months after they left the DOL program. These providers were able to report results for the veterans deemed employable who participated in both the GPD and DOL programs. 
However, GPD participants who were deemed unemployable because of their disabilities may not have been included in the DOL program. While many providers attempt to follow up with veterans, several told us that it is sometimes difficult to maintain contact, especially with veterans who lack telephones or reliable mailing addresses and with veterans who have moved away from the area. While VA considers it important for veterans to achieve immediate success on leaving the GPD program, homeless veterans may experience setbacks later on that negatively affect their housing arrangements, employment and financial benefits, and self-determination. Furthermore, veterans who were not immediately successful on leaving the program nevertheless may have benefited from participating and may be able to achieve success at a later time. To explore the long-term outcomes of program participants, VA funded a onetime follow-up study in May 2001 to examine the outcomes for a randomly selected sample of about 1,300 veterans, spread across five geographic locations, who were participating in the GPD program and two other VA-sponsored homeless programs. According to a VA official, the cost of the study was about $1.5 million. Included in the sample were 520 veterans housed with 19 GPD providers. Proportionately more veterans in the GPD programs were chronically homeless, while veterans in one of the other programs had higher levels of serious medical and psychiatric problems and greater impairments. At the time of selection, the veterans had stays of various lengths in these programs. For the study, university and RAND Corporation researchers interviewed veterans to determine their status at 1, 3, 6, and 12 months after they left the programs, with the last interviews conducted in October 2005. About 360 of the former GPD participants responded to the last interviews.
VA officials do not expect to release final results of the study until 2007, but preliminary results show that just over 80 percent of the GPD participants had housing 12 months after they left the program. Other outcomes expected to be included in the report are the number of days that the veterans have either been housed or homeless, their income and employment situation, their use of drugs and alcohol, their physical and mental health status, and their quality of life. Addressing homelessness is a daunting challenge, given the difficulties associated with identifying those who need help and the broad spectrum of services that need to be successfully tailored, coordinated, and delivered in order to enable individuals and even families to secure permanent housing and to live more independently. Limited resources—particularly the availability of affordable permanent housing—make this job even more difficult. Moreover, the physical and emotional conditions, including substance abuse and mental illness, prevalent in the homeless veteran population further increase the difficulty. VA has taken a number of steps to tackle this challenge by enhancing its ability to estimate how many veterans need assistance, increasing the number of GPD beds, instituting measures that help gauge the program's effectiveness, and, through the GPD program, working proactively with local and federal government agencies and nonprofits to provide the assistance needed. However, more could be done to optimize VA's investment, particularly with respect to ensuring that policies and criteria are clearly understood and consistently applied and to assessing longer-term outcomes. In enhancing communications, VA will need to identify effective ways of sharing information with the more than 100 agency liaisons in addition to the 300 local GPD program providers—each with a potentially different means of operating.
In assessing longer-term outcomes, VA will need to weigh the costs, benefits, and feasibility of implementing a variety of analytical approaches. Clearly, these endeavors will not be easy, but they are critical to better equipping VA to help homeless veterans. We recommend that the Secretary of Veterans Affairs take the following two steps to improve and evaluate the GPD program: 1. To aid GPD providers in better understanding GPD policies and procedures, we recommend that VA take steps to ensure that its policies are understood by the staff and providers who are to implement them. For example, VA could make more information, such as issues discussed during conference calls, available in writing or online, hold an annual conference, or provide training that may also include local VA staff. 2. To better understand the circumstances of veterans after they leave the GPD program, we recommend that VA explore feasible and cost-effective ways to obtain such information, where possible using data from GPD providers and other VA sources. For example, VA could review ways to use the data from its own follow-up health assessments and from GPD providers who collect follow-up information on the circumstances of veterans whom they have served. We provided a draft of this report to VA for review and comment. VA agreed with our findings, concurred with our recommendations, and provided information on initiatives it has under way or planned that will address issues raised in our report as well as other challenges the GPD program faces. VA concurred that there is an apparent lack of consistency in GPD program implementation and stressed its commitment to further enhance communications with VA liaisons and GPD providers, including providers whose operations are still in the developmental stage. For example, VA plans to develop a comprehensive GPD implementation plan that will address several operational issues, including training and certification requirements.
In addition, for the first time, VA's Veterans Health Administration plans to host a conference or series of regional conferences for GPD providers and VA liaisons to review program requirements and expectations. VA estimates these conferences will take place in spring 2007. VA also concurred with the need to better understand the circumstances of veterans after they leave the GPD program and stated that it has plans in place to consider optional approaches for long-term study in this area after it completes an analysis of its longitudinal outcome studies of VA's homeless programs. In the interim, VA said it would continue to explore options for using existing data to evaluate program effectiveness. However, the agency disagreed with the statement in our draft report that VA officials attribute the decrease in the estimates of homeless veterans to VA's estimation process and better local data. VA believes that the recent decrease in the estimates is a direct result of its progress in treating these veterans through the GPD program. Several factors may have contributed to the decrease in the estimates of homeless veterans. We did not intend to imply that the decrease was solely attributable to changes in VA's estimation process and better local data, nor did we intend to downplay VA's program successes. We have revised the language in this report accordingly. VA's written comments appear in appendix V. VA also provided technical comments, which have been incorporated into the report as appropriate. We are sending copies of this report to the Secretary of Veterans Affairs. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix VI. The objectives of this report were to review (1) Department of Veterans Affairs (VA) estimates of the total number of homeless veterans and the number of transitional beds available, (2) the extent of collaboration involved in the provision of Homeless Providers Grant and Per Diem (GPD) program and related services, and (3) VA's assessment of GPD program performance. In conducting our review, we focused on the GPD providers that serve the general homeless veteran population rather than those serving veterans with special needs, although we visited some special needs grantees. We interviewed officials at VA headquarters, the GPD program office, the regional Veterans Integrated Service Networks, VA's Northeast Program Evaluation Center (NEPEC), and organizations knowledgeable about homeless veterans' issues, including the National Coalition for Homeless Veterans. To gain an initial understanding of the GPD program in operation, we spoke with staff and toured GPD facilities in Baltimore, Maryland; Denver, Colorado; and Washington, D.C. To develop more in-depth material for this report, we made more extensive visits to 13 GPD providers that fall under the responsibility of VA's medical centers in Boston, Massachusetts; Los Angeles, California; Tampa, Florida; and Tomah and Madison, Wisconsin. We selected these GPD providers to obtain a range of geographic locations, sizes of programs, and proximity to VA medical centers. (See table 5 for a listing of sites we visited and their characteristics.) During our visits, we toured GPD facilities and interviewed GPD providers, medical center staff, community agencies that partner with the GPD providers, and current and former GPD program participants. Additionally, we interviewed staff of 16 other GPD providers in the areas we visited but did not tour their facilities.
We also met with GPD and other service providers at conferences sponsored by the Departments of Labor and Health and Human Services. Throughout our review, we worked with the VA’s Office of Inspector General (OIG) to ensure that we complemented but did not duplicate a review it was conducting on GPD program management. The OIG’s review was designed to determine if records demonstrate that (1) homeless veterans receive appropriate assessment and treatment, (2) GPD provider performance is evaluated and actions are taken to improve conditions, (3) GPD providers achieve their stated goals, (4) VA’s guidelines for the inspection of GPD facilities are followed, (5) GPD operations are properly monitored by VA, and (6) fiscal controls are adequate. Although the OIG’s report was not available at the time we prepared our report, we were briefed on results that were relevant to our work and incorporated the information as appropriate. In addition, we discussed with the OIG’s team our selection of sites to visit and chose sites that were not included in the team’s review. In reviewing VA estimates of the number of homeless veterans, we reviewed the literature, read relevant reports, and interviewed VA officials, particularly those involved in the federally mandated Community Homelessness Assessment, Local Education and Networking Group for Veterans (CHALENG). We interviewed experts in the subject area and officials with the Bureau of the Census and the Department of Housing and Urban Development (HUD). We used information from our site visits to supplement our discussion on how local entities conduct counts of homeless individuals. We did not review the validity of VA’s estimates. To identify GPD program capacity, location, and number of admissions, we analyzed data from a series of annual reports prepared by NEPEC, updated where appropriate by information from the GPD program office in May 2006. 
To assess the overall extent to which GPD providers collaborated with other agencies to offer services to homeless veterans, we analyzed NEPEC survey data. The survey included responses from all GPD providers in 2003, when NEPEC first conducted the survey, and all programs that became operational or were funded in subsequent years through November 2005. For more information on the survey data, see appendix III. We performed basic reasonableness tests on the survey data and contacted NEPEC for any clarifications or discrepancies. We determined these data to be sufficiently reliable for the purposes of this report. To get an understanding of how collaboration was actually occurring at the local level, we conducted site visits. During these visits, we gathered information on the types of services GPD providers offer, how providers partnered with local agencies (including VA) to offer services, and how these partnerships were working. To review how VA coordinates with other federal agencies, we attended a meeting of VA’s Advisory Committee on Homeless Veterans, talked with a representative from the Interagency Council on Homelessness, and contacted other prominent federal partners. To identify how VA assesses the performance of the GPD program, we reviewed GPD program goals, interviewed VA officials, including a team from the OIG, and analyzed data obtained from VA’s national program office and NEPEC. We reviewed the Grant and Per Diem Program Evaluation Procedures Manual, which NEPEC sends to each VA liaison and which describes the responsibilities of liaisons and GPD providers in completing, reviewing, and submitting intake and discharge forms on individual participants. We extracted data on outcomes from tables included in NEPEC’s series of annual reports on the program and discussed the reliability of these data with NEPEC officials. This information is briefly summarized in appendix IV along with relevant findings from the OIG’s review. 
We did not independently verify the NEPEC data. We reviewed how VA collects and analyzes outcome data and found these data to be sufficiently reliable for our purposes. Additionally, we reviewed grant documents for the sites we visited to identify the specific objectives they set to meet program goals and asked VA officials and providers about various aspects of performance measurement during our site visits. We did not conduct our own review of outcomes for homeless veterans served by the GPD providers we visited. At the time we conducted our analysis, VA’s follow-up study had not been released; therefore, our discussion of the study is based on our review of preliminary results that identified the numbers and characteristics of the participants, the timetable and roles of the universities and researchers involved, and the housing outcomes at the end of the year. Conducted from 2001 through 2005, the study followed a total of 1,294 participants, with approximately 260 participants from each of five medical center areas serving California, the District of Columbia, Florida, Maryland, Ohio, Pennsylvania, and West Virginia. Veterans were randomly selected from lists of active participants that included recent admissions as well as participants with longer stays in the program. Participants were drawn from programs operated by 6 domiciliary care providers, 16 contracted residential treatment providers, and 19 GPD providers. For the interviews conducted a year after participants left the program, the study had an overall response rate of 72 percent across the three transitional housing programs and a rate of 69 percent among GPD participants. Of the 520 GPD participants studied, 359 were interviewed a year after leaving the program. 
Of those interviewed, 60 percent were in their own independent housing, 23 percent were sharing with friends or family, and 15 percent were in temporary housing, including shelters or in an institution other than a jail. We conducted our work between August 2005 and July 2006 in accordance with generally accepted government auditing standards. Under the HCHV umbrella program, VA provides outreach, health and mental health assessments, treatment, and referrals for homeless veterans with mental health and substance abuse problems. Veterans with limited length of service or with other than a dishonorable discharge are eligible for the HCHV program but may not necessarily be eligible for VA health care, where the criteria are more restrictive. A veteran needing transitional housing while undergoing treatment may be placed in one of the approximately 300 contracted residential treatment beds that are funded from the budgets of individual medical centers. In fiscal year 2005, there were about 1,700 admissions for an average stay of 2 months at $36 per day; the recommended maximum stay is 6 months. Where contracted residential treatment is not available, veterans in need of transitional housing may be referred to the more widely available GPD program or domiciliary care. In fiscal year 2005, VA’s HCHV program provided outreach, treatment, and referral services to about 61,000 homeless veterans, with obligations of about $40 million. This transitional housing program is designed for homeless veterans who do not need hospital or nursing home services while their clinical status is being stabilized. In this program, veterans receive various services, including medical and mental health evaluations, treatment, and community support. Domiciliary programs are generally located on the grounds of VA medical centers, and unlike the GPD programs, they are usually managed and staffed by the local VA medical center. 
In fiscal year 2005, about 5,000 homeless veterans stayed an average of 4 months in this program. About 1,800 beds were available exclusively for homeless veterans, with obligations of about $58 million. Additional funding was awarded in 2005 to increase the number of beds available to about 2,200 in fiscal year 2007, bringing total obligations up to a projected $73 million. This work therapy program provides veterans with job skills and income. Through the program, veterans produce items for sale or provide services such as temporary staffing to a company. While participating in this program, veterans may receive individual or group therapy and follow-up medical care on an outpatient basis. At some locations, program participants can stay in one of the approximately 500 beds available in transitional, community-based group homes. Veterans participating in this program are required to use a portion of their income from the work program to pay for rent, utilities, and food. Obligations for this program in fiscal year 2005 were about $10 million. This transitional housing program provides guaranteed loans to nonprofit organizations to construct or rehabilitate multifamily transitional housing for homeless veterans, including single room occupancy units. Supportive services and counseling, including job counseling, must be provided with the goal of encouraging self-determination among participating veterans. Veterans must maintain sobriety, seek and maintain employment, and pay a fee in order to live in these transitional units. Not more than 15 loans with an aggregate total of $100 million may be guaranteed under this program. In fiscal year 2005, the Vietnam Veterans of San Diego housing project was under construction. Other programs have been conditionally selected and are expected to be approved in fiscal years 2006 and 2007. For information on the challenges encountered in implementing this initiative, see Related GAO Products for GAO’s report on this program. 
This permanent, subsidized housing program provides HUD rental assistance (Section 8) vouchers for use by homeless veterans with chronic mental health or substance abuse disorders. Veterans are required to pay a portion of their income for rent; those without income receive fully subsidized housing. In general, veterans who do not exceed the maximum allowable income can remain in the housing permanently but must agree to intensive case management services from VA staff and make a long-term commitment to treatment and rehabilitation. Local housing authorities control access to the vouchers. Many of the 1,780 vouchers allocated by HUD remain in use, but no new vouchers have been made available. As a result, in fiscal year 2005, only 142 veterans were admitted to the program. VA’s obligations in support of this program in fiscal year 2005 were about $3 million. According to VA, 20 of its 57 regional offices have designated full-time homeless veterans coordinators who work with HCHV and other VA staff to conduct joint outreach, provide counseling, and offer other services to homeless veterans, such as helping them apply for veterans benefits. In the remaining regions, staff may be assigned collateral responsibility to work with homeless veterans. One of the goals of this program is to expedite the processing of benefit claims made by homeless veterans. According to VA, it received approximately 4,400 claims from homeless veterans in fiscal year 2005. Of these claims, 56 percent were for disability compensation and 44 percent were for pensions. Of the compensation claims, 26 percent were granted, 33 percent were denied, and 41 percent were pending an average of about 4 months. Of the pension claims, 62 percent were granted, 18 percent were denied, and 21 percent were pending an average of about 3 months. 
VA properties that are obtained through foreclosures on VA-insured mortgages are available for sale at below fair market value to nonprofit and public agencies that use the properties to shelter or house homeless veterans. Since the inception of this program, more than 200 properties have been sold or leased. Under this demonstration program, the Department of Labor (DOL) funds community agencies to provide training and support services, and VA contributes its own services, to help veterans who are incarcerated and at risk of homelessness make a successful transition back into the workforce. According to DOL, services provided include career counseling, employment training, job-search and job-placement assistance, life-skills development, and follow-up. Local staff from both VA’s Veterans Health Administration and Veterans Benefits Administration provide information about available VA benefits and services. Grantees must report the number of veterans who are still employed 6 months after job placement, whether they are in the same or similar jobs, and the reasons why veterans who were placed are no longer employed. DOL provided $2 million to seven community agencies in 2006 for this purpose. We analyzed NEPEC’s Facility Survey data to identify the types of services that programs provide and how they are provided. NEPEC conducted the survey to capture information on the types of GPD programs funded. According to NEPEC officials, the survey was used to capture information such as program location, admissions criteria, services available, and licensing. Because the survey was not intended to be used as a tool to review how programs were performing, NEPEC does not conduct rigorous internal reviews of the data collected. We conducted basic reasonableness tests and contacted NEPEC for any clarifications or discrepancies. We found the survey data sufficiently reliable for the purposes of this report. The survey was first deployed in 2003 to all agencies that were receiving funding that year. 
In subsequent years, NEPEC had newly funded agencies complete this onetime survey. A total of 281 transitional housing facilities were included in the survey data we analyzed—148 of the facilities were surveyed in 2003, 94 in 2004, and 39 in 2005. According to NEPEC, this represents all operational programs as of November 2005. While there were about 300 agencies with GPD grants, some of the agencies have multiple grants for one facility, resulting in one survey being completed for that facility. The surveys were completed by the VA liaisons in consultation with GPD provider staff. NEPEC officials were confident that they had achieved a 100 percent response rate. While we did not independently verify the response rate for the survey, we concluded that it was at least 90 percent. Table 6 shows the percentage of facilities that reportedly provide the selected services and how the services were provided. Survey respondents were asked to identify how, if at all, services were provided and were directed to choose only one method. It may be the case, however, that as in some locations we visited, services were provided by more than one method. As can be seen, the majority of GPD programs provided a spectrum of services for veterans. However, these programs varied in how services were provided, with some services more likely to be provided through partnerships and others more likely to be provided in-house directly by staff. Services more likely to be provided through partnerships included those requiring counseling or medical-related treatment, while services provided directly by GPD providers tended to involve case management activities. Outcomes are reported on a standard Northeast Program Evaluation Center discharge form that must be filled out by VA staff or by GPD staff with VA’s review and sign-off when the participant leaves the program. 
The form also captures information on the length and cost of stay in the GPD program, reasons the participant left the program, and any plans for follow-up treatment for substance abuse or other problems. NEPEC officials told us that they do not verify the data submitted to them, but they do perform tests for completeness and internal consistency. VA’s Office of Inspector General (OIG) found that not all outcomes shown on the discharge forms were supported by additional information in the sample of case records that the OIG reviewed. Specifically, about 76 percent of the records reviewed included information supporting the veterans’ outcomes indicated on the form, but about 24 percent lacked such support. Outcomes for housing and income are shown as a percentage of all participants who left the program for any reason. However, outcomes for self-determination in terms of improved functioning are shown as a percentage of those veterans who had an identified problem when they entered the program. The determination that a participant has or has not improved may be considered somewhat subjective. The problems are described by participants themselves to VA staff in response to a series of questions on a standard NEPEC intake form that also includes a section for the VA clinical staff to record their observations of the substance abuse or mental health problems that the participants face. The intake form also captures other characteristics of the participants, such as their military, financial, and living circumstances. VA staff are expected to complete these forms when they first contact homeless veterans, but no later than the veterans’ third day with a GPD provider, and to forward the forms to NEPEC. NEPEC reports that it does not receive intake forms for about 10 percent of participants in the GPD program each year. Shelia Drake, Assistant Director; Patricia L. Elston; David Forgosh; and Nyree M. Ryder made significant contributions to this report. 
In addition, Roger Thomas provided legal assistance; Walter Vance and Lynn Milan analyzed and assessed the reliability of data; Lily Chin, Jonathan McMurray, and Charles Willson assisted in report development; and Amy Sheller supported the team during its Los Angeles site visit. Homeless Veterans: Job Retention Goal Under Development for DOL’s Homeless Veterans’ Reintegration Program. GAO-05-654T. Washington, D.C.: May 4, 2005. Veterans Affairs Homeless Programs: Implementation of the Transitional Housing Loan Guarantee Program. GAO-05-311R. Washington, D.C.: March 16, 2005. VA Health Care: VA Should Expedite the Implementation of Recommendations Needed to Improve Post-Traumatic Stress Disorder Services. GAO-05-287. Washington, D.C.: February 14, 2005. Decennial Census: Methods for Collecting and Reporting Data on the Homeless and Others without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003. Homelessness: Improving Program Coordination and Client Access to Programs. GAO-02-485T. Washington, D.C.: March 6, 2002. Homeless Veterans: VA Expands Partnerships, but Effectiveness of Homeless Programs Is Unclear. GAO/T-HEHS-99-150. Washington, D.C.: June 24, 1999. Homeless Veterans: VA Expands Partnerships, but Homeless Program Effectiveness Is Unclear. GAO/HEHS-99-53. Washington, D.C.: April 1, 1999. Homelessness: Overview of Current Issues and GAO Studies. GAO/T-RCED-99-125. Washington, D.C.: March 23, 1999. Homelessness: Demand for Services to Homeless Veterans Exceeds VA Program Capacity. GAO/HEHS-94-98. Washington, D.C.: February 23, 1994.
About one-third of the nation's adult homeless population are veterans, according to the Department of Veterans Affairs (VA). Many of these veterans have experienced substance abuse, mental illness, or both. The VA's Homeless Providers Grant and Per Diem (GPD) program, which is up for reauthorization, provides transitional housing to help veterans prepare for permanent housing. As requested, GAO reviewed (1) VA homeless veterans estimates and the number of transitional housing beds, (2) the extent of collaboration involved in the provision of GPD and related services, and (3) VA's assessment of GPD program performance. GAO analyzed VA data and methods used for the homeless estimates and performance assessment, and visited selected GPD providers in four states to observe the extent of collaboration. VA estimates that on a given night about 194,000 veterans were homeless in 2005. The estimate, generally lower than the numbers reported prior to 2004, is considered by VA officials to be the best available. VA officials believe that its new estimation process and use of better local data have improved the estimate. While VA has increased the capacity of the GPD program over the past several years, VA reports that an additional 9,600 transitional housing beds from various sources are needed to meet current demand. VA has plans to make 2,200 additional GPD beds available. GPD providers collaborate with other agencies to help veterans regain their health and obtain housing, jobs, and various services to enable them to live independently. However, resource and communications gaps may stand in the way of VA and provider efforts to meet these goals. Limited availability of affordable permanent housing, for example, may make it difficult to move veterans out of homelessness, according to GPD providers. 
GAO also identified instances of misunderstandings of program policies related to eligibility and program stay limits that could prevent homeless veterans from being admitted into the GPD program. VA assesses overall program performance by the success of veterans in attaining stable housing, income, and self-determination at the time they leave the program. VA data show that the percentage of veterans achieving these goals has generally increased or held steady over time. In 2006, VA also stepped up its assessment of the performance of GPD providers. While these assessments do not indicate how veterans fare after they leave the program, preliminary results of a onetime VA study indicate that positive housing outcomes were maintained 1 year later. However, VA does not routinely collect follow-up data and may not be able to determine how veterans who were not included in the study are faring after they leave the program.